What tokenizer do the Titan Text models use?

0

I would like to calculate the number of tokens before sending a prompt to the Titan Text LLM service. What tokenizer do the Titan Text models use, and how can I run it locally?

Ben
asked 3 months ago · 756 views
2 answers
0

I am not finding it published in the documentation, but if you ask Amazon Q in the AWS console, you get responses referencing WordPiece and SentencePiece tokenizers.
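
If an estimate is good enough, one option is to run publicly available WordPiece and SentencePiece tokenizers locally and compare. The sketch below uses Hugging Face tokenizers (bert-base-uncased for WordPiece, xlm-roberta-base for SentencePiece) purely as stand-ins; neither is the actual Titan Text tokenizer, so treat the counts as rough approximations only.

```python
# Rough token-count estimate using public WordPiece and SentencePiece tokenizers.
# NOTE: neither is the actual (unpublished) Titan Text tokenizer; the numbers
# are approximations, not what Bedrock will bill.
from transformers import AutoTokenizer

prompt = "Explain the difference between WordPiece and SentencePiece tokenizers."

for name in ("bert-base-uncased", "xlm-roberta-base"):  # WordPiece, SentencePiece
    tok = AutoTokenizer.from_pretrained(name)
    count = len(tok.encode(prompt, add_special_tokens=False))
    print(f"{name}: ~{count} tokens")
```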

AWS
EXPERT
iBehr
answered 3 months ago
  • Thanks for searching. I haven't accepted the answer yet, as I would ideally like to know the exact tokenizer. However, WordPiece and SentencePiece might work as estimates.

0

To calculate the number of tokens in a prompt before sending it to the Titan Text large language model (LLM) service, you would typically need the same tokenizer, or a very similar one. If the exact tokenizer used by Titan Text were publicly available, you could run it locally by installing the necessary library and using it to tokenize your text, as sketched below.
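
As a hypothetical sketch of that pattern, assuming Amazon were to publish the tokenizer as a downloadable artifact (the model identifier below is a placeholder, not a real one):

```python
# Hypothetical sketch: count tokens locally with the exact tokenizer,
# if/when Amazon publishes it. The identifier below is a placeholder.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("amazon/titan-text-tokenizer")  # placeholder ID

def count_tokens(text: str) -> int:
    """Return the number of tokens the loaded tokenizer produces for `text`."""
    return len(tokenizer.encode(text, add_special_tokens=False))

print(count_tokens("How many tokens will this prompt cost?"))
```

Until the exact tokenizer is published, counts from any locally run tokenizer can only approximate what the service actually charges for.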

EXPERT
answered 3 months ago
  • Yes, but you've just added extra context to the question and not really answered it, I'm afraid. I'm looking for the exact tokenizer. Since use of the LLMs is charged partly per input token, I would expect to be able to calculate this before sending the prompt.
