Missing tokens
There are a few token ids which do not have corresponding values in the tokenizer config — for example, token ids 3, 4, and 5. What should the value of those tokens be?
The model vocab size is 64000, but the tokenizer only has 63992 tokens (`len(tokenizer.vocab)`). Should the missing values simply be ignored?
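For reference, the gap can be located by comparing the model's vocab size against the ids the tokenizer actually maps. A minimal sketch, using a toy stand-in dict for `tokenizer.get_vocab()` (the token names and sizes here are illustrative, not the model's real vocabulary):

```python
# Toy stand-in for tokenizer.get_vocab(): token -> id.
# In practice this would come from the actual tokenizer.
vocab = {"<unk>": 0, "<s>": 1, "</s>": 2, "hello": 6, "world": 7}
model_vocab_size = 8  # stand-in for the model's embedding size

# Ids in [0, model_vocab_size) that no token maps to.
used_ids = set(vocab.values())
missing_ids = sorted(set(range(model_vocab_size)) - used_ids)
print(missing_ids)  # → [3, 4, 5]
```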
Hi 👋, for the first question you can use transformers directly to encode; for the second question, the remaining 8 tokens are our special tokens, similar to <|im_start|>.
Hi awni,
These two are actually one question. The 'slow tokenizer' currently uses SentencePiece, which involves some control IDs. These IDs cannot be encoded or decoded. Therefore, to keep the encoding and decoding results consistent with the slow tokenizer, we removed those control IDs from tokenizer.json.
I hope this answers your question :-)
That answers my question, thanks! But I wonder, what happens if the model outputs one of the missing IDs? Wouldn't that currently break, since the missing ID has no value?
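One defensive option, if the model can ever sample one of those ids, is to drop unmapped ids during decoding rather than letting them raise. A minimal sketch with a toy id-to-token table (the mapping and the `safe_decode` helper are illustrative, not part of any library's API):

```python
# Toy reverse vocab: id -> token. Ids 3, 4, 5 are intentionally absent,
# mirroring the control ids removed from tokenizer.json.
id_to_token = {0: "<unk>", 1: "<s>", 2: "</s>", 6: "hello", 7: "world"}

def safe_decode(ids):
    """Decode a list of ids, silently skipping ids with no token."""
    return " ".join(id_to_token[i] for i in ids if i in id_to_token)

print(safe_decode([6, 3, 7]))  # → "hello world" (the missing id 3 is dropped)
```

Whether the real tokenizer raises, skips, or emits an empty string for such an id depends on its implementation, so it is worth testing against the actual model.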