pad_token is not defined
Hi, for some reason the tokenizer's pad_token and pad_token_id are not defined (i.e. they are None). See the code below.
In [1]: from transformers import AutoModelForCausalLM, AutoTokenizer
...: model_name = "croissantllm/CroissantLLMBase"
...: tokenizer = AutoTokenizer.from_pretrained(model_name)
In [5]: tokenizer.pad_token_id is None
Out[5]: True
In [6]: tokenizer.pad_token is None
Out[6]: True
# super strange because it IS defined here
In [7]: tokenizer.added_tokens_decoder[3]
Out[7]: AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True)
Quick fix for users:
tokenizer.pad_token = "<pad>"
tokenizer.pad_token_id = 3
Best regards,
Paul
Fix for the Croissant team:
add "pad_token": "<pad>"
in tokenizer_config.json
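For reference, the suggested addition to tokenizer_config.json would look something like this (a sketch showing only the new entry; the file's other keys are omitted):

```json
{
  "pad_token": "<pad>"
}
```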
Hello, thanks for the interest! It's actually pretty standard not to have a pad token in the pre-trained model, since it's trained without one (like Llama). You will notice the Chat version has some extra tokens for such uses. As is, you can just use the EOS token as the pad token, which is what the transformers library (and the chat version) does by default! If you finetune your own version, feel free to modify the tokenizer (and note that 100 extra tokens exist at the beginning of the tokenizer, meant to be overwritten for such things!)
Cheers,
Manu
Thanks for the quick answer!
Actually, I found out about this because I got an exception when calling the tokenizer, but anyway...
>>> tokenizer(["foo", "bar baz"], padding="longest")
Using pad_token, but it is not set yet.
Traceback (most recent call last):
File "/home/paul/anaconda3/envs/matos/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-3-a2e66f86e470>", line 1, in <module>
tokenizer(["foo", "bar baz"], padding="longest")
File "/home/paul/anaconda3/envs/matos/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2602, in __call__
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
File "/home/paul/anaconda3/envs/matos/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2688, in _call_one
return self.batch_encode_plus(
File "/home/paul/anaconda3/envs/matos/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2870, in batch_encode_plus
padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
File "/home/paul/anaconda3/envs/matos/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2507, in _get_padding_truncation_strategies
raise ValueError(
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
Yeah, model.generate handles it at runtime, but to tokenize directly you can just override tokenizer.pad_token with the EOS token!
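To make the workaround concrete without needing the model files, here is a minimal sketch of what padding="longest" does once a pad id is available: every sequence is extended to the length of the longest one with the pad id (here, the EOS id), and the attention mask marks real tokens with 1. The token ids below are made up for illustration; CroissantLLM's actual EOS id may differ.

```python
def pad_longest(batch_ids, pad_token_id):
    """Pad a batch of token-id lists to the longest sequence,
    mimicking tokenizer(..., padding="longest")."""
    max_len = max(len(ids) for ids in batch_ids)
    input_ids, attention_mask = [], []
    for ids in batch_ids:
        n_pad = max_len - len(ids)
        # Real tokens keep mask 1; padding positions get mask 0.
        input_ids.append(ids + [pad_token_id] * n_pad)
        attention_mask.append([1] * len(ids) + [0] * n_pad)
    return {"input_ids": input_ids, "attention_mask": attention_mask}

# Reusing the EOS id as the pad id, as suggested above (id 2 is hypothetical):
eos_id = 2
batch = pad_longest([[5], [7, 8]], pad_token_id=eos_id)
```

With a real tokenizer, setting tokenizer.pad_token = tokenizer.eos_token before calling it with padding="longest" has the same effect.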