loubnabnl (HF staff) committed
Commit d63779a
1 parent: cd86b97

Switch from PreTrainedTokenizerFast to GPT2TokenizerFast and add eos_token & bos_token

`PreTrainedTokenizerFast` returns `token_type_ids` by default, and santacoder was not trained on them, so calling `model(**tokenizer(text))` can produce unexpected behavior in some cases. We'll use `GPT2TokenizerFast` instead.
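Until the tokenizer class is switched, a caller-side workaround is to drop the extra key before forwarding the encoding to the model. A minimal sketch — `filter_inputs` is a hypothetical helper, and the plain dict stands in for a real `BatchEncoding`:

```python
def filter_inputs(encoding, allowed=("input_ids", "attention_mask")):
    """Keep only the inputs the model expects, dropping keys such as
    token_type_ids that the model was not trained on."""
    return {k: v for k, v in encoding.items() if k in allowed}

# Fake encoding standing in for the output of tokenizer(text):
enc = {"input_ids": [1, 2, 3], "attention_mask": [1, 1, 1], "token_type_ids": [0, 0, 0]}
clean = filter_inputs(enc)
print(clean)  # token_type_ids is gone; safe to pass as model(**clean)
```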

Files changed (1): tokenizer_config.json (+4 −2)
tokenizer_config.json CHANGED

@@ -1,5 +1,7 @@
 {
   "errors": "replace",
-  "tokenizer_class": "PreTrainedTokenizerFast",
+  "tokenizer_class": "GPT2TokenizerFast",
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
   "model_max_length": 2048
-}
+}
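For reference, after this commit the full tokenizer_config.json reads:

```json
{
  "errors": "replace",
  "tokenizer_class": "GPT2TokenizerFast",
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "model_max_length": 2048
}
```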