Hello, I see `"vocab_size": 64001` in the config, but when I use llama.cpp it throws an error:
#2 by caodixy1983
```
Exception: Vocab size mismatch (model has 64001, but ../baichuan/baichuan-7B-chat/tokenizer.model has 64000). Most likely you are missing added_tokens.json (should be in ../baichuan/baichuan-7B-chat)
```
Is added_tokens.json missing from this repo?
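For anyone hitting the same mismatch, here is a minimal sketch of a possible workaround, assuming the extra id (64000) corresponds to a token the tokenizer added on top of the base SentencePiece vocab. It reads the added tokens back from the Hugging Face tokenizer and writes the added_tokens.json file that the llama.cpp convert script is looking for. This is an assumption-laden sketch, not a confirmed fix:

```python
# Sketch: recover the added token(s) from the HF tokenizer and write the
# added_tokens.json that llama.cpp's convert script expects next to
# tokenizer.model. Assumes the repo loads with trust_remote_code=True.
import json
from transformers import AutoTokenizer

model_dir = "../baichuan/baichuan-7B-chat"
tok = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)

# Tokens added on top of the base SentencePiece vocab (expected ids >= 64000).
added = tok.get_added_vocab()
print(added)

with open(f"{model_dir}/added_tokens.json", "w", encoding="utf-8") as f:
    json.dump(added, f, ensure_ascii=False, indent=2)
```

If the printed dict is empty, the extra row in the embedding may not be a real token, and the mismatch would need a different fix.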