Tokenizer behaving differently than Meta's original.
#5
by viniciusarruda - opened
I'm having an issue while decoding/encoding.
This is also related to the chat completion format already mentioned previously in other discussions.
You can see the issue in detail and also replicate it here. I'm comparing Meta's original tokenizer with this model using llama-cpp-python.
In summary, the tokens 518 and 29961 are being decoded/encoded differently.
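For reference, here is a minimal sketch of the kind of comparison I'm doing (the tokenizer.model path and the model file name are placeholders, not the exact files from the linked repro):

```python
from llama_cpp import Llama
from sentencepiece import SentencePieceProcessor

IDS = [518, 29961]  # the two token IDs in question

# Meta's original SentencePiece tokenizer
sp = SentencePieceProcessor(model_file="tokenizer.model")
print("sentencepiece pieces:", [sp.id_to_piece(i) for i in IDS])
print("sentencepiece decode:", [sp.decode([i]) for i in IDS])

# The same IDs through llama-cpp-python (vocab_only skips loading the weights)
llm = Llama(model_path="llama-2-7b-chat.q4_0.bin", vocab_only=True)
print("llama.cpp detokenize:", [llm.detokenize([i]) for i in IDS])

# Encoding side: compare how "[INST]" is tokenized by both
print("sentencepiece encode:", sp.encode("[INST]"))
print("llama.cpp tokenize:  ", llm.tokenize(b"[INST]", add_bos=False))
```

The outputs of the two sides should match token for token; in my case they don't for these two IDs.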
As I think we discussed on my Discord, there's nothing I can do about this, as I used the correct tokenizer.model and that is the output that was produced. Have you discussed it on the llama.cpp GitHub?
Yes, I'm trying to take this conversation to the llama.cpp repo. Thank you very much!
viniciusarruda changed discussion status to closed
viniciusarruda changed discussion status to open