Chat template problem.

#23
by mylesgoose - opened

Hello. There is a slight problem with your chat template. If you train a model with the current chat template, the model starts to output <|eot_id|> as its first token, and naturally the generation script then halts. The model learns to see this:

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

<|image|>If I had to write a haiku for this one, it would be: <|eot_id|><|start_header_id|>assistant<|end_header_id|>

Here is a haiku for the image:

Rabbit in a coat
Dapper and dignified
Country cottage charm<|eot_id|>

and so the model learns to produce this as its first output:
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
which naturally messes up the training. Can you please put a newline character after the <|eot_id|>, or before the <|start_header_id|>, in the chat template, so that the format looks like this:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

<|image|>If I had to write a haiku for this one, it would be: <|eot_id|>

<|start_header_id|>assistant<|end_header_id|>

This results in a clearer distinction between the end of the user message and the start of the model's.
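
You can see the boundary for yourself with a minimal sketch like this (assuming you have access to the model repo; it only renders the template as text, no weights needed):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")
messages = [{"role": "user", "content": "If I had to write a haiku for this one, it would be:"}]
rendered = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
# repr() makes it obvious whether <|eot_id|> abuts <|start_header_id|> or a newline separates them
print(repr(rendered))
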
<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>

Today Date: 26 Sep 2024

You are a helpful language and vision assistant. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language.<|eot_id|>
<|start_header_id|>user<|end_header_id|>

If I had to write a haiku for this one, it would be:<|eot_id|>
(Notice that the template now moves to a new line here, which forms a clear distinction for the model. If you train a model with your current prompt it just outputs [ ].)

<|start_header_id|>assistant<|end_header_id|>

['A rabbit on a sunny day']
This is an example of the 3.1 model's chat template. I have not examined yours in detail yet; however, I have examined its output above. To prevent the clever model from learning that <|eot_id|> comes first, there needs to be a clearer distinction made with a \n:

  "chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n    {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n    {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n    {%- set date_string = \"26 Sep 2024\" %}\n{%- endif %}\n{%- if not tools is defined %}\n    {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content']|trim %}\n    {%- set messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- System message + builtin tools #}\n{{- \"\n<|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n{%- if builtin_tools is defined or tools is not none %}\n    {{- \"Environment: ipython\\n\" }}\n{%- endif %}\n{%- if builtin_tools is defined %}\n    {{- \"Tools: \" + builtin_tools | reject('equalto', 'code_interpreter') | join(\", \") + \"\\n\\n\"}}\n{%- endif %}\n{{- \"\\n\" }}\n{{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n{%- if tools is not none and not tools_in_user_message %}\n    {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n{%- endif %}\n{{- system_message }}\n{{- \"<|eot_id|>\" }}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n    {#- Extract the first user message so we can plug it in here #}\n    {%- if messages | length != 0 %}\n        {%- set first_user_message = messages[0]['content']|trim %}\n        {%- set messages = messages[1:] %}\n    {%- else %}\n        {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n    {{- '\\n<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n    {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n    {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' 
}}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n    {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n    {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n        {{- '\n<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>\n' }}\n    {%- elif 'tool_calls' in message %}\n        {%- if not message.tool_calls|length == 1 %}\n            {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n        {%- endif %}\n        {%- set tool_call = message.tool_calls[0].function %}\n        {%- if builtin_tools is defined and tool_call.name in builtin_tools %}\n            {{- '\n<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n            {{- \"<|python_tag|>\" + tool_call.name + \".call(\" }}\n            {%- for arg_name, arg_val in tool_call.arguments | items %}\n                {{- arg_name + '=\"' + arg_val + '\"' }}\n                {%- if not loop.last %}\n                    {{- \", \" }}\n                {%- endif %}\n                {%- endfor %}\n            {{- \")\" }}\n        {%- else  %}\n            {{- '\n<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n            {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n            {{- '\"parameters\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- \"}\" }}\n        {%- endif %}\n        {%- if builtin_tools is defined %}\n            {#- This means we're in ipython mode #}\n            {{- \"<|eom_id|>\" }}\n        {%- else %}\n            {{- \"<|eot_id|>\" }}\n        {%- endif %}\n    {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n        {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n        {%- if message.content is mapping or message.content is iterable %}\n            {{- message.content | tojson }}\n        {%- else %}\n            {{- message.content }}\n        {%- endif %}\n        {{- \"<|eot_id|>\" }}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '\n<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n",

Also, your chat_template.json differs from the chat template defined in the tokenizer.

Meta Llama org

> Also, your chat_template.json differs from the chat template defined in the tokenizer.

You are right, there were some last-minute changes that I forgot to sync to the JSON file. We'll fix!

Regarding the template itself, the intention was to make it like the 3.1 template, except for the changes in this release: image support (of course), no system message in VLM mode, and other minor things. The VLM version of the template was taken from Meta's code; we'll double-check everything to verify the version that was used to train the model.

Meta Llama org

@mylesgoose I have submitted https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct/discussions/35 to sync the processor's template to use the same version as the tokenizer. Thanks again for noticing this.

Regarding your original question, I have verified that Meta's reference code tokenizes in exactly the same way you described here (no newline character after <|eot_id|>). I would suggest you open a discussion in https://github.com/meta-llama/llama-stack and enquire whether this is the same format used during training.

Nevertheless, copying @vontimitta and @Hamid-Nazeri in case they can confirm.

I think the official prompt guide also did not have a newline character after <|eot_id|>, and the tokenizer can recognize <|eot_id|> without a newline; otherwise an extra newline token would be added.
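
A quick sketch shows that extra token (assuming the 3.2 tokenizer; the exact token ids do not matter):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")
without_nl = tok.encode("<|eot_id|><|start_header_id|>", add_special_tokens=False)
with_nl = tok.encode("<|eot_id|>\n<|start_header_id|>", add_special_tokens=False)
# the newline variant is exactly one token longer: the "\n" between the special tokens
print(len(without_nl), len(with_nl))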

If you are running inference, the model can seem to understand this format:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

<|image|>Describe this image in two sentences<|eot_id|><|start_header_id|>assistant<|end_header_id|>

However, if during training the chat template is used to build the training examples from JSON data, the model runs fine during training, but at inference time it outputs this as its first message:
<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Many training scripts build the prompt themselves during training, manually specifying the tokens; in that case there won't be a problem. But in the case where the prompt is not built manually and comes from the chat template, it is an issue. It can even cause the model not to output the <|eot_id|> at the end, leading to it filling the space like this (see the collator sketch after this example):
<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>

Today Date: 28 Sep 2024

You are a helpful language and vision assistant. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language. <|eot_id|>
<|start_header_id|>user<|end_header_id|>

Read the image back to me<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

['Nasdaq & Amex 51 45 48 51 52 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51 51

I think this is very unclear to the model:
<|image|>Describe this image in two sentences<|eot_id|><|start_header_id|>assistant<|end_header_id|>

I believe it should look like this:
<|image|>Describe this image in two sentences<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

Here is the result from the same model trained with the newline character in the chat template:

<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>

Today Date: 28 Sep 2024

You are a helpful language and vision assistant. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language. <|eot_id|>
<|start_header_id|>user<|end_header_id|>

Read the image back to me<|eot_id|>

<|start_header_id|>assistant<|end_header_id|>

["Nasdaq & amex today's stock market snapshot: stocks in bold rose lower or fell more than 5% lower. stocks in red rose more than 5% lower. stocks in green rose more than 5% higher. stocks in blue rose less than 5% higher. stocks in yellow rose less than 5% higher. stocks in orange fell less than 5% lower. stocks in black fell less than 5% lower. stocks in black and white did not move. stock market snapshot updated at 2:40 p.m. et. a s of monday, aug. 26, 1996"]

Also, this string you're putting in the chat template, {{- "Cutting Knowledge Date: December 2023\n" }}, causes the model to learn it over and over, and sometimes it will just output that text. When I removed that string the model converged much faster. The Today Date: 28 Sep 2024 string did not seem to bother it. We can clearly see above that the models, at least the smaller ones, are having trouble dealing with the <|eot_id|>.
I tested this theory above by training this model:
mylesgoose/Llama-3.1-Minitron-4B-Width-Base

into this one:
mylesgoose/Llama-3.1-Minitron-4B-Llava-Nvidia-siglip-ov

You can see the effect yourself by trying that Llama-3.1-Minitron-4B-Llava-Nvidia-siglip-ov model and varying the newline in the chat template, comparing your current template against the modified one I posted above. Because it's such a small model, the effect is more pronounced.

  "chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n    {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n    {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n    {%- set date_string = \"26 Sep 2024\" %}\n{%- endif %}\n{%- if not tools is defined %}\n    {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content']|trim %}\n    {%- set messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- System message + builtin tools #}\n{{- \"\n<|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n{%- if builtin_tools is defined or tools is not none %}\n    {{- \"Environment: ipython\\n\" }}\n{%- endif %}\n{%- if builtin_tools is defined %}\n    {{- \"Tools: \" + builtin_tools | reject('equalto', 'code_interpreter') | join(\", \") + \"\\n\\n\"}}\n{%- endif %}\n{{- \"\\n\" }}\n{{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n{%- if tools is not none and not tools_in_user_message %}\n    {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n{%- endif %}\n{{- system_message }}\n{{- \"<|eot_id|>\" }}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n    {#- Extract the first user message so we can plug it in here #}\n    {%- if messages | length != 0 %}\n        {%- set first_user_message = messages[0]['content']|trim %}\n        {%- set messages = messages[1:] %}\n    {%- else %}\n        {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n    {{- '\\n<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n    {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n    {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' 
}}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n    {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n    {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n        {{- '\n<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>\n' }}\n    {%- elif 'tool_calls' in message %}\n        {%- if not message.tool_calls|length == 1 %}\n            {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n        {%- endif %}\n        {%- set tool_call = message.tool_calls[0].function %}\n        {%- if builtin_tools is defined and tool_call.name in builtin_tools %}\n            {{- '\n<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n            {{- \"<|python_tag|>\" + tool_call.name + \".call(\" }}\n            {%- for arg_name, arg_val in tool_call.arguments | items %}\n                {{- arg_name + '=\"' + arg_val + '\"' }}\n                {%- if not loop.last %}\n                    {{- \", \" }}\n                {%- endif %}\n                {%- endfor %}\n            {{- \")\" }}\n        {%- else  %}\n            {{- '\n<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n            {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n            {{- '\"parameters\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- \"}\" }}\n        {%- endif %}\n        {%- if builtin_tools is defined %}\n            {#- This means we're in ipython mode #}\n            {{- \"<|eom_id|>\" }}\n        {%- else %}\n            {{- \"<|eot_id|>\" }}\n        {%- endif %}\n    {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n        {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n        {%- if message.content is mapping or message.content is iterable %}\n            {{- message.content | tojson }}\n        {%- else %}\n            {{- message.content }}\n        {%- endif %}\n        {{- \"<|eot_id|>\" }}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '\n<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n",

@wukaixingxp @pcuenq @vontimitta @Hamid-Nazeri

Hello. I have made the model compatible with system prompts and images now, and I have adjusted the chat templates to match each other and removed that redundant string of text about the training cutoff date. The model is working well. As you can see, a system prompt was applied even when images were present.

Here is a link to a working tokenizer_config.json:
https://huggingface.co/mylesgoose/Llama-3.2-11B-Vision-Instruct/resolve/main/tokenizer_config.json
and the matching chat_template.json:
https://huggingface.co/mylesgoose/Llama-3.2-11B-Vision-Instruct/resolve/main/chat_template.json

<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>

Today Date: 28 Sep 2024

You are a helpful and creative AI assistant. You excel at generating poetic responses.<|eot_id|>
<|start_header_id|>user<|end_header_id|>

<|image|>If I had to write a haiku for this one, It would be:<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

Sure, here is a haiku for the image:

A rabbit in a coat
Stands on a path, flowers around
Spring's gentle delight<|eot_id|>

In your example, can you import datetime like below so that the correct date is given to the Jinja template, and add the system message as follows? If you update your readme with this code, ensure you change the model back to your base one.

import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
from datetime import date
date_string: str = date.today().strftime("%d %b %Y")
model_id = "mylesgoose/Llama-3.2-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
model.tie_weights()
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
image = Image.open(requests.get(url, stream=True).raw)

messages = [
    {"role": "system", "content": [{"You are a helpful and creative AI assistant. You excel at generating poetic responses."}]},
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "If I had to write a haiku for this one, It would be:"}
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True, date_string=date_string)
inputs = processor(image, input_text, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=300)
print(processor.decode(output[0]))
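
Note: extra keyword arguments such as date_string are forwarded by apply_chat_template into the Jinja rendering context, which is why the template's default date is overridden here.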

@wukaixingxp @pcuenq @vontimitta @Hamid-Nazeri
