Finetuning

#10
by kaidanti - opened

I was trying to finetune the model, but kept running into issues with training examples being skipped:
This instance will be ignored in loss calculation. Note, if this happens often, consider increasing the max_seq_length.
I increased max_seq_length to 128_000, but I still keep running into this issue.
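Before raising max_seq_length further, it can help to check how long the training examples actually are in tokens. A minimal sketch of that check is below; it uses a whitespace split as a stand-in tokenizer (an assumption for illustration) where a real setup would use the model's tokenizer from `transformers`:

```python
# Sketch: count how many training examples would exceed max_seq_length.
# `tokenize` is a stand-in (whitespace split); swap in your real tokenizer,
# e.g. lambda s: tokenizer(s)["input_ids"], for accurate counts.

def count_over_limit(examples, max_seq_length, tokenize=str.split):
    """Return (number of examples over the limit, longest example length)."""
    lengths = [len(tokenize(ex)) for ex in examples]
    n_over = sum(1 for n in lengths if n > max_seq_length)
    return n_over, max(lengths)

# Usage: if n_over stays high even with a large max_seq_length, the skips
# are probably not caused by sequence length at all.
n_over, longest = count_over_limit(["a b c d", "a b"], max_seq_length=3)
```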

Meta Llama org

What are you using for FT? Here is a ref implementation from the Meta team:
https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/finetuning
