Small Model Learnability Gap: Models
This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on the MATH_training_Qwen_QwQ_32B_Preview dataset. It achieves the following results on the evaluation set:
- Loss: 0.3546
Training results:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.3395 | 0.5988 | 200 | 0.3680 |
| 0.1896 | 1.1976 | 400 | 0.3721 |
| 0.1810 | 1.7964 | 600 | 0.3546 |
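The epoch and step columns of the table above are mutually consistent if one epoch corresponds to roughly 334 optimizer steps, a value inferred from the first row (200 / 0.5988) rather than stated anywhere in the card. A minimal sanity check under that assumption:

```python
# Sanity-check the training-results table: each logged step count,
# divided by the inferred steps-per-epoch, should reproduce the
# reported epoch value. 334 steps/epoch is inferred from the table
# itself, not stated in the model card.
rows = [
    # (train_loss, epoch, step, val_loss)
    (0.3395, 0.5988, 200, 0.3680),
    (0.1896, 1.1976, 400, 0.3721),
    (0.1810, 1.7964, 600, 0.3546),
]

steps_per_epoch = 200 / 0.5988  # ~334, inferred from the first row

for _, epoch, step, _ in rows:
    assert abs(step / steps_per_epoch - epoch) < 1e-3
```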
Base model: meta-llama/Llama-3.1-8B