# Configurations choice

Collection · 52 items

Choice of configuration based on the results of the different fine-tuning runs. All configurations provide more or less the same results, but configurations 1 and 2 are much faster (learning rate).
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the GaetanMichelet/chat-60_ft_task-1 dataset. It achieves the results reported in the training results table below on the evaluation set.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
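For reference, a minimal inference sketch with `transformers`. The repo id below is hypothetical (substitute the actual model id), and it assumes the fine-tuned weights were published as merged full weights rather than a PEFT adapter:

```python
# Minimal inference sketch. The repo id is hypothetical -- substitute the
# actual model id. Assumes merged full weights (not a PEFT adapter) and that
# `accelerate` is installed so device_map="auto" works.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "GaetanMichelet/chat-60_ft_task-1-config-1"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Build a chat prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```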
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
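As an illustration only, here is how hyperparameters like these are typically passed to `transformers`' `TrainingArguments`; every value below is a placeholder except the epoch count, which is consistent with the results table, and none of them are the run's actual settings:

```python
# Illustrative placeholder sketch only -- not the actual configuration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-4,             # placeholder; the collection compares learning rates
    num_train_epochs=28,            # consistent with the final epoch in the table below
    per_device_train_batch_size=1,  # placeholder
    gradient_accumulation_steps=8,  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,    # keep the checkpoint with the lowest eval loss
)
```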
### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.1691        | 0.8696  | 5    | 2.0741          |
| 2.0145        | 1.9130  | 11   | 2.0674          |
| 2.0918        | 2.9565  | 17   | 2.0492          |
| 2.0838        | 4.0     | 23   | 2.0192          |
| 2.0314        | 4.8696  | 28   | 1.9792          |
| 1.9775        | 5.9130  | 34   | 1.9190          |
| 1.8873        | 6.9565  | 40   | 1.8339          |
| 1.7547        | 8.0     | 46   | 1.7314          |
| 1.6653        | 8.8696  | 51   | 1.6435          |
| 1.5709        | 9.9130  | 57   | 1.5691          |
| 1.533         | 10.9565 | 63   | 1.5254          |
| 1.4035        | 12.0    | 69   | 1.4860          |
| 1.4227        | 12.8696 | 74   | 1.4573          |
| 1.4167        | 13.9130 | 80   | 1.4216          |
| 1.3733        | 14.9565 | 86   | 1.3884          |
| 1.2917        | 16.0    | 92   | 1.3621          |
| 1.2393        | 16.8696 | 97   | 1.3432          |
| 1.1512        | 17.9130 | 103  | 1.3246          |
| 1.1361        | 18.9565 | 109  | 1.3081          |
| 1.089         | 20.0    | 115  | 1.2985          |
| 1.0272        | 20.8696 | 120  | 1.2924          |
| 1.0591        | 21.9130 | 126  | 1.2934          |
| 0.9601        | 22.9565 | 132  | 1.3023          |
| 0.9245        | 24.0    | 138  | 1.3152          |
| 0.8188        | 24.8696 | 143  | 1.3258          |
| 0.8866        | 25.9130 | 149  | 1.3491          |
| 0.7508        | 26.9565 | 155  | 1.3779          |
| 0.7961        | 28.0    | 161  | 1.4176          |
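Validation loss bottoms out at 1.2924 (epoch 20.8696, step 120) and rises afterwards while training loss keeps falling, the usual overfitting signature. A small sketch of selecting the best checkpoint from a `transformers` `Trainer` log history; the entries below are taken from the table above, and the field names follow the format `Trainer` stores in `trainer.state.log_history`:

```python
# Pick the lowest-eval-loss checkpoint from a Trainer-style log history.
# Entries shown are a subset of the results table above.
log_history = [
    {"step": 115, "epoch": 20.0,    "eval_loss": 1.2985},
    {"step": 120, "epoch": 20.8696, "eval_loss": 1.2924},
    {"step": 126, "epoch": 21.9130, "eval_loss": 1.2934},
]
best = min((e for e in log_history if "eval_loss" in e), key=lambda e: e["eval_loss"])
print(f"best checkpoint: step {best['step']} (eval_loss={best['eval_loss']:.4f})")
# -> best checkpoint: step 120 (eval_loss=1.2924)
```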
Base model: meta-llama/Llama-3.1-8B