## Configuration choice

A collection of configurations chosen based on the results of different fine-tuning runs. All configurations yield more or less the same results, but configurations 1 and 2 are considerably faster (learning rate).
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the GaetanMichelet/chat-60_ft_task-1 and GaetanMichelet/chat-120_ft_task-1 datasets. Its results on the evaluation set are reported in the training results table below.
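Since this is a chat fine-tune of the Instruct base, a minimal inference sketch follows. It assumes the checkpoint is published as a PEFT/LoRA adapter; the adapter path is a placeholder, and the dtype/device settings are illustrative only:

```python
# Minimal sketch: load the base model plus the fine-tuned adapter and chat with it.
# ADAPTER is a placeholder path; whether this checkpoint ships as a LoRA adapter
# or as merged weights is an assumption, not stated in this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3.1-8B-Instruct"
ADAPTER = "path/to/this-fine-tuned-checkpoint"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)

# Llama 3.1 Instruct expects its chat template, so format the prompt through it.
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```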
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

More information needed
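Since the concrete hyperparameters are not listed, here is a purely hypothetical sketch of the kind of LoRA setup such fine-tunes typically use; every value below (rank, alpha, target modules, dropout) is an assumption, not the configuration actually used for this run:

```python
# Hypothetical LoRA configuration sketch; none of these values come from the card.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
lora_config = LoraConfig(
    r=16,                                                     # assumed rank
    lora_alpha=32,                                            # assumed scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    lora_dropout=0.05,                                        # assumed dropout
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts
```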
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4681 | 1.0 | 11 | 2.4539 |
| 2.3894 | 2.0 | 22 | 2.4260 |
| 2.4746 | 3.0 | 33 | 2.3827 |
| 2.4177 | 4.0 | 44 | 2.3138 |
| 2.1959 | 5.0 | 55 | 2.2269 |
| 2.16 | 6.0 | 66 | 2.1177 |
| 2.0388 | 7.0 | 77 | 1.9844 |
| 1.8932 | 8.0 | 88 | 1.8442 |
| 1.7199 | 9.0 | 99 | 1.6830 |
| 1.4973 | 10.0 | 110 | 1.4929 |
| 1.2726 | 11.0 | 121 | 1.2980 |
| 1.204 | 12.0 | 132 | 1.1554 |
| 1.0597 | 13.0 | 143 | 1.0772 |
| 1.0642 | 14.0 | 154 | 1.0425 |
| 1.0466 | 15.0 | 165 | 1.0201 |
| 1.0044 | 16.0 | 176 | 1.0010 |
| 0.9967 | 17.0 | 187 | 0.9866 |
| 0.9863 | 18.0 | 198 | 0.9736 |
| 0.9065 | 19.0 | 209 | 0.9644 |
| 0.8669 | 20.0 | 220 | 0.9539 |
| 0.9253 | 21.0 | 231 | 0.9454 |
| 0.872 | 22.0 | 242 | 0.9398 |
| 0.8824 | 23.0 | 253 | 0.9328 |
| 0.8582 | 24.0 | 264 | 0.9283 |
| 0.8763 | 25.0 | 275 | 0.9221 |
| 0.8199 | 26.0 | 286 | 0.9177 |
| 0.7986 | 27.0 | 297 | 0.9146 |
| 0.7754 | 28.0 | 308 | 0.9142 |
| 0.7893 | 29.0 | 319 | 0.9086 |
| 0.7312 | 30.0 | 330 | 0.9087 |
| 0.7431 | 31.0 | 341 | 0.9050 |
| 0.7103 | 32.0 | 352 | 0.9037 |
| 0.6967 | 33.0 | 363 | 0.9092 |
| 0.6502 | 34.0 | 374 | 0.9071 |
| 0.6659 | 35.0 | 385 | 0.9019 |
| 0.7003 | 36.0 | 396 | 0.9015 |
| 0.629 | 37.0 | 407 | 0.9018 |
| 0.6299 | 38.0 | 418 | 0.9081 |
| 0.6259 | 39.0 | 429 | 0.9162 |
| 0.6262 | 40.0 | 440 | 0.9212 |
| 0.5707 | 41.0 | 451 | 0.9212 |
| 0.5749 | 42.0 | 462 | 0.9274 |
| 0.533 | 43.0 | 473 | 0.9369 |
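Validation loss reaches its minimum of 0.9015 at epoch 36 and then drifts upward while training loss keeps falling, i.e. the run starts to overfit past that point. A quick sketch for visualizing this, with values copied from the Validation Loss column above:

```python
# Plot the validation loss from the table above to show the overfitting knee.
import matplotlib.pyplot as plt

epochs = list(range(1, 44))
val_loss = [2.4539, 2.4260, 2.3827, 2.3138, 2.2269, 2.1177, 1.9844, 1.8442,
            1.6830, 1.4929, 1.2980, 1.1554, 1.0772, 1.0425, 1.0201, 1.0010,
            0.9866, 0.9736, 0.9644, 0.9539, 0.9454, 0.9398, 0.9328, 0.9283,
            0.9221, 0.9177, 0.9146, 0.9142, 0.9086, 0.9087, 0.9050, 0.9037,
            0.9092, 0.9071, 0.9019, 0.9015, 0.9018, 0.9081, 0.9162, 0.9212,
            0.9212, 0.9274, 0.9369]

plt.plot(epochs, val_loss, label="validation loss")
plt.axvline(36, linestyle="--", label="min val loss (epoch 36)")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```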