# Llama-3.1-8B_auto
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the GaetanMichelet/chat-60_ft_task-3_auto dataset. Its results on the evaluation set are reported in the training results table below.
## Model description

More information needed
## Intended uses & limitations

More information needed
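Since intended uses are not yet documented, the snippet below is only a minimal loading sketch. It assumes the checkpoint is published on the Hub as a full causal-LM repository whose tokenizer carries the Llama 3.1 chat template; the repo id is a placeholder, not the actual model id, and if the release is a PEFT/LoRA adapter it would need to be loaded with the peft library instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hub id of this fine-tune.
repo_id = "GaetanMichelet/Llama-31-8B_task-3_auto"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Give a short answer for task 3."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```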
## Training and evaluation data

More information needed
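The card names GaetanMichelet/chat-60_ft_task-3_auto as the fine-tuning dataset but does not describe its splits or columns. A small sketch for inspecting it, assuming the dataset is accessible on the Hugging Face Hub:

```python
from datasets import load_dataset

# Load whatever splits the dataset exposes; split names and columns
# are not documented in this card, so they are discovered at load time.
ds = load_dataset("GaetanMichelet/chat-60_ft_task-3_auto")
print(ds)  # available splits and their sizes

first_split = next(iter(ds))
print(ds[first_split][0])  # one raw example from the first split
```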
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.6601        | 0.6957  | 2    | 1.6697          |
| 1.6649        | 1.7391  | 5    | 1.6260          |
| 1.591         | 2.7826  | 8    | 1.5468          |
| 1.4992        | 3.8261  | 11   | 1.4664          |
| 1.4061        | 4.8696  | 14   | 1.3963          |
| 1.352         | 5.9130  | 17   | 1.3313          |
| 1.2367        | 6.9565  | 20   | 1.2646          |
| 1.2127        | 8.0     | 23   | 1.2216          |
| 1.1571        | 8.6957  | 25   | 1.2052          |
| 1.1165        | 9.7391  | 28   | 1.1900          |
| 1.124         | 10.7826 | 31   | 1.1787          |
| 1.0947        | 11.8261 | 34   | 1.1694          |
| 1.0606        | 12.8696 | 37   | 1.1634          |
| 1.0621        | 13.9130 | 40   | 1.1573          |
| 1.0235        | 14.9565 | 43   | 1.1550          |
| 1.0274        | 16.0    | 46   | 1.1531          |
| 0.9827        | 16.6957 | 48   | 1.1526          |
| 0.9959        | 17.7391 | 51   | 1.1536          |
| 0.9813        | 18.7826 | 54   | 1.1576          |
| 0.9571        | 19.8261 | 57   | 1.1600          |
| 0.9413        | 20.8696 | 60   | 1.1619          |
| 0.9355        | 21.9130 | 63   | 1.1652          |
| 0.9063        | 22.9565 | 66   | 1.1698          |
| 0.8949        | 24.0    | 69   | 1.1736          |
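The validation loss bottoms out at 1.1526 around epoch 16.7 and then creeps back up while the training loss keeps falling, so the best checkpoint comes well before the final epoch. A small self-contained sketch (the numbers are copied from the table above) that picks out that minimum:

```python
# Validation loss per epoch, taken from the training results table.
validation_loss = {
    0.6957: 1.6697, 1.7391: 1.6260, 2.7826: 1.5468, 3.8261: 1.4664,
    4.8696: 1.3963, 5.9130: 1.3313, 6.9565: 1.2646, 8.0: 1.2216,
    8.6957: 1.2052, 9.7391: 1.1900, 10.7826: 1.1787, 11.8261: 1.1694,
    12.8696: 1.1634, 13.9130: 1.1573, 14.9565: 1.1550, 16.0: 1.1531,
    16.6957: 1.1526, 17.7391: 1.1536, 18.7826: 1.1576, 19.8261: 1.1600,
    20.8696: 1.1619, 21.9130: 1.1652, 22.9565: 1.1698, 24.0: 1.1736,
}

# Epoch with the lowest validation loss, i.e. the natural early-stopping point.
best_epoch = min(validation_loss, key=validation_loss.get)
print(f"best validation loss {validation_loss[best_epoch]:.4f} at epoch {best_epoch}")
# -> best validation loss 1.1526 at epoch 16.6957
```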
Base model: meta-llama/Llama-3.1-8B