# Llama-3.1-8B
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the GaetanMichelet/chat-60_ft_task-3, GaetanMichelet/chat-120_ft_task-3, and GaetanMichelet/chat-180_ft_task-3 datasets. Its per-epoch validation loss on the evaluation set is reported in the Training results table below.
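A minimal usage sketch with 🤗 Transformers follows. It assumes the fine-tuned weights are published as a standard causal-LM repository and that the tokenizer carries the Llama 3.1 Instruct chat template; `your-username/your-model-id` is a placeholder, not the actual repository name.

```python
# Minimal usage sketch (assumptions: standard Hugging Face causal-LM repo,
# chat template inherited from the Instruct base model; the repo id is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-model-id"  # placeholder, not the actual repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```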
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6392        | 1.0   | 17   | 1.6279          |
| 1.4618        | 2.0   | 34   | 1.4343          |
| 1.2006        | 3.0   | 51   | 1.2241          |
| 1.0799        | 4.0   | 68   | 1.1761          |
| 1.0615        | 5.0   | 85   | 1.1524          |
| 1.0045        | 6.0   | 102  | 1.1361          |
| 0.9831        | 7.0   | 119  | 1.1392          |
| 0.8698        | 8.0   | 136  | 1.1567          |
| 0.7759        | 9.0   | 153  | 1.1918          |
| 0.7296        | 10.0  | 170  | 1.2537          |
| 0.6747        | 11.0  | 187  | 1.2852          |
| 0.4777        | 12.0  | 204  | 1.3877          |
| 0.5052        | 13.0  | 221  | 1.4002          |
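Validation loss reaches its minimum at epoch 6 and rises from epoch 7 onward while training loss keeps falling, the usual signature of overfitting past that point. A minimal, self-contained sketch that picks the best epoch using only the numbers from the table above:

```python
# Select the checkpoint with the lowest validation loss from the per-epoch
# results above (values copied verbatim from the table; nothing here is new data).
val_loss = {
    1: 1.6279, 2: 1.4343, 3: 1.2241, 4: 1.1761, 5: 1.1524, 6: 1.1361, 7: 1.1392,
    8: 1.1567, 9: 1.1918, 10: 1.2537, 11: 1.2852, 12: 1.3877, 13: 1.4002,
}
best_epoch = min(val_loss, key=val_loss.get)
print(f"Lowest validation loss: {val_loss[best_epoch]} at epoch {best_epoch}")
# -> epoch 6; later epochs lower the training loss but degrade validation loss.
```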
Base model: meta-llama/Llama-3.1-8B