This model is a fine-tuned version of h2oai/h2o-danube2-1.8b-base on the Ritvik19/open-hermes-2_5-reformatted dataset. It achieves the following results on the evaluation set:
- Loss: 1.1197
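Since the model is a standard causal language model, it can be loaded with the `transformers` library. The sketch below is a minimal, hedged example: the repository id `your-username/your-finetuned-danube2` is a placeholder (the fine-tuned model's exact repo name is not given above), and it feeds a plain text prompt rather than a chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: replace with the actual fine-tuned model repository.
model_id = "your-username/your-finetuned-danube2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
    device_map="auto",
)

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```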
The following hyperparameters were used during training:
Training results:

Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
1.0838 | 0.9999 | 1704 | 1.1197 |
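Assuming the reported validation loss is the standard mean per-token cross-entropy computed by the Trainer, it corresponds to a validation perplexity of roughly exp(1.1197) ≈ 3.06:

```python
import math

# Validation cross-entropy loss from the training results table above.
val_loss = 1.1197

# Perplexity is the exponential of the mean per-token cross-entropy
# (assumption: the loss is reported in nats per token).
perplexity = math.exp(val_loss)
print(f"Validation perplexity ~ {perplexity:.2f}")  # ~ 3.06
```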
Open LLM Leaderboard evaluation results. Detailed results can be found here:
Metric | Value |
---|---|
Avg. | 44.12 |
AI2 Reasoning Challenge (25-Shot) | 43.26 |
HellaSwag (10-Shot) | 73.12 |
MMLU (5-Shot) | 40.19 |
TruthfulQA (0-shot) | 38.93 |
Winogrande (5-shot) | 67.88 |
GSM8k (5-shot) | 1.36 |
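The benchmarks above match the task suite run by EleutherAI's lm-evaluation-harness. As a rough sketch only (not necessarily the leaderboard's exact configuration, the harness API varies between versions, and the repository id is again a placeholder), one of the few-shot scores could be reproduced like this:

```python
import lm_eval  # pip install lm-eval (EleutherAI lm-evaluation-harness)

results = lm_eval.simple_evaluate(
    model="hf",
    # Placeholder repo id: replace with the actual fine-tuned model repository.
    model_args="pretrained=your-username/your-finetuned-danube2,dtype=bfloat16",
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,           # 25-shot, matching the table above
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```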