Part of the "Handbook v0.1 models and datasets" collection (6 items): models and datasets for v0.1 of the alignment handbook.
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set:

- Loss: 0.9353
## Model description

More information needed
## Intended uses & limitations

More information needed
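While the intended uses are not documented above, the sketch below shows one way to run chat-style inference with a checkpoint like this using the transformers text-generation pipeline. The repository id is a placeholder (substitute the actual Hub id of this model), and the snippet assumes the fine-tuned tokenizer ships a chat template, as is typical for models trained on HuggingFaceH4/ultrachat_200k.

```python
# Minimal inference sketch; the model id below is a placeholder, not this card's repo id.
import torch
from transformers import pipeline

model_id = "your-org/mistral-7b-sft-ultrachat"  # placeholder: replace with the real Hub id

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain supervised fine-tuning in two sentences."},
]

# Render the conversation with the tokenizer's chat template, then generate a reply.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)

# Prints the rendered prompt followed by the model's completion.
print(outputs[0]["generated_text"])
```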
## Training and evaluation data

More information needed
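For reference, the SFT portion of the training data can be inspected directly with the datasets library; the sketch below assumes the dataset's published train_sft split and its messages column of chat turns.

```python
# Sketch: peek at the SFT portion of HuggingFaceH4/ultrachat_200k.
# Assumes the "train_sft" split, where each row carries a "messages" list of chat turns.
from datasets import load_dataset

ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
print(ds)                     # number of rows and column names
print(ds[0]["messages"][:2])  # first two chat turns of the first example
```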
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9075        | 1.0   | 1090 | 0.9353          |
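For intuition, the reported validation loss can be converted to a perplexity, assuming it is a mean per-token cross-entropy (the transformers Trainer default):

```python
import math

# Convert the reported validation loss to perplexity,
# assuming it is a mean per-token cross-entropy.
validation_loss = 0.9353
perplexity = math.exp(validation_loss)
print(f"perplexity ~ {perplexity:.2f}")  # ~ 2.55
```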