# zephyr-7b-align-scan-7e-07-0.45-cosine-3.0
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) (itself derived from mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set:
- Loss: 0.9379
- Rewards/chosen: -0.3417
- Rewards/rejected: -2.0404
- Rewards/accuracies: 0.3472
- Rewards/margins: 1.6986
- Logps/rejected: -85.6625
- Logps/chosen: -75.2506
- Logits/rejected: -2.6727
- Logits/chosen: -2.6887
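
These metric names match the quantities logged by TRL's `DPOTrainer`, which the alignment-handbook recipes build on: each reward is the policy-vs-reference log-probability ratio scaled by the DPO temperature β. The sketch below shows how the logged values relate to one another; it is illustrative only, and β = 0.45 is inferred from the "0.45" in the model name rather than stated in the card.

```python
# Illustrative sketch of TRL-style DPO metrics (not the exact training code).
# beta=0.45 is an assumption inferred from the model name.
import torch
import torch.nn.functional as F

def dpo_metrics(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps, beta=0.45):
    # Implicit rewards: beta * (log pi_theta(y|x) - log pi_ref(y|x))
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)        # rewards/chosen
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)  # rewards/rejected
    margins = chosen_rewards - rejected_rewards                             # rewards/margins
    accuracies = (chosen_rewards > rejected_rewards).float().mean()         # rewards/accuracies
    loss = -F.logsigmoid(margins).mean()                                    # DPO loss
    return loss, chosen_rewards.mean(), rejected_rewards.mean(), accuracies
```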
## Model description
More information needed
## Intended uses & limitations
More information needed
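
Pending a fuller writeup, here is a minimal inference sketch. It assumes the tokenizer ships the Zephyr chat template inherited from the SFT base model; adjust the prompt format if that assumption does not hold.

```python
# Minimal, untested inference sketch; assumes the Zephyr chat template
# is available via the tokenizer (inherited from zephyr-7b-sft-full).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taicheng/zephyr-7b-align-scan-7e-07-0.45-cosine-3.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```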
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
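
The effective batch sizes follow from the values above: 8 per device × 4 GPUs × 2 gradient-accumulation steps = 64 for training, and 8 × 4 = 32 for evaluation. As a rough reconstruction, these settings would map onto TRL's `DPOConfig` as sketched below; the exact recipe flags (and β = 0.45, read off the model name) are assumptions, not confirmed by the card.

```python
# Hypothetical reconstruction of the training configuration; field names
# follow TRL's DPOConfig / transformers' TrainingArguments.
from trl import DPOConfig

config = DPOConfig(
    output_dir="zephyr-7b-align-scan-7e-07-0.45-cosine-3.0",  # placeholder
    learning_rate=7e-07,
    per_device_train_batch_size=8,   # x 4 GPUs x 2 accum steps = 64 total
    per_device_eval_batch_size=8,    # x 4 GPUs = 32 total
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
    beta=0.45,                       # inferred from the model name
)
```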
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|
| 0.6948 | 0.3484 | 100 | 0.6997 | 0.9816 | 0.4942 | 0.3452 | 0.4873 | -80.0301 | -72.3100 | -2.5413 | -2.5577 |
| 0.7373 | 0.6969 | 200 | 0.7720 | 1.2732 | 0.5117 | 0.3294 | 0.7615 | -79.9912 | -71.6619 | -2.5716 | -2.5870 |
| 0.4002 | 1.0453 | 300 | 0.8163 | 0.4524 | -0.4497 | 0.3472 | 0.9021 | -82.1276 | -73.4859 | -2.6256 | -2.6409 |
| 0.3982 | 1.3937 | 400 | 0.8872 | 1.2165 | 0.0680 | 0.3313 | 1.1485 | -80.9772 | -71.7879 | -2.7106 | -2.7265 |
| 0.389 | 1.7422 | 500 | 0.9107 | 0.3181 | -0.9594 | 0.3353 | 1.2775 | -83.2604 | -73.7844 | -2.7188 | -2.7346 |
| 0.3707 | 2.0906 | 600 | 0.8992 | 0.6908 | -0.7854 | 0.3472 | 1.4762 | -82.8736 | -72.9561 | -2.6904 | -2.7065 |
| 0.3672 | 2.4390 | 700 | 0.9354 | -0.5110 | -2.2396 | 0.3492 | 1.7285 | -86.1051 | -75.6269 | -2.6662 | -2.6823 |
| 0.3596 | 2.7875 | 800 | 0.9344 | -0.3373 | -2.0235 | 0.3452 | 1.6862 | -85.6249 | -75.2407 | -2.6727 | -2.6886 |
### Framework versions
- Transformers 4.44.2
- PyTorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1