---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
model-index:
- name: openhermes-mistral-dpo-gptq
  results: []
---
# openhermes-mistral-dpo-gptq

This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ), trained with DPO on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.0134
- Rewards/chosen: 7.8160
- Rewards/rejected: 8.7194
- Rewards/accuracies: 0.375
- Rewards/margins: -0.9034
- Logps/rejected: -146.7816
- Logps/chosen: -235.0709
- Logits/rejected: -2.1916
- Logits/chosen: -2.6628
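
For reference, below is a minimal inference sketch, not taken from this card. It assumes the DPO LoRA adapter from this run is available as a Hub repo (the adapter id is a placeholder), that a GPTQ backend such as `auto-gptq` is installed to load the quantized base model, and that prompts follow the ChatML format used by the OpenHermes-2 base model.

```python
# Minimal inference sketch (assumptions noted inline; not from this card).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
adapter_id = "your-username/openhermes-mistral-dpo-gptq"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Loading the GPTQ checkpoint requires a GPTQ backend (e.g. auto-gptq).
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # apply the DPO adapter
model.eval()

# OpenHermes-2 expects the ChatML prompt format.
prompt = (
    "<|im_start|>user\nExplain DPO in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```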
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
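
The training script itself is not included in this card. The sketch below shows one plausible way these hyperparameters map onto TRL's `DPOTrainer`, assuming a TRL release contemporary with the framework versions listed below (where `DPOTrainer` accepts `beta` and `tokenizer` directly; newer releases use `DPOConfig` and `processing_class`). The dataset id, LoRA settings, and `beta` value are placeholders and assumptions, not values recorded in this card; the 10-step eval cadence is inferred from the training-results table.

```python
# Hypothetical reconstruction of the training setup; dataset id, LoRA settings,
# and beta are assumptions, not values taken from this card.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token

# Training a LoRA adapter on a GPTQ checkpoint requires the auto-gptq backend
# with PEFT support.
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

peft_config = LoraConfig(  # assumed LoRA settings
    task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05,
)

# Hyperparameters copied from the list above; eval/logging every 10 steps is
# inferred from the training-results table.
args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,
    fp16=True,  # Native AMP mixed precision
    evaluation_strategy="steps",
    eval_steps=10,
    logging_steps=10,
)

# Placeholder preference dataset with prompt/chosen/rejected columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a peft_config, TRL uses the frozen base as reference
    beta=0.1,        # assumed; not recorded in this card
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```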
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6763        | 0.005 | 10   | 0.7302          | 0.0677         | 0.1589           | 0.1875             | -0.0912         | -232.3873      | -312.5545    | -2.1541         | -2.6006       |
| 0.7076        | 0.01  | 20   | 0.7628          | 0.0161         | 0.1965           | 0.25               | -0.1804         | -232.0109      | -313.0701    | -2.1614         | -2.6040       |
| 0.6925        | 0.015 | 30   | 0.7999          | 0.1974         | 0.5152           | 0.25               | -0.3179         | -228.8234      | -311.2576    | -2.1717         | -2.6192       |
| 0.6966        | 0.02  | 40   | 0.8630          | 0.7996         | 1.2680           | 0.3125             | -0.4684         | -221.2960      | -305.2355    | -2.1636         | -2.6203       |
| 0.6764        | 0.025 | 50   | 1.0134          | 7.8160         | 8.7194           | 0.375              | -0.9034         | -146.7816      | -235.0709    | -2.1916         | -2.6628       |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- PyTorch 2.0.1+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1