---
license: apache-2.0
library_name: peft
tags:
  - trl
  - dpo
  - generated_from_trainer
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
model-index:
  - name: openhermes-mistral-dpo-gptq
    results: []
---

# openhermes-mistral-dpo-gptq

This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on an unspecified dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):

- Loss: 0.0082
- Rewards/chosen: -0.7999
- Rewards/rejected: -11.8804
- Rewards/accuracies: 1.0
- Rewards/margins: 11.0804
- Logps/rejected: -383.9451
- Logps/chosen: -160.8140
- Logits/rejected: -2.4692
- Logits/chosen: -2.6059
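
The checkpoint is a PEFT adapter rather than full model weights. A minimal loading sketch, assuming the adapter is published under `SleepyGorilla/Mistral_7B` (taken from this repo's path, so it may differ) and that `auto-gptq` is installed for the quantized base:

```python
# Hedged sketch: load the GPTQ base, then attach the DPO-trained LoRA adapter.
# Requires: transformers, peft, auto-gptq, and a CUDA GPU for GPTQ weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
ADAPTER_ID = "SleepyGorilla/Mistral_7B"  # assumption: this card's repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # DPO adapter on top

prompt = "Explain Direct Preference Optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```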

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (reconstructed as a `TrainingArguments` sketch after the list):

- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
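
These settings map onto `transformers.TrainingArguments` as shown below; the surrounding `DPOTrainer` call is a sketch based on this card's `trl`/`dpo` tags, with `model`, `tokenizer`, `train_dataset`, and `eval_dataset` as placeholders and `beta=0.1` assumed (trl's default; the actual value is not recorded here):

```python
# Sketch only: reconstructs the hyperparameters above for trl's DPOTrainer.
# model / tokenizer / train_dataset / eval_dataset are assumed to exist.
from peft import LoraConfig
from transformers import TrainingArguments
from trl import DPOTrainer

training_args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    learning_rate=2e-4,             # learning_rate: 0.0002
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,                 # lr_scheduler_warmup_steps: 2
    max_steps=50,                   # training_steps: 50
    fp16=True,                      # mixed_precision_training: Native AMP
)

trainer = DPOTrainer(
    model,                          # GPTQ base prepared for k-bit training
    args=training_args,
    beta=0.1,                       # assumption: trl default, not in the card
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),
)
trainer.train()
```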

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5652        | 0.0   | 10   | 0.4581          | 0.1358         | -0.6615          | 1.0                | 0.7973          | -271.7569      | -151.4569    | -2.4902         | -2.7008       |
| 0.3737        | 0.0   | 20   | 0.1877          | 0.0778         | -2.8831          | 1.0                | 2.9609          | -293.9724      | -152.0366    | -2.4897         | -2.6893       |
| 0.2022        | 0.0   | 30   | 0.0621          | -0.1154        | -6.0503          | 1.0                | 5.9349          | -325.6448      | -153.9687    | -2.4890         | -2.6603       |
| 0.0284        | 0.0   | 40   | 0.0155          | -0.5833        | -10.1231         | 1.0                | 9.5397          | -366.3722      | -158.6483    | -2.4792         | -2.6266       |
| 0.0593        | 0.0   | 50   | 0.0082          | -0.7999        | -11.8804         | 1.0                | 11.0804         | -383.9451      | -160.8140    | -2.4692         | -2.6059       |
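
For reading the columns: in DPO, the "rewards" are the beta-scaled log-probability ratios between the policy and the frozen reference model, the margin is their difference, and the per-example loss is the negative log-sigmoid of that margin. A minimal sketch of the computation (beta = 0.1 assumed, as above; the reported validation loss is a mean over the whole eval set):

```python
# Hedged sketch of the DPO loss behind the Rewards/* and loss columns.
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Rewards/chosen and Rewards/rejected: beta-scaled log-prob ratios
    # of the policy against the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = chosen_rewards - rejected_rewards   # Rewards/margins
    loss = -F.logsigmoid(margins).mean()          # DPO objective
    return loss, chosen_rewards, rejected_rewards
```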

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2