---
license: mit
library_name: peft
tags:
  - trl
  - sft
  - generated_from_trainer
base_model: LoftQ/Meta-Llama-3-8B-Instruct-4bit-64rank
model-index:
  - name: llama3-8b-instruct-qlora-large
    results: []
---

# llama3-8b-instruct-qlora-large

This model is a QLoRA fine-tune of [LoftQ/Meta-Llama-3-8B-Instruct-4bit-64rank](https://huggingface.co/LoftQ/Meta-Llama-3-8B-Instruct-4bit-64rank), a LoftQ-initialized 4-bit quantization of Meta-Llama-3-8B-Instruct with rank-64 LoRA adapters, trained on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.8530
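
This repository holds PEFT adapter weights, so inference requires loading the base model and attaching the adapter. The following is a minimal, hedged sketch: the adapter repo id `nrishabh/llama3-8b-instruct-qlora-large` is inferred from this card's title (not stated explicitly), and reloading the backbone in NF4 is an assumption chosen to mirror the QLoRA setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "LoftQ/Meta-Llama-3-8B-Instruct-4bit-64rank"
adapter_id = "nrishabh/llama3-8b-instruct-qlora-large"  # assumed repo id; not stated in the card

# Assumption: reload the backbone in 4-bit NF4 to mirror the QLoRA training setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the trained LoRA adapter
model.eval()

# Llama 3 Instruct expects the chat template, not a raw prompt.
messages = [{"role": "user", "content": "Summarize what a LoRA adapter does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```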

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
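
The training script itself is not included in this card; the sketch below is one plausible way to express the listed values as `transformers.TrainingArguments`. Anything not in the list above (`output_dir`, the per-epoch evaluation cadence) is an assumption.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3-8b-instruct-qlora-large",  # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=30,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation losses below
)
```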

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3454        | 1.0   | 158  | 1.2439          |
| 2.1288        | 2.0   | 316  | 1.0900          |
| 2.0335        | 3.0   | 474  | 1.0394          |
| 1.9315        | 4.0   | 632  | 0.9995          |
| 1.804         | 5.0   | 790  | 0.9605          |
| 1.6583        | 6.0   | 948  | 0.9411          |
| 1.4994        | 7.0   | 1106 | 0.9283          |
| 1.3388        | 8.0   | 1264 | 0.9158          |
| 1.1894        | 9.0   | 1422 | 0.9103          |
| 1.0616        | 10.0  | 1580 | 0.9027          |
| 0.9461        | 11.0  | 1738 | 0.8963          |
| 0.8447        | 12.0  | 1896 | 0.8922          |
| 0.7575        | 13.0  | 2054 | 0.8887          |
| 0.6817        | 14.0  | 2212 | 0.8803          |
| 0.6192        | 15.0  | 2370 | 0.8761          |
| 0.5669        | 16.0  | 2528 | 0.8715          |
| 0.5196        | 17.0  | 2686 | 0.8719          |
| 0.479         | 18.0  | 2844 | 0.8683          |
| 0.4473        | 19.0  | 3002 | 0.8662          |
| 0.4202        | 20.0  | 3160 | 0.8624          |
| 0.397         | 21.0  | 3318 | 0.8590          |
| 0.377         | 22.0  | 3476 | 0.8573          |
| 0.3622        | 23.0  | 3634 | 0.8558          |
| 0.3514        | 24.0  | 3792 | 0.8548          |
| 0.3434        | 25.0  | 3950 | 0.8543          |
| 0.3349        | 26.0  | 4108 | 0.8541          |
| 0.332         | 27.0  | 4266 | 0.8538          |
| 0.328         | 28.0  | 4424 | 0.8541          |
| 0.3286        | 29.0  | 4582 | 0.8532          |
| 0.3279        | 30.0  | 4740 | 0.8530          |
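
For context, the final validation loss of 0.8530 corresponds to a perplexity of exp(0.8530) ≈ 2.35. Validation loss is essentially flat over the last several epochs while training loss keeps falling (0.33 vs. 0.85 at epoch 30), so most of the eval-set improvement is realized by roughly epoch 22.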

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1