---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: LLM
  results: []
---

# LLM

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.1056
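
Since this is a PEFT adapter trained on top of a GPTQ-quantized base model, inference follows the usual adapter-loading pattern. The snippet below is a minimal sketch, assuming the adapter weights are published under `george2704/LLM2` (the repository id from the page header) and that `auto-gptq` and `optimum` are installed so the quantized base model can be loaded:

```python
# Minimal inference sketch; assumes auto-gptq and optimum are installed so
# the GPTQ base model can load, and that the adapter lives at george2704/LLM2.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "george2704/LLM2"  # assumed adapter repo id; adjust if different

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Mistral-Instruct expects the [INST] ... [/INST] chat format.
prompt = "[INST] What does this model do? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```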

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.02
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 27
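
For reference, a minimal sketch of how these settings map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, since the actual training script is not part of this card:

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters listed above.
# Adam with betas=(0.9, 0.999) and eps=1e-8 is the Transformers default optimizer.
args = TrainingArguments(
    output_dir="LLM",               # placeholder output directory
    learning_rate=0.02,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,  # 4 x 4 = total_train_batch_size of 16
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=27,
)
```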

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2807        | 0.92  | 6    | 2.1056          |
| 1.9558        | 2.0   | 13   | 2.1056          |
| 2.2821        | 2.92  | 19   | 2.1056          |
| 1.956         | 4.0   | 26   | 2.1056          |
| 2.2816        | 4.92  | 32   | 2.1056          |
| 1.9561        | 6.0   | 39   | 2.1056          |
| 2.2812        | 6.92  | 45   | 2.1056          |
| 1.9556        | 8.0   | 52   | 2.1056          |
| 2.2815        | 8.92  | 58   | 2.1056          |
| 1.9555        | 10.0  | 65   | 2.1056          |
| 2.2824        | 10.92 | 71   | 2.1056          |
| 1.9554        | 12.0  | 78   | 2.1056          |
| 2.2823        | 12.92 | 84   | 2.1056          |
| 1.9566        | 14.0  | 91   | 2.1056          |
| 2.2814        | 14.92 | 97   | 2.1056          |
| 1.956         | 16.0  | 104  | 2.1056          |
| 2.2829        | 16.92 | 110  | 2.1056          |
| 1.9559        | 18.0  | 117  | 2.1056          |
| 2.2821        | 18.92 | 123  | 2.1056          |
| 1.9556        | 20.0  | 130  | 2.1056          |
| 2.2817        | 20.92 | 136  | 2.1056          |
| 1.9559        | 22.0  | 143  | 2.1056          |
| 2.2816        | 22.92 | 149  | 2.1056          |
| 1.9561        | 24.0  | 156  | 2.1056          |
| 2.1055        | 24.92 | 162  | 2.1056          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2