Built with Axolotl

18c5fa0c-fe09-46ec-ba5d-7f2ede9adea9

This model is a fine-tuned version of migtissera/Tess-v2.5-Phi-3-medium-128k-14B on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.3083
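
Since PEFT appears in the framework versions below, this repository most likely contains a LoRA-style adapter on top of the base model rather than full weights. The following is a minimal inference sketch under that assumption; the repository ids are taken from this card, while the dtype, device placement, and prompt are illustrative choices:

```python
# Minimal inference sketch, assuming this repo hosts a PEFT (LoRA) adapter
# for the base model named on this card. dtype/device settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "migtissera/Tess-v2.5-Phi-3-medium-128k-14B"
adapter_id = "lesso/18c5fa0c-fe09-46ec-ba5d-7f2ede9adea9"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption; use float16 if bf16 is unsupported
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```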

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.000208
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 394
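
For orientation, the list above maps roughly onto the Transformers TrainingArguments shown below. This is a sketch only, since Axolotl assembles its own training configuration, and output_dir is a hypothetical placeholder; note that total_train_batch_size 8 is simply train_batch_size 4 × gradient_accumulation_steps 2 on a single device.

```python
# Rough Transformers equivalent of the hyperparameters above (illustrative
# only; Axolotl builds its own config). output_dir is a hypothetical placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",
    learning_rate=0.000208,
    per_device_train_batch_size=4,   # train_batch_size
    per_device_eval_batch_size=4,    # eval_batch_size
    seed=42,
    gradient_accumulation_steps=2,   # effective batch size: 4 * 2 = 8
    optim="adamw_bnb_8bit",          # OptimizerNames.ADAMW_BNB
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=394,                   # training_steps
)
```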

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0025 | 1    | 7.0019          |
| 8.4818        | 0.1269 | 50   | 6.9127          |
| 8.8266        | 0.2538 | 100  | 4.1633          |
| 8.6808        | 0.3807 | 150  | 4.3894          |
| 7.9309        | 0.5076 | 200  | 3.6368          |
| 7.147         | 0.6345 | 250  | 3.0470          |
| 7.1612        | 0.7614 | 300  | 2.3794          |
| 7.3849        | 0.8883 | 350  | 2.3083          |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • PyTorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
