Built with Axolotl

See the axolotl config below (axolotl version: 0.6.0); a minimal launch sketch follows the config.

base_model: Qwen/Qwen2.5-7B
hub_model_id: sumuks/purple-wintermute-0.1-7b
trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false
bf16: true
hf_use_auth_token: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true

datasets:
  - path: sumuks/openreview_wintermute_0.1_training_data
    type: completion
    field: text
dataset_prepared_path: .axolotl_cache_data/wintermute_0.1
shuffle_merged_datasets: true
# dataset_exact_deduplication: true
val_set_size: 0.005
output_dir: ./../../outputs/purple-wintermute-0.1-7b
push_dataset_to_hub: sumuks/purple_wintermute_0.1_training_data_in_progress

sequence_length: 2048
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_r: 256
lora_alpha: 32
lora_dropout: 0.05
peft_use_rslora: true
lora_target_linear: true

gradient_accumulation_steps: 1
micro_batch_size: 32
eval_batch_size: 1
num_epochs: 3
learning_rate: 5e-5
warmup_ratio: 0.05
evals_per_epoch: 10
saves_per_epoch: 5
gradient_checkpointing: true
lr_scheduler: cosine
optimizer: paged_adamw_8bit

profiler_steps: 100
save_safetensors: true
train_on_inputs: true
wandb_project: wintermute 
wandb_name: purple-wintermute-0.1-7b
deepspeed: deepspeed_configs/zero1.json
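
Training with this config goes through Axolotl's documented CLI entry point. The sketch below is a minimal Python wrapper around that command; the file name `wintermute.yaml` is illustrative and not part of this card, and it assumes axolotl 0.6.0 and accelerate are installed.

```python
# Minimal launch sketch: invokes the documented Axolotl CLI via accelerate.
# Assumes the config shown above is saved as wintermute.yaml (illustrative name).
import subprocess

subprocess.run(
    ["accelerate", "launch", "-m", "axolotl.cli.train", "wintermute.yaml"],
    check=True,
)
```

The `deepspeed: deepspeed_configs/zero1.json` entry means training ran under DeepSpeed ZeRO stage 1 across the devices that accelerate launches.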

purple-wintermute-0.1-7b

This model is a fine-tuned version of Qwen/Qwen2.5-7B on the sumuks/openreview_wintermute_0.1_training_data dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4027
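
This repository contains a LoRA adapter (see PEFT in the framework versions below) rather than merged weights, so inference requires loading the adapter on top of the base model. A minimal sketch, assuming `transformers` and `peft` are installed; the prompt and generation settings are illustrative:

```python
# Load the LoRA adapter on top of the Qwen2.5-7B base model (sketch).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-7B"
adapter_id = "sumuks/purple-wintermute-0.1-7b"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # training used bf16
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# The adapter was trained as plain-text completion (type: completion, field: text),
# so prompts are raw text rather than chat templates. The prompt below is illustrative.
inputs = tokenizer("This paper proposes", return_tensors="pt").to(base.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```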

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (the derived values are sanity-checked in the sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 256
  • total_eval_batch_size: 8
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 386
  • num_epochs: 3
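
The derived values above follow directly from the config. A short sanity-check sketch, where steps per epoch is approximated from the results table below:

```python
# Sanity check of the derived hyperparameters (values copied from the config/list above).
micro_batch_size = 32
gradient_accumulation_steps = 1
num_devices = 8
num_epochs = 3
warmup_ratio = 0.05
steps_per_epoch = 2576  # approximate; the results table reaches step 2580 near epoch 1.0

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
warmup_steps = round(warmup_ratio * steps_per_epoch * num_epochs)

print(total_train_batch_size)  # 256
print(warmup_steps)            # 386, matching lr_scheduler_warmup_steps
```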

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8108 | 0.1002 | 258  | 1.9127 |
| 1.6982 | 0.2004 | 516  | 1.8592 |
| 1.663  | 0.3006 | 774  | 1.8258 |
| 1.585  | 0.4008 | 1032 | 1.7978 |
| 1.5201 | 0.5010 | 1290 | 1.7578 |
| 1.4313 | 0.6012 | 1548 | 1.7181 |
| 1.3256 | 0.7014 | 1806 | 1.6692 |
| 1.2364 | 0.8016 | 2064 | 1.6194 |
| 1.161  | 0.9017 | 2322 | 1.5741 |
| 1.1284 | 1.0016 | 2580 | 1.5281 |
| 1.0433 | 1.1017 | 2838 | 1.4999 |
| 1.0058 | 1.2019 | 3096 | 1.4770 |
| 1.0179 | 1.3021 | 3354 | 1.4603 |
| 0.9993 | 1.4023 | 3612 | 1.4409 |
| 0.99   | 1.5025 | 3870 | 1.4319 |
| 0.9971 | 1.6027 | 4128 | 1.4222 |
| 0.9626 | 1.7029 | 4386 | 1.4126 |
| 0.9396 | 1.8031 | 4644 | 1.4083 |
| 0.9497 | 1.9033 | 4902 | 1.4041 |
| 0.901  | 2.0031 | 5160 | 1.4068 |
| 0.9222 | 2.1033 | 5418 | 1.4081 |
| 0.8882 | 2.2035 | 5676 | 1.4060 |
| 0.9253 | 2.3037 | 5934 | 1.4043 |
| 0.8687 | 2.4039 | 6192 | 1.4035 |
| 0.9058 | 2.5041 | 6450 | 1.4025 |
| 0.8624 | 2.6043 | 6708 | 1.4033 |
| 0.8928 | 2.7045 | 6966 | 1.4028 |
| 0.874  | 2.8047 | 7224 | 1.4029 |
| 0.8892 | 2.9049 | 7482 | 1.4027 |

Framework versions

  • PEFT 0.14.0
  • Transformers 4.48.0
  • Pytorch 2.5.1
  • Datasets 3.1.0
  • Tokenizers 0.21.0