---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: EVA-Qwen2.5-1.5B-FFT-v0.0
results: []
---
[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)

See axolotl config below.
axolotl version: `0.4.1`
```yaml
base_model: /media/kearm/Disk_2/HF_FAST_MoE_Fodder/Qwen2.5-1.5B
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
# plugins:
# - axolotl.integrations.spectrum.SpectrumPlugin
# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B
datasets:
  - path: datasets/Celeste_Filtered_utf8fix.jsonl
    type: sharegpt
  - path: datasets/deduped_not_samantha_norefusals.jsonl
    type: sharegpt
  - path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl
    type: sharegpt
  - path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
    type: sharegpt
  - path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl
    type: sharegpt
  - path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl
    type: sharegpt
  - path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl
    type: sharegpt
  - path: datasets/S2.jsonl
    type: sharegpt
  - path: datasets/Turing.jsonl
    type: sharegpt
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.05
output_dir: EVA-Qwen2.5-1.5B-FFT-v0.0
sequence_len: 10240
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 128
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true
wandb_project: EVA-Qwen2.5-1.5B-FFT-v0.0
wandb_entity:
wandb_watch:
wandb_name: Unit-00
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000005
max_grad_norm: 1.5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: "unsloth"
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 4
save_safetensors: true
save_total_limit: 8
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.15
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: false
# fsdp_offload_params: true
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
# fsdp_activation_checkpointing: true
# fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_forward_prefetch: false # Added
# fsdp_backward_prefetch: "BACKWARD_PRE" # Added
# fsdp_backward_prefetch_limit: 1 # Added
# fsdp_mixed_precision: BF16 # Added
```
# EVA-Qwen2.5-1.5B-FFT-v0.0
This model is a full-parameter fine-tune of Qwen2.5-1.5B on the datasets listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.3685 (see the perplexity note below)
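Assuming the reported loss is the mean per-token cross-entropy in nats (the usual convention for causal-LM evaluation in `transformers`), it converts to perplexity as a quick sanity check:

```python
import math

eval_loss = 1.3685           # validation loss reported above
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # ~3.93 on the held-out 5% split
```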
## Model description
More information needed
## Intended uses & limitations
More information needed. The model was trained with the ChatML chat template (`chat_template: chatml` in the config above), so prompts at inference time should be formatted the same way; a minimal sketch follows.
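A minimal inference sketch with `transformers`, assuming the model is published under a Hugging Face repo id (the id below is a placeholder) and that the tokenizer ships with the ChatML chat template used during training:

```python
# Minimal inference sketch; the repo id is a placeholder, not the actual published id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/EVA-Qwen2.5-1.5B-FFT-v0.0"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short scene set in a rainy harbor town."},
]
# apply_chat_template renders the ChatML prompt (<|im_start|>role ... <|im_end|>).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```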
## Training and evaluation data
The model was trained on the ShareGPT-format datasets listed in the axolotl config above, merged and shuffled, with 5% of the data held out for evaluation (`val_set_size: 0.05`). The mapping from ShareGPT records to ChatML training text is sketched below.
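A rough sketch of that mapping, assuming the standard ShareGPT record layout consumed by axolotl's `type: sharegpt` loader and the ChatML rendering selected by `chat_template: chatml`; the example record is illustrative, not taken from the training data:

```python
# Illustrative ShareGPT record (not from the actual datasets) and the ChatML
# text it would be rendered into for training.
record = {
    "conversations": [
        {"from": "system", "value": "You are a creative writing assistant."},
        {"from": "human", "value": "Describe the harbor at dawn."},
        {"from": "gpt", "value": "Mist clings to the pilings as the first ferries stir..."},
    ]
}

ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def to_chatml(conversations):
    """Render a ShareGPT conversation as ChatML: one <|im_start|>role ... <|im_end|> block per turn."""
    blocks = [
        f"<|im_start|>{ROLE_MAP[turn['from']]}\n{turn['value']}<|im_end|>"
        for turn in conversations
    ]
    return "\n".join(blocks) + "\n"

print(to_chatml(record["conversations"]))
```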
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the effective train batch size is worked out after the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: paged AdamW 8-bit (`paged_adamw_8bit`) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
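As a quick check (assuming 4 GPUs with one sample per device, matching the values above), the effective batch size is the product of the per-device batch size, the gradient accumulation steps, and the number of devices:

```python
# Back-of-the-envelope check of the effective batch sizes reported above.
micro_batch_size = 1              # per-device train batch size
gradient_accumulation_steps = 8
num_devices = 4

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = 1 * num_devices   # no gradient accumulation at eval time

print(total_train_batch_size)  # 32
print(total_eval_batch_size)   # 4
```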
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8166 | 0.0028 | 1 | 1.6772 |
| 1.7031 | 0.2519 | 89 | 1.4633 |
| 1.5925 | 0.5037 | 178 | 1.4171 |
| 1.512 | 0.7556 | 267 | 1.3993 |
| 1.5122 | 1.0050 | 356 | 1.3888 |
| 1.5281 | 1.2574 | 445 | 1.3825 |
| 1.4895 | 1.5099 | 534 | 1.3775 |
| 1.4599 | 1.7624 | 623 | 1.3731 |
| 1.4754 | 2.0103 | 712 | 1.3705 |
| 1.4841 | 2.2619 | 801 | 1.3696 |
| 1.4861 | 2.5136 | 890 | 1.3689 |
| 1.5258 | 2.7653 | 979 | 1.3685 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.20.3