---
library_name: peft
license: llama3.2
base_model: minpeter/Llama-3.2-1B-AlternateTokenizer-chatml
tags:
- generated_from_trainer
datasets:
- teknium/OpenHermes-2.5
- func-calling-singleturn.jsonl
model-index:
- name: output-test
results: []
---
[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)

See the axolotl config below (axolotl version `0.6.0`):
```yaml
base_model: minpeter/Llama-3.2-1B-AlternateTokenizer-chatml
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: teknium/OpenHermes-2.5
type: chat_template
chat_template: chatml
field_messages: conversations
message_field_role: from
message_field_content: value
shards: 800
- path: func-calling-singleturn.jsonl
type: chat_template
chat_template: chatml
field_messages: conversations
message_field_role: from
message_field_content: value
shards: 2
save_safetensors: true
auto_resume_from_checkpoints: false
save_steps: 200
chat_template: chatml
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./output
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project: "axolotl"
wandb_entity: "kasfiekfs-e"
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: <|begin_of_text|>
eos_token: <|im_end|>
pad_token: <|end_of_text|>
# <--- unsloth config --->
unsloth_lora_mlp: true
unsloth_lora_qkv: true
unsloth_lora_o: true
```
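The config trains and evaluates with the ChatML template and ChatML-style special tokens (`<|im_end|>` as EOS). As a minimal sketch of how a conversation is rendered at inference time, assuming the tokenizer published with this card carries the same chat template:

```python
from transformers import AutoTokenizer

# Assumption: the base tokenizer ships the ChatML template used during training.
tokenizer = AutoTokenizer.from_pretrained("minpeter/Llama-3.2-1B-AlternateTokenizer-chatml")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Render the conversation as ChatML (<|im_start|>role ... <|im_end|>) and
# append the generation prompt for the assistant turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```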
# output-test
This model is a QLoRA (PEFT) fine-tuned version of [minpeter/Llama-3.2-1B-AlternateTokenizer-chatml](https://huggingface.co/minpeter/Llama-3.2-1B-AlternateTokenizer-chatml), trained on the teknium/OpenHermes-2.5 and func-calling-singleturn.jsonl datasets.
It achieves the following results on the evaluation set:
- Loss: 0.7811
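
Since this is a PEFT adapter rather than a full model, inference requires loading the base model and attaching the adapter. A minimal sketch, assuming the adapter weights are published alongside this card (the `ADAPTER` repo id below is a placeholder; adjust it to wherever the weights actually live):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "minpeter/Llama-3.2-1B-AlternateTokenizer-chatml"
ADAPTER = "minpeter/output-test"  # placeholder repo id for this adapter's weights

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the QLoRA adapter trained above

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=128, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```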
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `OptimizerNames.ADAMW_BNB`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2.0
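
The total train batch size of 8 is micro_batch_size 2 × gradient_accumulation_steps 4, which implies a single training device. For reference, the 8-bit AdamW above corresponds to the bitsandbytes optimizer; a minimal standalone sketch with the same settings (the toy module only stands in for the PEFT-wrapped model):

```python
import torch
import bitsandbytes as bnb

# Toy module standing in for the PEFT-wrapped model being trained.
model = torch.nn.Linear(16, 16)

# 8-bit AdamW as configured above: lr=2e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0.
optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=2e-4,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,
)
```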
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.022 | 0.0144 | 1 | 1.2278 |
| 0.8218 | 0.2590 | 18 | 0.9024 |
| 1.0332 | 0.5180 | 36 | 0.8385 |
| 0.7912 | 0.7770 | 54 | 0.8068 |
| 0.811 | 1.0288 | 72 | 0.7930 |
| 1.0611 | 1.2878 | 90 | 0.7872 |
| 0.7405 | 1.5468 | 108 | 0.7831 |
| 0.8284 | 1.8058 | 126 | 0.7811 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0