---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.2-3B-rowiki
results: []
---
[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
See axolotl config
axolotl version: `0.5.0`
```yaml
base_model: meta-llama/Llama-3.2-3B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: chrisgru/ro_wiki_chatml_small
type: chat_template
chat_template: llama3
field_messages: conversations
message_field_role: from
message_field_content: value
dataset_prepared_path: /workspace/data/ds_preprocess
val_set_size: 0.01
output_dir: ./data/outputs
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
#adapter: lora
##lora_model_dir:
#lora_r: 64
#lora_alpha: 16
#lora_dropout: 0.05
#lora_target_linear: true
#lora_fan_in_fan_out:
#lora_modules_to_save:
# - embed_tokens
# - lm_head
wandb_project: wiki-llm
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 5e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 20
evals_per_epoch: 10
eval_table_size:
saves_per_epoch: 1
#eval_max_new_tokens: 128
save_total_limit: 2
debug:
#deepspeed:
weight_decay: 0.0
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: true
# fsdp_offload_params: true
# fsdp_use_orig_params: false
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
# fsdp_state_dict_type: FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_backward_prefetch: BACKWARD_PRE
seed: 1234
hf_use_auth_token: true
hub_strategy: end
hub_model_id: chrisgru/llama-3.2-3B-rowiki
special_tokens:
bos_token: "<|begin_of_text|>"
pad_token: "<|finetune_right_pad_id|>"
```
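As a note on the dataset settings above: the sketch below (a minimal illustration, not Axolotl's actual code) shows how `field_messages`, `message_field_role`, and `message_field_content` map a ShareGPT-style record onto role/content chat turns before the `llama3` chat template is rendered. The example record is invented for illustration.

```python
# Minimal sketch of what the dataset settings above configure: Axolotl reads
# each record's "conversations" list (field_messages) and treats "from" /
# "value" (message_field_role / message_field_content) as the role and
# content of each turn before rendering the built-in llama3 chat template.
# The record below is made up, not taken from the dataset.
record = {
    "conversations": [
        {"from": "human", "value": "Ce este Wikipedia?"},
        {"from": "gpt", "value": "Wikipedia este o enciclopedie liberă, scrisă colaborativ."},
    ]
}

role_map = {"human": "user", "gpt": "assistant"}  # ShareGPT-style roles -> chat roles
messages = [
    {"role": role_map.get(turn["from"], turn["from"]), "content": turn["value"]}
    for turn in record["conversations"]
]
print(messages)
```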
# llama-3.2-3B-rowiki
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the [chrisgru/ro_wiki_chatml_small](https://huggingface.co/datasets/chrisgru/ro_wiki_chatml_small) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5161
## Model description
`llama-3.2-3B-rowiki` is [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) fine-tuned with Axolotl on [chrisgru/ro_wiki_chatml_small](https://huggingface.co/datasets/chrisgru/ro_wiki_chatml_small), a Romanian Wikipedia dataset in chat format, rendered with the Llama 3 chat template (see the Axolotl config above). A minimal inference sketch follows.
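The snippet below is a minimal inference sketch, not part of the training run. It assumes the published checkpoint is accessible and that the exported tokenizer carries the Llama 3 chat template configured above; the prompt is illustrative only.

```python
# Minimal inference sketch. Assumes the checkpoint is accessible and that the
# exported tokenizer includes the Llama 3 chat template configured above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chrisgru/llama-3.2-3B-rowiki"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Ce este Wikipedia?"}]  # illustrative prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```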
## Intended uses & limitations
More information needed
## Training and evaluation data
Training used the [chrisgru/ro_wiki_chatml_small](https://huggingface.co/datasets/chrisgru/ro_wiki_chatml_small) dataset, with 1% of the examples held out for evaluation (`val_set_size: 0.01`) and samples packed into sequences of up to 8192 tokens. A sketch of the split is shown below.
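The following sketch shows how the 1% evaluation split could be reproduced with the `datasets` library, assuming the dataset is publicly available; Axolotl's internal split logic may differ in detail, so this mirrors the config (`val_set_size: 0.01`, `seed: 1234`) rather than replicating it exactly.

```python
# Minimal sketch of the train/eval split implied by the config above
# (val_set_size: 0.01, seed: 1234); not Axolotl's actual preprocessing code.
from datasets import load_dataset

ds = load_dataset("chrisgru/ro_wiki_chatml_small", split="train")
splits = ds.train_test_split(test_size=0.01, seed=1234)
train_ds, eval_ds = splits["train"], splits["test"]

print(len(train_ds), len(eval_ds))
# Each record carries a "conversations" list per the config's field_messages.
print(train_ds[0]["conversations"][0])
```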
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 1234
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: paged_adamw_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
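
The total train batch size of 4 follows from micro_batch_size (1) × gradient_accumulation_steps (4), assuming a single GPU; with more devices the effective batch size scales by the device count.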
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4683 | 0.0009 | 1 | 1.6826 |
| 1.7777 | 0.1001 | 117 | 1.6274 |
| 1.4701 | 0.2003 | 234 | 1.6031 |
| 1.6591 | 0.3004 | 351 | 1.5815 |
| 1.664 | 0.4006 | 468 | 1.5587 |
| 1.5308 | 0.5007 | 585 | 1.5404 |
| 1.3583 | 0.6009 | 702 | 1.5268 |
| 1.4297 | 0.7010 | 819 | 1.5198 |
| 1.7561 | 0.8012 | 936 | 1.5168 |
| 1.6656 | 0.9013 | 1053 | 1.5161 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.3