---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
datasets:
- axolotl-ai-co/numina-cot-logprobs-859k-8b-sft
model-index:
- name: numina-3b-ep3-lr3e-5-kd
  results: []
---
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
base_model: meta-llama/Llama-3.2-3B
# Automatically upload checkpoint and final model to HF
hub_model_id: axolotl-ai-co/numina-3b-ep3-lr3e-5-kd
plugins:
- axolotl.integrations.kd.KDPlugin
- axolotl.integrations.liger.LigerPlugin
liger_rms_norm: true
liger_glu_activation: true
torch_compile: true
strict: false
chat_template: llama3
kd_trainer: true
kd_ce_alpha: 0.4
kd_alpha: 1.0
kd_temperature: 2.0
dataloader_prefetch_factor: 512
dataloader_num_workers: 8
dataloader_pin_memory: true
datasets:
  - field_messages: messages
    message_field_content: content
    message_field_role: role
    logprobs_field: llm_logprobs_logprobs
    path: axolotl-ai-co/numina-cot-logprobs-859k-8b-sft
    type: axolotl.integrations.kd.chat_template
    split: train
    temperature: 1.0
# preprocess_shards: 10
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./outputs/out
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
wandb_project: numina-kd-experiment
wandb_entity: axolotl-ai
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 3
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 3e-5
save_safetensors: true
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 2
eval_table_size:
saves_per_epoch: 1
debug:
# deepspeed: deepspeed_configs/zero1.json
weight_decay: 0.0
_fsdp:
  - full_shard
  - auto_wrap
_fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: true
  fsdp_use_orig_params: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_backward_prefetch: BACKWARD_PRE
special_tokens:
  pad_token: <|finetune_right_pad_id|>
  eos_token: <|eot_id|>
```
</details><br>
# numina-3b-ep3-lr3e-5-kd
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the axolotl-ai-co/numina-cot-logprobs-859k-8b-sft dataset.
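A minimal sketch of loading the published checkpoint with `transformers`. The hub id comes from `hub_model_id` in the config above, and the snippet assumes the saved tokenizer carries the llama3 chat template set during training:

```python
# Sketch: load the distilled checkpoint and run a chat-format prompt.
# The model id follows hub_model_id in the config above; adjust if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "axolotl-ai-co/numina-3b-ep3-lr3e-5-kd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```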
## Model description
numina-3b-ep3-lr3e-5-kd is a knowledge-distilled fine-tune of Llama-3.2-3B. Training used Axolotl's KD plugin (`axolotl.integrations.kd.KDPlugin`): in addition to the usual cross-entropy loss on the target tokens, the student is trained to match per-token log-probabilities stored in the dataset from a larger teacher model (an 8B model, per the dataset name), with `kd_ce_alpha: 0.4`, `kd_alpha: 1.0`, and `kd_temperature: 2.0` as shown in the config above. A sketch of this kind of combined loss follows.
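The sketch below illustrates the general distillation recipe these settings imply (cross-entropy plus a temperature-scaled KL term against the teacher distribution); it is not Axolotl's exact implementation, and prompt-token masking of the KD term is omitted for brevity:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logprobs, labels,
            kd_ce_alpha=0.4, kd_alpha=1.0, temperature=2.0):
    """Illustrative combination of cross-entropy and distillation terms.

    student_logits:   (batch, seq, vocab) raw logits from the 3B student
    teacher_logprobs: (batch, seq, vocab) log-probabilities stored in the dataset
    labels:           (batch, seq) target token ids (-100 = ignore)
    """
    # Standard next-token cross-entropy against the ground-truth tokens.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    # Temperature-scaled KL divergence between student and teacher distributions.
    # Re-softmaxing teacher_logprobs / T gives the temperature-scaled teacher
    # distribution because softmax is shift-invariant.
    student_logp = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_p = F.softmax(teacher_logprobs / temperature, dim=-1)
    kd = F.kl_div(student_logp, teacher_p, reduction="batchmean") * temperature ** 2
    # Weighted sum, following kd_ce_alpha and kd_alpha from the config above.
    return kd_ce_alpha * ce + kd_alpha * kd
```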
## Intended uses & limitations
The model was trained on math chain-of-thought data and is primarily intended as an experiment in logprob-based knowledge distillation with Axolotl. No evaluation results are reported in this card (`val_set_size: 0.0`), so its downstream quality and limitations have not been characterized; the Llama 3.2 license and usage restrictions of the base model apply.
## Training and evaluation data
Training used the [axolotl-ai-co/numina-cot-logprobs-859k-8b-sft](https://huggingface.co/datasets/axolotl-ai-co/numina-cot-logprobs-859k-8b-sft) dataset: chat-formatted NuminaMath chain-of-thought examples (roughly 859k, per the dataset name), each carrying per-token teacher log-probabilities in the `llm_logprobs_logprobs` field. No validation split was held out (`val_set_size: 0.0`).
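A quick way to inspect the data (a sketch; the field names follow the dataset settings in the config above):

```python
from datasets import load_dataset

# Stream a single example rather than downloading the full split.
ds = load_dataset(
    "axolotl-ai-co/numina-cot-logprobs-859k-8b-sft", split="train", streaming=True
)
example = next(iter(ds))

# "messages" holds the chat turns; "llm_logprobs_logprobs" holds the teacher
# log-probabilities used as the distillation target.
for turn in example["messages"]:
    print(turn["role"], "->", turn["content"][:80])
print("has teacher logprobs:", "llm_logprobs_logprobs" in example)
```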
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
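
The total train batch size follows from micro_batch_size × num_devices × gradient_accumulation_steps = 4 × 8 × 1 = 32.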
### Training results
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0