---
library_name: peft
license: apache-2.0
base_model: internlm/internlm3-8b-instruct
tags:
- axolotl
- generated_from_trainer
datasets:
- ToastyPigeon/some-rp
- BeaverAI/cedo-unalignment
- BeaverAI/foundRP
- PocketDoc/Dans-Prosemaxx-Gutenberg
- ToastyPigeon/SpringDragon-Instruct
- allenai/tulu-3-sft-personas-instruction-following
- allura-org/fujin-cleaned-stage-2
model-index:
- name: intern-rp-lora
  results: []
---


[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
# git clone https://github.com/axolotl-ai-cloud/axolotl
# cd axolotl
# git checkout bd2a594b8954103719f8d1ef739e2c3267ca36f6
# pip3 install packaging ninja huggingface_hub[cli]
# pip3 install -e '.[flash-attn,deepspeed]'
# huggingface-cli login --token $hf_key && wandb login $wandb_key
# python -m axolotl.cli.preprocess intern-rp-test-human.yml
# accelerate launch -m axolotl.cli.train intern-rp-test-human.yml
# python -m axolotl.cli.merge_lora intern-rp-test-human.yml
# huggingface-cli upload ToastyPigeon/tqi-some-rp-40 train-workspace/merged . --exclude "*.md"
# sleep 10h; runpodctl stop pod $RUNPOD_POD_ID &

# git clone https://github.com/axolotl-ai-cloud/axolotl && cd axolotl && pip3 install packaging ninja huggingface_hub[cli] && pip3 install -e '.[flash-attn,deepspeed]' && cd .. && huggingface-cli login --token $hf_key && wandb login $wandb_key

# Model
base_model: internlm/internlm3-8b-instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true

load_in_8bit: false
load_in_4bit: true
strict: false
bf16: true
fp16:
tf32: false
flash_attention: true
special_tokens:

# Output
output_dir: ./train-workspace
hub_model_id: ToastyPigeon/intern-rp-lora
hub_strategy: "all_checkpoints"
auto_resume_from_checkpoint: true
#resume_from_checkpoint: ./train-workspace/checkpoint-304
saves_per_epoch: 2
save_total_limit: 4

# Data
sequence_len: 8192 # fits
min_sample_len: 128
chat_template: chatml
dataset_prepared_path: last_run_prepared
datasets:
  - path: ToastyPigeon/some-rp
    type: chat_template
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    #train_on_inputs: true
  - path: BeaverAI/cedo-unalignment
    type: chat_template
    field_messages: conversations
    message_field_role: from
    message_field_content: value
  - path: BeaverAI/foundRP
    type: chat_template
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    split: train[:1000]
  - path: PocketDoc/Dans-Prosemaxx-Gutenberg
    type: chat_template
    field_messages: conversations
    message_field_role: from
    message_field_content: value
  - path: ToastyPigeon/SpringDragon-Instruct
    type: chat_template
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    split: train[:500]
  - path: allenai/tulu-3-sft-personas-instruction-following
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
    split: train[:500]
  - path: allura-org/fujin-cleaned-stage-2
    type: completion
    field: text
    split: train[:500]
warmup_steps: 20
shuffle_merged_datasets: true
sample_packing: true
pad_to_sequence_len: true

# Batching
num_epochs: 2
gradient_accumulation_steps: 1
micro_batch_size: 1
eval_batch_size: 1

# Evaluation
val_set_size: 100
evals_per_epoch: 10
eval_table_size:
eval_max_new_tokens: 256
eval_sample_packing: false

save_safetensors: true

# WandB
wandb_project: Intern-Rp-Test
#wandb_entity:

gradient_checkpointing: 'unsloth'
gradient_checkpointing_kwargs:
  use_reentrant: false

unsloth_cross_entropy_loss: true
#unsloth_lora_mlp: true
#unsloth_lora_qkv: true
#unsloth_lora_o: true

# LoRA
adapter: qlora
lora_r: 32
lora_alpha: 64
lora_dropout: 0.25
lora_target_linear: true
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj
lora_modules_to_save:
#peft_use_rslora: true
#loraplus_lr_ratio: 8

# Optimizer
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 3e-5
cosine_min_lr_ratio: 0.1
weight_decay: 0.01
max_grad_norm: 1.0

# Misc
train_on_inputs: false
group_by_length: false
early_stopping_patience:
local_rank:
logging_steps: 1
xformers_attention:
#debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json # previously blank
fsdp:
fsdp_config:

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

gc_steps: 10
seed: 69
```

</details><br>
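For reference, the `lora_*` settings above correspond roughly to the following PEFT configuration. This is a sketch of what axolotl assembles internally, not an excerpt from its source:

```python
from peft import LoraConfig

# Rough PEFT equivalent of the lora_* block in the config above.
lora_config = LoraConfig(
    r=32,              # lora_r
    lora_alpha=64,     # lora_alpha
    lora_dropout=0.25, # lora_dropout
    target_modules=[
        "gate_proj", "down_proj", "up_proj",
        "q_proj", "v_proj", "k_proj", "o_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```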

# intern-rp-lora

This model is a rank-32 QLoRA adapter for [internlm/internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct), fine-tuned on the following datasets:

- ToastyPigeon/some-rp
- BeaverAI/cedo-unalignment
- BeaverAI/foundRP
- PocketDoc/Dans-Prosemaxx-Gutenberg
- ToastyPigeon/SpringDragon-Instruct
- allenai/tulu-3-sft-personas-instruction-following
- allura-org/fujin-cleaned-stage-2

It achieves the following results on the evaluation set:
- Loss: 1.7197
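A minimal loading-and-generation sketch (assumes a recent 🤗 Transformers + PEFT stack; the repo ID and template handling below follow the config above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "internlm/internlm3-8b-instruct"
adapter_id = "ToastyPigeon/intern-rp-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",
    trust_remote_code=True,      # InternLM3 ships custom modeling code
)
model = PeftModel.from_pretrained(model, adapter_id)

# The adapter was trained with the ChatML template (chat_template: chatml),
# which may differ from the base tokenizer's default template.
messages = [{"role": "user", "content": "Describe a rainy harbor town at dusk."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```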

## Model description

A rank-32 QLoRA adapter (alpha 64, dropout 0.25) targeting all linear attention and MLP projections of InternLM3-8B-Instruct, trained for 2 epochs at a sequence length of 8192 with sample packing. The data mix leans heavily toward roleplay and creative prose, with smaller instruction-following and raw-completion components.

## Intended uses & limitations

Based on the training mix, this adapter is aimed at roleplay and creative-writing chat using the ChatML prompt format. It has only been evaluated by validation loss (reported below); the usual limitations of the 8B base model apply.

## Training and evaluation data

The training mix combines seven datasets (exact field mappings are in the axolotl config above): ToastyPigeon/some-rp, BeaverAI/cedo-unalignment, and PocketDoc/Dans-Prosemaxx-Gutenberg in full; the first 1,000 rows of BeaverAI/foundRP; and the first 500 rows each of ToastyPigeon/SpringDragon-Instruct, allenai/tulu-3-sft-personas-instruction-following, and allura-org/fujin-cleaned-stage-2. Evaluation used a held-out split of 100 samples (`val_set_size: 100`). Most of the chat datasets are stored in ShareGPT-style records, mapped as in the sketch below.
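A hypothetical record in that layout (the field names `conversations`, `from`, and `value` come from the config; the content is illustrative only):

```python
# A hypothetical ShareGPT-style record matching the config's field mapping
# (field_messages: conversations, message_field_role: from,
#  message_field_content: value). Illustrative, not an actual dataset row.
record = {
    "conversations": [
        {"from": "system", "value": "You are a creative roleplay partner."},
        {"from": "human",  "value": "The tavern door creaks open..."},
        {"from": "gpt",    "value": "A cloaked figure steps in from the rain."},
    ]
}
```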

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 69
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: paged_ademamix_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 2
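
The learning-rate curve implied by `warmup_steps: 20`, `lr_scheduler: cosine`, and `cosine_min_lr_ratio: 0.1` can be sketched as below (an illustration of the intended shape; the exact implementation in axolotl/transformers may differ in small details):

```python
import math

PEAK_LR = 3e-5
WARMUP_STEPS = 20
MIN_LR = 0.1 * PEAK_LR  # cosine_min_lr_ratio: 0.1

def lr_at(step: int, total_steps: int) -> float:
    """Linear warmup to PEAK_LR, then cosine decay down to MIN_LR."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / max(1, total_steps - WARMUP_STEPS)
    return MIN_LR + (PEAK_LR - MIN_LR) * 0.5 * (1 + math.cos(math.pi * progress))

# With ~1500 optimizer steps total (see the results table below):
print(lr_at(0, 1500), lr_at(20, 1500), lr_at(1500, 1500))
```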

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2794        | 0.0013 | 1    | 1.8317          |
| 1.6416        | 0.1    | 75   | 1.7826          |
| 2.3547        | 0.2    | 150  | 1.7643          |
| 1.9114        | 0.3    | 225  | 1.7546          |
| 2.0004        | 0.4    | 300  | 1.7474          |
| 2.2052        | 0.5    | 375  | 1.7428          |
| 1.9314        | 0.6    | 450  | 1.7377          |
| 2.202         | 0.7    | 525  | 1.7350          |
| 2.2453        | 0.8    | 600  | 1.7303          |
| 1.8392        | 0.9    | 675  | 1.7283          |
| 1.7018        | 1.0    | 750  | 1.7271          |
| 1.9736        | 1.0987 | 825  | 1.7264          |
| 2.0917        | 1.1987 | 900  | 1.7245          |
| 1.5679        | 1.2987 | 975  | 1.7239          |
| 2.0799        | 1.3987 | 1050 | 1.7225          |
| 1.8398        | 1.4987 | 1125 | 1.7220          |
| 1.9806        | 1.5987 | 1200 | 1.7211          |
| 1.7334        | 1.6987 | 1275 | 1.7209          |
| 2.1457        | 1.7987 | 1350 | 1.7205          |
| 1.7804        | 1.8987 | 1425 | 1.7202          |
| 2.1652        | 1.9987 | 1500 | 1.7197          |


### Framework versions

- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0