---
library_name: transformers
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-14B/blob/main/LICENSE
base_model: Qwen/Qwen2.5-14B
tags:
- generated_from_trainer
model-index:
- name: 14B-Qwen2.5-Freya-x1
  results: []
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---

![Freya](https://huggingface.co/Sao10K/14B-Qwen2.5-Freya-x1/resolve/main/sad.png)
*Me during failed runs*

# 14B-Qwen2.5-Freya-v1

I decided to mess around with training methods again, given the re-emergence of methods like multi-step training. Some people began doing it again, so why not? It's inspired by AshhLimaRP's methodology, but done my way.

Freya-S1
- LoRA trained on ~1.1GB of literature and raw text over Qwen 2.5's base model.
- Cleaned the text and literature as best I could; there may still be issues here and there.

Freya-S2
- The first LoRA was applied over Qwen 2.5 Instruct, then I trained on top of that.
- Reduced the LoRA rank because it's mainly instruct, plus other details I won't get into.

Recommended Model Settings | *Look, I just use these, they work fine enough. I don't even know how DRY or other meme samplers work. Your system prompt matters more anyway.*
```
Prompt Format: ChatML
Temperature: 1+ # I don't know, man.
min_p: 0.05
```
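
For reference, a rough sketch of what those settings look like in code. The ChatML turn markers are standard for Qwen-family models; the system prompt text here is purely illustrative, and the `sampling` dict uses keys that `transformers`' `GenerationConfig` accepts, so it can be passed to `model.generate(**sampling, ...)`:

```python
def chatml_prompt(system: str, user: str) -> str:
    # ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers,
    # ending with an open assistant turn for the model to complete.
    return (f"<|im_start|>system\n{system}<|im_end|>\n"
            f"<|im_start|>user\n{user}<|im_end|>\n"
            f"<|im_start|>assistant\n")

# The recommended sampler settings from above.
sampling = {"do_sample": True, "temperature": 1.0, "min_p": 0.05}

prompt = chatml_prompt("You are Freya, a creative writing assistant.", "Hi there.")
```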

Training time in total was ~10 hours on an 8xH100 node, sponsored by the Government of Singapore or something. Thanks for the national service allowance, MHA.

https://sao10k.carrd.co/ for contact.

---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model:
  - s1: Qwen/Qwen2.5-14B
  - s2: Qwen/Qwen2.5-14B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false
sequence_len: 16384
bf16: auto
fp16:
tf32: false
flash_attention: true
special_tokens:

adapter: lora # 16-bit
lora_r:
  - s1: 64
  - s2: 32
lora_alpha: 64
lora_dropout: 0.2
lora_fan_in_fan_out:
peft_use_rslora: true
lora_target_linear: true

# Data
dataset_prepared_path: dataset_run_freya
datasets:
  # S1 - Writing / Completion
  - path: datasets/eBooks-cleaned-75K
    type: completion
  - path: datasets/novels-clean-dedupe-10K
    type: completion
  # S2 - Instruct
  - path: datasets/10k-amoral-full-fixed-sys.json
    type: chat_template
    chat_template: chatml
    roles_to_train: ["gpt"]
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    train_on_eos: turn
  - path: datasets/44k-hespera-smartshuffle.json
    type: chat_template
    chat_template: chatml
    roles_to_train: ["gpt"]
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    train_on_eos: turn
  - path: datasets/5k_rpg_adventure_instruct-sys.json
    type: chat_template
    chat_template: chatml
    roles_to_train: ["gpt"]
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    train_on_eos: turn
shuffle_merged_datasets: true
warmup_ratio: 0.1

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

# Iterations
num_epochs:
  - s1: 1
  - s2: 2

# Sampling
sample_packing: true
pad_to_sequence_len: true
train_on_inputs: false
group_by_length: false

# Batching
gradient_accumulation_steps: 4
micro_batch_size: 2
gradient_checkpointing: unsloth

# Evaluation
val_set_size: 0.025
evals_per_epoch: 5
eval_table_size:
eval_max_new_tokens: 256
eval_sample_packing: false
eval_batch_size: 1

# Optimizer
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate:
  - s1: 0.000002
  - s2: 0.000004
weight_decay: 0.2
max_grad_norm: 10.0

# Garbage Collection
gc_steps: 10

# Misc
deepspeed: ./deepspeed_configs/zero2.json

```

</details><br>
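
Note that the `- s1:` / `- s2:` entries in the config above annotate the two stages in a single file; axolotl itself takes one value per key, so an actual run would split this into two configs. A hedged sketch of what such a launch might look like — the config file names are hypothetical, not files from this repo:

```shell
# Stage 1: completion data over the base model (r=64)
accelerate launch -m axolotl.cli.train freya-s1.yml

# Fold the stage-1 adapter into the weights before stage 2
python -m axolotl.cli.merge_lora freya-s1.yml

# Stage 2: instruct data over the merged weights, reduced rank (r=32)
accelerate launch -m axolotl.cli.train freya-s2.yml
```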