---
library_name: transformers
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- generated_from_trainer
model-index:
- name: 32B-Qwen2.5-Kunou-v1
  results: []
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---
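
If you haven't grabbed quants by revision before, here is a rough sketch with huggingface_hub: exl2 quants usually live on separate branches of the repo, one per bitrate. Both the repo id and the branch name below are placeholders, not names confirmed by this card; list the repo's branches to see what actually exists.

```python
# Rough sketch: list the available quant branches, then download one by
# revision. repo_id and "4.0bpw" are placeholders, not real names from
# this repo.
from huggingface_hub import list_repo_refs, snapshot_download

repo_id = "lucyknada/32B-Qwen2.5-Kunou-v1-exl2"  # placeholder repo id

# Each quant sits on its own branch; print them to see what's available.
for branch in list_repo_refs(repo_id).branches:
    print(branch.name)

path = snapshot_download(repo_id=repo_id, revision="4.0bpw")  # placeholder branch
print(path)
```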

![Kunou](https://huggingface.co/Sao10K/72B-Qwen2.5-Kunou-v1/resolve/main/knn.png)

**Sister Versions for Lightweight and Heavyweight Use!**

[72B-Kunou-v1](https://huggingface.co/Sao10K/72B-Qwen2.5-Kunou-v1)

[14B-Kunou-v1](https://huggingface.co/Sao10K/14B-Qwen2.5-Kunou-v1)

# 32B-Qwen2.5-Kunou-v1

*training delays and all...*

I do not really have anything planned for this model other than it being a generalist and roleplay model. It was just something made and planned in minutes.
<br>Same with the 14B and 72B versions.
<br>Kunou's the name of an OC I worked on for a couple of years, for a... fanfic. mmm...

A kind-of successor to L3-70B-Euryale-v2.2 in all but name? I'm keeping the Stheno/Euryale lineage to the Llama series for now.
<br>I had a version made on top of Nemotron, a supposed Euryale 2.4, but that flopped hard; it was not my cup of tea.
<br>This version is basically trained on a better, more cleaned-up version of the dataset used on Euryale and Stheno.

Recommended Model Settings | *Look, I just use these, they work fine enough. I don't even know how DRY or other meme samplers work. Your system prompt matters more anyway.*
```
Prompt Format: ChatML
Temperature: 1.1
min_p: 0.1
```
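
If you want to try these numbers outside a frontend, here is a minimal inference sketch using plain transformers. It assumes the upstream full-precision weights at Sao10K/32B-Qwen2.5-Kunou-v1 (the exl2 quants here would instead be loaded through exllamav2), a transformers release recent enough to support min_p, and enough GPU memory for a 32B model; the system prompt is a placeholder.

```python
# Minimal sketch, not an official snippet: apply the recommended samplers
# (temperature 1.1, min_p 0.1) with the ChatML prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/32B-Qwen2.5-Kunou-v1"  # upstream full-precision weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # bf16 on recent GPUs
    device_map="auto",   # requires accelerate; shards across available GPUs
)

# Qwen2.5's chat template is ChatML, so apply_chat_template produces the
# recommended prompt format. The system prompt is a placeholder; per the
# card, your system prompt matters more than the samplers.
messages = [
    {"role": "system", "content": "You are Kunou, an in-character roleplay partner."},
    {"role": "user", "content": "Introduce yourself."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.1,  # recommended above
    min_p=0.1,        # recommended above; needs a recent transformers release
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```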

Future-ish plans:
~~<br>\- Complete this model series.~~
<br>\- Further refine the datasets used, for quality; more secondary chats, more creative-related domains. (Inspired by Drummer)
<br>\- Work on my other incomplete projects. About half a dozen have been on the backburner for a while now.

Special thanks to my wallet for funding this, my juniors who share a single braincell between them, and my current national service.
<br>Stay safe. There have been more emergency calls, more incidents this holiday season.

Also sorry for the inactivity. Life was in the way. It still is, just less so, for now. Burnout is a thing, huh?

https://sao10k.carrd.co/ for contact.

---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.5.2`
```yaml
base_model: Qwen/Qwen2.5-32B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false
sequence_len: 16384
bf16: auto
fp16:
tf32: false
flash_attention: true

adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 64
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true

# Data
dataset_prepared_path: last_run_prepared
datasets:
  - path: datasets/amoral-full-sys-prompt.json # Unalignment Data - Cleaned Up from Original, Split to its own file
    type: customchatml
  - path: datasets/mimi-superfix-RP-filtered-fixed.json # RP / Creative-Instruct Data
    type: customchatml
  - path: datasets/hespera-smartshuffle.json # Hesperus-v2-Instruct Data
    type: customchatml
warmup_steps: 15

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

# Iterations
num_epochs: 1

# Batching
gradient_accumulation_steps: 4
micro_batch_size: 1
gradient_checkpointing: "unsloth"

# Optimizer
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 0.000004
weight_decay: 0.1
max_grad_norm: 25.0

# Misc
deepspeed: ./deepspeed_configs/zero3_bf16.json
```

</details><br>
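
For reference, a config like this is normally launched through axolotl's CLI, something like `accelerate launch -m axolotl.cli.train kunou-32b.yaml` (the filename here is a placeholder); that is the general axolotl usage pattern, not a command taken from this card.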