Delta-Vector committed on
Commit 0f15743 · verified · 1 Parent(s): d89a1f3

Update README.md

Files changed (1):
  1. README.md +245 -20
README.md CHANGED
@@ -1,39 +1,264 @@
  ---
- base_model: []
- library_name: transformers
  tags:
- - mergekit
- - merge
-
  ---

  ### exl2 quant (measurement.json in main branch)
  ---
  ### check revisions for quants
- ---

- # control-nemo-v2

- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
- ### Merge Method

- This model was merged using the passthrough merge method using /home/mango/Misc/MergeLora/model + /home/mango/Misc/MergeLora/12b-control-lora as a base.

- ### Models Merged

- The following models were included in the merge:

- ### Configuration

- The following YAML configuration was used to produce this model:

  ```yaml
- base_model: /home/mango/Misc/MergeLora/model+/home/mango/Misc/MergeLora/12b-control-lora
- dtype: bfloat16
- merge_method: passthrough
- models:
- - model: /home/mango/Misc/MergeLora/model+/home/mango/Misc/MergeLora/12b-control-lora
  ```
  ---
  tags:
+ - chat
+ datasets:
+ - NewEden/OpenCAI-ShareGPT
+ - NewEden/vanilla-backrooms-claude-sharegpt
+ - anthracite-org/kalo_opus_misc_240827
+ - anthracite-org/kalo_misc_part2
+ - NewEden/Roleplay-Logs-V2
+ language:
+ - en
+ pipeline_tag: text-generation
+ base_model: mistralai/Mistral-Nemo-Instruct-2407
  ---
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/7F2mX4Qqzmp8b0KG5i2DM.png)
+
  ### exl2 quant (measurement.json in main branch)
  ---
  ### check revisions for quants

+ A finetune of Mistral-Nemo-Instruct-2407 on conversational data, aiming for prose best described as 'short' and 'sweet'. The model adheres strictly to one-on-one roleplay and is very dialogue-heavy.
+
+ # Quants
+
+ GGUF: [Placeholder till Mradermacher quants.]
+
+ EXL2: https://huggingface.co/Delta-Vector/Ohashi-NeMo-12B-EXL2
+
+ ## Prompting
+ The model has been tuned with Mistral formatting. A typical input looks like this:
+
+ ```
+ <s>[INST] SYSTEM MESSAGE
+ USER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST]
+ ```
+
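The template above can be assembled programmatically. A minimal sketch, assuming plain string concatenation (the `build_prompt` helper is hypothetical, not part of the model's tooling; frontends and `tokenizer.apply_chat_template` normally handle this for you):

```python
def build_prompt(system: str, turns: list[tuple[str, str]], pending_user: str) -> str:
    """Assemble a Mistral-Nemo style prompt string.

    turns: completed (user, assistant) pairs; pending_user: the message
    awaiting a reply. The system message is folded into the first [INST] block,
    matching the format shown above.
    """
    out = "<s>"
    first = True
    for user, assistant in turns:
        content = f"{system}\n{user}" if first and system else user
        out += f"[INST] {content}[/INST] {assistant}</s>"
        first = False
    content = f"{system}\n{pending_user}" if first and system else pending_user
    out += f"[INST] {content}[/INST]"
    return out
```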
+ ## System Prompting
+
+ I would highly recommend using either Euryale's system prompt or the EVA system prompt with the model.
+
+ <details><summary>See Sao10k's Euryale System Prompt</summary>
+
+ ```
+ Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
+ <Guidelines>
+ • Maintain the character persona but allow it to evolve with the story.
+ • Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
+ • All types of outputs are encouraged; respond accordingly to the narrative.
+ • Include dialogues, actions, and thoughts in each response.
+ • Utilize all five senses to describe scenarios within {{char}}'s dialogue.
+ • Use emotional symbols such as "!" and "~" in appropriate contexts.
+ • Incorporate onomatopoeia when suitable.
+ • Allow time for {{user}} to respond with their own input, respecting their agency.
+ • Act as secondary characters and NPCs as needed, and remove them when appropriate.
+ • When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
+ </Guidelines>
+
+ <Forbidden>
+ • Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
+ • Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
+ • Repetitive and monotonous outputs.
+ • Positivity bias in your replies.
+ • Being overly extreme or NSFW when the narrative context is inappropriate.
+ </Forbidden>
+
+ Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
+ ```
+ </details><br>
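The `{{char}}` and `{{user}}` macros in the prompt above are normally substituted by the frontend before the text reaches the model. A one-line sketch of that substitution (the `fill_placeholders` name is hypothetical):

```python
def fill_placeholders(prompt: str, char: str, user: str) -> str:
    """Substitute the {{char}}/{{user}} macros used in the system prompts above.
    Roleplay frontends such as SillyTavern do this automatically."""
    return prompt.replace("{{char}}", char).replace("{{user}}", user)
```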
+
+ <details><summary>See EVA System Prompt</summary>
+
+ ```
+ A fictional, narrative-driven role-play emphasizing versatility and UNCENSORED content. Adherence to the `Role-playing Guidelines` is mandatory. Refer to the `Role-play Context` for accurate information.
+
+ <!-- Start of Role-playing Guidelines -->
+
+ ### Narration
+ Concise Descriptions: Keep narration short and to the point, avoiding redundant and unnecessary details. Use a dynamic and varied vocabulary for impact.
+ Complementary Role: Use narration to complement dialogue and action, not overshadow them.
+ Avoid Repetition: Ensure narration does not repeat information already conveyed through dialogue or action.
+
+ ### Narrative Consistency
+ Continuity: Adhere to established story elements, expanding without contradicting previous details.
+ Integration: Introduce new elements naturally, providing enough context to fit seamlessly into the existing narrative.
+
+ ### Character Embodiment
+ Analysis: Examine the context, subtext, and implications of the given information to gain a deeper understanding of the characters.
+ Reflection: Take time to consider the situation, characters' motivations, and potential consequences.
+ Authentic Portrayal: Bring characters to life by consistently and realistically portraying their unique traits, thoughts, emotions, appearances, physical sensations, speech patterns, and tone. Ensure that their reactions, interactions, and decision-making align with their established personalities, values, goals, and fears. Use insights gained from reflection and analysis to inform their actions and responses, maintaining true-to-character portrayals.
+
+ <!-- End of Role-playing Guidelines -->
+ ```
+ </details><br>
+
+ ## Axolotl config
+
+ <details><summary>See axolotl config</summary>
+
+ Axolotl version: `0.5.0`
  ```yaml
+ base_model: mistralai_Mistral-Nemo-Instruct-2407
+ model_type: AutoModelForCausalLM
+ tokenizer_type: AutoTokenizer
+
+ plugins:
+   - axolotl.integrations.liger.LigerPlugin
+ liger_rope: true
+ liger_rms_norm: true
+ liger_swiglu: true
+ liger_fused_linear_cross_entropy: true
+
+ load_in_8bit: false
+ load_in_4bit: false
+ strict: false
+
+ datasets:
+   - path: NewEden/OpenCAI-ShareGPT
+     type: chat_template
+     # chat_template: mistralv3tekken
+     roles_to_train: ["gpt"]
+     field_messages: conversations
+     message_field_role: from
+     message_field_content: value
+     train_on_eos: turn
+   - path: NewEden/vanilla-backrooms-claude-sharegpt
+     type: chat_template
+     # chat_template: mistralv3tekken
+     roles_to_train: ["gpt"]
+     field_messages: conversations
+     message_field_role: from
+     message_field_content: value
+     train_on_eos: turn
+   - path: anthracite-org/kalo_opus_misc_240827
+     type: chat_template
+     # chat_template: mistralv3tekken
+     roles_to_train: ["gpt"]
+     field_messages: conversations
+     message_field_role: from
+     message_field_content: value
+     train_on_eos: turn
+   - path: anthracite-org/kalo_misc_part2
+     type: chat_template
+     # chat_template: mistralv3tekken
+     roles_to_train: ["gpt"]
+     field_messages: conversations
+     message_field_role: from
+     message_field_content: value
+     train_on_eos: turn
+   - path: NewEden/Roleplay-Logs-V2
+     type: chat_template
+     # chat_template: mistralv3tekken
+     roles_to_train: ["gpt"]
+     field_messages: conversations
+     message_field_role: from
+     message_field_content: value
+     train_on_eos: turn
+ dataset_prepared_path: dataset_prepared
+ val_set_size: 0.0
+ output_dir: 12b-out-r2
+
+ sequence_len: 16384
+ sample_packing: true
+ pad_to_sequence_len: true
+
+ adapter: lora
+ lora_model_dir:
+ lora_r: 128
+ lora_alpha: 16
+ lora_dropout: 0.05
+ #lora_target_linear:
+ #lora_fan_in_fan_out: true
+ peft_use_rslora: true
+ lora_target_modules:
+   - gate_proj
+   - down_proj
+   - up_proj
+   - q_proj
+   - v_proj
+   - k_proj
+   - o_proj
+
+ wandb_project: 12b-control
+ wandb_entity:
+ wandb_watch:
+ wandb_name: 12b-control-r2
+ wandb_log_model:
+
+ gradient_accumulation_steps: 2
+ micro_batch_size: 1
+ num_epochs: 4
+ optimizer: paged_adamw_8bit
+ lr_scheduler: cosine
+ learning_rate: 0.00001
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: auto
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: unsloth
+ #gradient_checkpointing_kwargs:
+ #   use_reentrant: false
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+
+ warmup_steps: 40
+ evals_per_epoch:
+ eval_table_size:
+ eval_max_new_tokens:
+ saves_per_epoch: 1
+ debug:
+ deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
+ weight_decay: 0.03
+ fsdp:
+ fsdp_config:
+ special_tokens:
+   pad_token: <pad>
  ```
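The five datasets in the config are ShareGPT-style: the `field_messages`, `message_field_role`, and `message_field_content` keys declare that each record holds its turns in `conversations`, with the speaker under `from` and the text under `value`. A minimal sketch of that mapping (the converter function is hypothetical; axolotl's `chat_template` loader performs this step internally):

```python
# Speaker tags used in ShareGPT-style data, mapped to chat-template roles.
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def sharegpt_to_messages(record: dict) -> list[dict]:
    """Convert one ShareGPT-style record ({'conversations': [{'from', 'value'}, ...]})
    into role/content messages, per the config's message_field_* settings."""
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in record["conversations"]
    ]
```

With `roles_to_train: ["gpt"]`, only the assistant turns produced by this mapping contribute to the loss.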
+
+ </details><br>
+
+ ## Credits
+
+ Thank you to [Lucy Knada](https://huggingface.co/lucyknada), [Intervitens](https://huggingface.co/intervitens), [Tav](https://huggingface.co/tavtav), [Trappu](https://huggingface.co/Trappu), [Cgato](https://huggingface.co/cgato), [Kubernetes Bad](https://huggingface.co/kubernetes-bad), and the rest of [Anthracite](https://huggingface.co/anthracite-org).
+
+ ## Training
+ The model was trained for 4 epochs on 4 x [RTX 3090](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs, graciously provided by [Intervitens](https://huggingface.co/intervitens).
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+ ## Safety
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/cgeub1ZibfEwh8-FvCbOY.png)