Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Llama-3-LewdPlay-8B-evo - GGUF
- Model creator: https://huggingface.co/TOPAI-Network/
- Original model: https://huggingface.co/TOPAI-Network/Llama-3-LewdPlay-8B-evo/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-LewdPlay-8B-evo.Q2_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3-LewdPlay-8B-evo.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3-LewdPlay-8B-evo.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ3_S.gguf) | IQ3_S | 2.99GB |
| [Llama-3-LewdPlay-8B-evo.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3-LewdPlay-8B-evo.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3-LewdPlay-8B-evo.Q3_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3-LewdPlay-8B-evo.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3-LewdPlay-8B-evo.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3-LewdPlay-8B-evo.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3-LewdPlay-8B-evo.Q4_0.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3-LewdPlay-8B-evo.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3-LewdPlay-8B-evo.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3-LewdPlay-8B-evo.Q4_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3-LewdPlay-8B-evo.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3-LewdPlay-8B-evo.Q4_1.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3-LewdPlay-8B-evo.Q5_0.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3-LewdPlay-8B-evo.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3-LewdPlay-8B-evo.Q5_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3-LewdPlay-8B-evo.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3-LewdPlay-8B-evo.Q5_1.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3-LewdPlay-8B-evo.Q6_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama-3-LewdPlay-8B-evo.Q8_0.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q8_0.gguf) | Q8_0 | 7.95GB |

Original model description:
---
license: cc-by-nc-4.0
base_model:
- vicgalle/Roleplay-Llama-3-8B
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
library_name: transformers
tags:
- mergekit
- merge
---

# LewdPlay-8B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

The new EVOLVE merge method was used (optimizing on MMLU specifically); see below for more information.

Unholy was used for uncensoring, Roleplay Llama 3 for the DPO training it received on top, and LewdPlay for the... lewd side.
## Prompt template: Llama3

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
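The template above can be rendered with plain string formatting; a minimal sketch that builds the prompt up to the assistant header, where generation starts. In practice, prefer a chat-template-aware frontend (llama.cpp and Transformers both apply this template for you), since hand-built prompts are easy to get subtly wrong.

```python
# The Llama 3 template from the section above, up to the point where the
# assistant's reply is generated.
LLAMA3_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

def build_prompt(system_prompt: str, user_input: str) -> str:
    """Fill the {system_prompt} and {input} placeholders of the template."""
    return LLAMA3_TEMPLATE.format(system_prompt=system_prompt, input=user_input)

prompt = build_prompt("You are a roleplay assistant.", "Hello!")
```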

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base.

### Models Merged

The following models were included in the merge:
* ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
* ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066

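For intuition on the TIES half of the method: when the merged models disagree on a parameter's direction, TIES elects the sign with the larger total magnitude and averages only the deltas that agree with it. The toy sketch below shows just that elect-and-average step on per-model delta vectors; it is illustrative only and omits the trimming, per-model weights, and DARE dropout that the actual dare_ties run uses.

```python
def ties_merge(deltas: list[list[float]]) -> list[float]:
    """Per parameter: elect the dominant sign, then average only agreeing deltas."""
    merged = []
    for column in zip(*deltas):  # one column = one parameter across all models
        pos = sum(d for d in column if d > 0)
        neg = -sum(d for d in column if d < 0)
        sign = 1.0 if pos >= neg else -1.0
        agreeing = [d for d in column if d * sign > 0]
        merged.append(sum(agreeing) / len(agreeing) if agreeing else 0.0)
    return merged

# Three models' deltas for two parameters; the minority-sign delta is discarded.
merged = ties_merge([[0.4, -0.1], [0.2, -0.3], [-0.1, 0.2]])
```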
### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- sources:
  - layer_range: [0, 4]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 1.0
      weight: 0.6861808716092435
  - layer_range: [0, 4]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.6628290134113985
      weight: 0.5815923052193855
  - layer_range: [0, 4]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 1.0
      weight: 0.5113886163963061
- sources:
  - layer_range: [4, 8]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 0.892655547455918
      weight: 0.038732602391021484
  - layer_range: [4, 8]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 1.0
      weight: 0.1982145486303527
  - layer_range: [4, 8]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 1.0
      weight: 0.6843011350690802
- sources:
  - layer_range: [8, 12]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 0.7817511027396784
      weight: 0.13053333213489704
  - layer_range: [8, 12]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.6963703515864826
      weight: 0.20525481492667985
  - layer_range: [8, 12]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 0.6983086326765777
      weight: 0.5843953969574106
- sources:
  - layer_range: [12, 16]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 0.9632895768462915
      weight: 0.2101146706607748
  - layer_range: [12, 16]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.597557434542081
      weight: 0.6728172621848589
  - layer_range: [12, 16]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 0.756263557607837
      weight: 0.2581423726361908
- sources:
  - layer_range: [16, 20]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 1.0
      weight: 0.2116035543552448
  - layer_range: [16, 20]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 1.0
      weight: 0.22654226422958418
  - layer_range: [16, 20]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 0.8925914810507647
      weight: 0.42243766315440867
- sources:
  - layer_range: [20, 24]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 0.7697608089825734
      weight: 0.1535118632140203
  - layer_range: [20, 24]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.9886758076773643
      weight: 0.3305040603868546
  - layer_range: [20, 24]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 1.0
      weight: 0.40670083428654535
- sources:
  - layer_range: [24, 28]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 1.0
      weight: 0.4542810478500622
  - layer_range: [24, 28]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.8330662483310117
      weight: 0.2587495367324508
  - layer_range: [24, 28]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 0.9845313983551542
      weight: 0.40378452705975915
- sources:
  - layer_range: [28, 32]
    model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066
    parameters:
      density: 1.0
      weight: 0.2951962192288415
  - layer_range: [28, 32]
    model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923
    parameters:
      density: 0.960315594933433
      weight: 0.13142971773782525
  - layer_range: [28, 32]
    model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727
    parameters:
      density: 1.0
      weight: 0.30838472094518804
```
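In the config above, `density` is the DARE keep probability: each parameter of a model's task vector is dropped with probability 1 - density, and the survivors are rescaled by 1/density so the expected contribution is unchanged; `weight` then scales the whole vector. A minimal illustrative sketch of just the DARE drop-and-rescale step (not mergekit's actual implementation, which operates on tensors):

```python
import random

def dare(task_vector: list[float], density: float, seed: int = 0) -> list[float]:
    """Drop each delta with probability (1 - density); rescale survivors by 1/density."""
    rng = random.Random(seed)
    if density >= 1.0:
        # density 1.0 (used by several slices above) keeps every parameter as-is
        return list(task_vector)
    return [d / density if rng.random() < density else 0.0 for d in task_vector]

deltas = [0.5, -0.2, 0.1, 0.4]
print(dare(deltas, density=1.0))   # unchanged: [0.5, -0.2, 0.1, 0.4]
sparse = dare(deltas, density=0.5)  # roughly half dropped, survivors doubled
```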

## Support

If you want to support me, you can do so [here](https://ko-fi.com/undiai).