Commit 6eac4a5 · verified · 1 Parent(s): 58af37d
gghfez committed: Upload README.md with huggingface_hub

Files changed (1): README.md (+57, -0)

README.md ADDED

---
license: apache-2.0
language:
- en
base_model:
- alpindale/WizardLM-2-8x22B
pipeline_tag: text-generation
library_name: transformers
tags:
- chat
- creative
- writing
- roleplay
---

# gghfez/WizardLM2-22b-RP

<img src="https://files.catbox.moe/acl4ld.png" width="400"/>

⚠️ **IMPORTANT: Experimental Model - Not recommended for Production Use**
- This is an experimental model created through bespoke, unorthodox merging techniques
- The safety alignment and guardrails from the original WizardLM2 model may be compromised
- This model is intended for creative writing and roleplay purposes ONLY
- Use at your own risk and with appropriate content filtering in place

This model is an experimental derivative of WizardLM2-8x22B, created by extracting the individual experts from the original mixture-of-experts (MoE) model, renaming the mlp modules to match the Mistral architecture, and merging them into a single dense model using linear merging via mergekit.
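
The extraction and renaming step is not spelled out in the card, but a minimal sketch of what it could look like is below. The key names follow the Hugging Face Mixtral and Mistral checkpoint layouts; the function and variable names are hypothetical and only illustrate the idea.

```python
import re

# Mixtral expert projections -> dense Mistral mlp projection names
# (w1 = gate_proj, w3 = up_proj, w2 = down_proj in the HF implementations).
EXPERT_TO_MLP = {"w1": "gate_proj", "w3": "up_proj", "w2": "down_proj"}

def extract_expert(state_dict: dict, expert_idx: int) -> dict:
    """Keep only `expert_idx` from a Mixtral-style state_dict, renamed to dense keys."""
    pattern = re.compile(
        rf"^model\.layers\.(\d+)\.block_sparse_moe\.experts\.{expert_idx}\.(w[123])\.weight$"
    )
    out = {}
    for key, tensor in state_dict.items():
        m = pattern.match(key)
        if m:
            layer, proj = m.group(1), EXPERT_TO_MLP[m.group(2)]
            out[f"model.layers.{layer}.mlp.{proj}.weight"] = tensor
        elif "block_sparse_moe" not in key:
            # Attention, norms and embeddings are shared, so they are copied as-is;
            # the MoE router (block_sparse_moe.gate) is dropped entirely.
            out[key] = tensor
    return out
```

Running this once per expert would yield eight dense Mistral-shaped checkpoints, which could then be combined with mergekit's linear merge method as described above.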

The resulting model initially produced gibberish, but after fine-tuning on synthetic data generated by the original WizardLM2-8x22B, it regained the ability to generate relatively coherent text. However, the model exhibits confusion about world knowledge and mixes up the names of well-known people.

Despite efforts to train the model on factual data, the confusion persisted, so I trained it for creative tasks instead.

As a result, this model is not recommended for use as a general assistant or for tasks that require accurate real-world knowledge (don't bother running MMLU-Pro on it).

It does retrieve details from the provided context very accurately, but I still can't recommend it for anything other than creative tasks.

## Prompt format
Mistral-v1 + the system tags from Mistral-V7:
```
[SYSTEM_PROMPT] {system}[/SYSTEM_PROMPT] [INST] {prompt}[/INST]
```
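
As a rough usage sketch, the template above can be filled in manually and passed to transformers. The sampling settings here are placeholders, not recommendations from the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gghfez/WizardLM2-22b-RP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

system = "You are a creative writing partner."
prompt = "Write the opening scene of a noir mystery set on a space station."
text = f"[SYSTEM_PROMPT] {system}[/SYSTEM_PROMPT] [INST] {prompt}[/INST]"

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```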
**NOTE:** This model is based on WizardLM2-8x22B, which is a finetune of Mixtral-8x22B - not to be confused with the more recent Mistral-Small-22B model.
As such, it uses the same vocabulary and tokenizer as Mixtral-v0.1 and inherits the Apache 2.0 license.
I expanded the vocab to include the system prompt and instruction tags before training (including the embedding heads).
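
The vocab expansion could look roughly like the following sketch; the exact set of added tokens and the checkpoint path are assumptions, since only the outcome is described above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical path to the linear-merged dense checkpoint before fine-tuning.
merged_path = "path/to/merged-dense-model"
tokenizer = AutoTokenizer.from_pretrained(merged_path)
model = AutoModelForCausalLM.from_pretrained(merged_path)

# Add the system prompt and instruction tags as special tokens (assumed token set).
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[SYSTEM_PROMPT]", "[/SYSTEM_PROMPT]", "[INST]", "[/INST]"]}
)
if num_added > 0:
    # Resizes both the input embeddings and the LM head ("embedding heads").
    model.resize_token_embeddings(len(tokenizer))
```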

## Quants

TODO

## Examples

### Strength: Information Extraction from Context
[example 1]

### Weakness: Basic Factual Knowledge
[example 2]