---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MBX-7B-v3
- Kukedlc/NeuTrixOmniBe-7B-model-remix
- PetroGPT/WestSeverus-7B-DPO
- vanillaOVO/supermario_v4
base_model:
- flemmingmiguel/MBX-7B-v3
- Kukedlc/NeuTrixOmniBe-7B-model-remix
- PetroGPT/WestSeverus-7B-DPO
- vanillaOVO/supermario_v4
---

# Open-LLM Benchmark Results

MixtureofMerges-MoE-4x7b-v4 on the Open LLM Leaderboard 📑 (PB score as of 12/02/24):

| Benchmark  | Score |
|------------|------:|
| Average    | 76.23 |
| ARC        | 72.53 |
| HellaSwag  | 88.85 |
| MMLU       | 64.53 |
| TruthfulQA | 75.30 |
| Winogrande | 84.85 |
| GSM8K      | 71.34 |

# MixtureofMerges-MoE-4x7b-v4

MixtureofMerges-MoE-4x7b-v4 is a Mixture of Experts (MoE) model made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) (a conceptual routing sketch follows the list):
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
* [Kukedlc/NeuTrixOmniBe-7B-model-remix](https://huggingface.co/Kukedlc/NeuTrixOmniBe-7B-model-remix)
* [PetroGPT/WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
* [vanillaOVO/supermario_v4](https://huggingface.co/vanillaOVO/supermario_v4)

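A Mixtral-style MoE model keeps a single shared attention stack and swaps in several expert feed-forward blocks; a small learned gate scores the experts for each token, and the token's output is a weighted mix of its top choices. The sketch below is illustrative only, not this model's actual code: the hidden sizes, `top_k=2`, and the plain MLP experts are assumptions.

```python
# Conceptual sketch of top-k expert routing in a Mixtral-style MoE layer.
# NOT this model's implementation; sizes and top_k are illustrative assumptions.
import torch
import torch.nn as nn

class MoEFeedForward(nn.Module):
    def __init__(self, hidden=4096, ff=14336, n_experts=4, top_k=2):
        super().__init__()
        self.gate = nn.Linear(hidden, n_experts, bias=False)   # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, ff), nn.SiLU(), nn.Linear(ff, hidden))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                            # x: (n_tokens, hidden)
        scores = self.gate(x)                        # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)            # renormalize over chosen experts
        out = torch.zeros_like(x)
        # Each token's output is a weighted sum of its top-k experts' outputs.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                # tokens whose k-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoEFeedForward()
print(layer(torch.randn(3, 4096)).shape)             # torch.Size([3, 4096])
```

mergekit's `gate_mode: hidden` initializes that router from the source models' hidden-state responses to the positive and negative prompts in the configuration below.
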
## 🧩 Configuration

```yaml
base_model: Kukedlc/NeuTrixOmniBe-7B-model-remix
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: flemmingmiguel/MBX-7B-v3
    positive_prompts:
      - "Answer this question from the ARC (Argument Reasoning Comprehension)."
      - "Use common sense and logical reasoning skills."
      - "What assumptions does this argument rely on?"
      - "Are these assumptions valid? Explain."
      - "Could this be explained in a different way? Provide an alternative explanation."
      - "Identify any weaknesses in this argument."
      - "Does this argument contain any logical fallacies? If so, which ones?"
    negative_prompts:
      - "misses key evidence"
      - "overly general"
      - "focuses on irrelevant details"
      - "assumes information not provided"
      - "relies on stereotypes"
  - source_model: Kukedlc/NeuTrixOmniBe-7B-model-remix
    positive_prompts:
      - "Answer this question, demonstrating commonsense understanding and using any relevant general knowledge you may have."
      - "Provide a concise summary of this passage, then explain why the highlighted section is essential to the main idea."
      - "Read these two brief articles presenting different viewpoints on the same topic. List their key arguments and highlight where they disagree."
      - "Paraphrase this statement, changing the emotional tone but keeping the core meaning intact. Example: Rephrase a worried statement in a humorous way"
      - "Create a short analogy that helps illustrate the main concept of this article."
    negative_prompts:
      - "sounds too basic"
      - "understated"
      - "dismisses important details"
      - "avoids the question's nuance"
      - "takes this statement too literally"
  - source_model: PetroGPT/WestSeverus-7B-DPO
    positive_prompts:
      - "Calculate the answer to this math problem"
      - "My mathematical capabilities are strong, allowing me to handle complex mathematical queries"
      - "solve for"
      - "A store sells apples at $0.50 each. If Emily buys 12 apples, how much does she need to pay?"
      - "Isolate x in the following equation: 2x + 5 = 17"
      - "Solve this equation and show your working."
      - "Explain why you used this formula to solve the problem."
      - "Attempt to divide this number by zero. Explain why this cannot be done."
    negative_prompts:
      - "incorrect"
      - "inaccurate"
      - "creativity"
      - "assumed without proof"
      - "rushed calculation"
      - "confuses mathematical concepts"
      - "draws illogical conclusions"
      - "circular reasoning"
  - source_model: vanillaOVO/supermario_v4
    positive_prompts:
      - "Generate a few possible continuations to this scenario."
      - "Demonstrate understanding of everyday commonsense in your response."
      - "Use contextual clues to determine the most likely outcome."
      - "Continue this scenario, but make the writing style sound archaic and overly formal."
      - "This narrative is predictable. Can you introduce an unexpected yet plausible twist?"
      - "The character is angry. Continue this scenario showcasing a furious outburst."
    negative_prompts:
      - "repetitive phrases"
      - "overuse of the same words"
      - "contradicts earlier statements - breaks the internal logic of the scenario"
      - "out of character dialogue"
      - "awkward phrasing - sounds unnatural"
      - "doesn't match the given genre"
```
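
To reproduce a merge like this from the configuration above, mergekit's `mergekit-moe` command can be pointed at the YAML. A minimal sketch, assuming the config is saved as `config.yaml` (a placeholder name) and that enough RAM and disk are available to download and combine all four source models:

```python
# Minimal reproduction sketch (notebook syntax, as in the Usage block below).
# Assumes the YAML above is saved as config.yaml; the output path is arbitrary.
!pip install -qU mergekit
!mergekit-moe config.yaml ./MixtureofMerges-MoE-4x7b-v4
```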

## 💻 Usage

```python
# Notebook syntax; drop the leading "!" when running in a shell.
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jsfs11/MixtureofMerges-MoE-4x7b-v4"

# Build a text-generation pipeline, loading the model in 4-bit so it fits
# on a single consumer GPU.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then sample.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
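
If you have enough GPU memory for the full bfloat16 weights, a sketch of an alternative load without 4-bit quantization, calling `AutoModelForCausalLM` directly with the generation settings carried over from above:

```python
# Alternative sketch: load in bfloat16 with no bitsandbytes quantization.
# device_map="auto" spreads layers across available devices (requires accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "jsfs11/MixtureofMerges-MoE-4x7b-v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```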