---
language:
- multilingual
thumbnail: url to a thumbnail used in social sharing
tags:
- coding
- moe
license: mit
base_model: ContextualAI/Contextual_KTO_Mistral_PairRM
pipeline_tag: text-generation
---

## Usage

NebulaNet-v2 is a mixture-of-experts (MoE) merge of four 7B expert models.
It is good at coding and multilingual translation, and it should be fluent at chat and math as well.

In my observation, the merged 4x7B model performs much better than the original Contextual_KTO_Mistral_PairRM on both coding and multilingual text generation.
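
Below is a minimal usage sketch with transformers. The repo id `davideuler/NebulaNet-v2` is an assumption based on this card's author and model name; substitute the actual id if it differs.

```python
# Minimal sketch: load the merged MoE model like any Hugging Face causal LM.
# "davideuler/NebulaNet-v2" is an assumed repo id, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davideuler/NebulaNet-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the 4x7B merge is large; fp16 halves memory
    device_map="auto",          # requires the accelerate package
)

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
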
## mergekit config
```
base_model: ContextualAI/Contextual_KTO_Mistral_PairRM
...
experts:
...
- "mathematics"
- "solve"
- "count"
```
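
This is mergekit's MoE config format, shown here only in part. Assuming a standard mergekit install, a config of this shape is typically built with the `mergekit-moe` entry point, e.g. `mergekit-moe config.yaml ./NebulaNet-v2` (a sketch: the config filename and output directory are placeholders, not taken from this card). In this format, prompt strings like "mathematics", "solve", and "count" are positive prompts that steer the gate toward the matching expert for prompts of that kind.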
|