qq8933 committed "Update README.md" · commit 91674c2 (parent: 2b8c332)

README.md CHANGED:
```diff
@@ -8,11 +8,11 @@ datasets:
 - HuggingFaceH4/no_robots
 - Open-Orca/OpenOrca
 language:
-
+- zh
 - en
 ---
 # Zephyr-8x7b: Zephyr Models but Mixtral 8x7B
 
-We present Zephyr-8x7b, a MoE model trained with SFT only on a corpus of nearly four million conversations.
+We present Zephyr-8x7b, a Mixtral 8x7B MoE model trained with SFT only on a corpus of nearly four million conversations.
 
-It has demonstrated strong contextual understanding, reasoning, and human moral alignment without
+It has demonstrated strong contextual understanding, reasoning, and human moral alignment without preference-alignment training such as DPO, and we invite you to join our exploration!
```
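Zephyr-family models are conventionally prompted with the `<|system|>` / `<|user|>` / `<|assistant|>` chat format. Assuming Zephyr-8x7b follows that same convention (the card itself does not state its template, so this is an assumption), a prompt can be assembled as a plain string before being passed to the tokenizer:

```python
def build_zephyr_prompt(system: str, user: str) -> str:
    """Assemble a Zephyr-style chat prompt.

    Assumed format (not confirmed by this model card): each turn is
    tagged with <|system|>, <|user|>, or <|assistant|> and terminated
    with </s>; the trailing <|assistant|> tag cues the model to generate.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_zephyr_prompt(
    "You are a helpful assistant.",
    "Explain what a Mixture-of-Experts model is.",
)
print(prompt)
```

If the repository ships a `tokenizer_config.json` with a chat template, `tokenizer.apply_chat_template(...)` from `transformers` is the safer route, since it uses the template the authors actually trained with.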