Update README.md
README.md CHANGED
@@ -13,7 +13,10 @@ datasets:
 
 # NeuralMarcoro14-7B
 
-This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset.
+This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset.
+It improves the performance of the model on the Nous benchmark suite and the Open LLM Leaderboard.
+
+It is currently the best-performing 7B LLM on the Open LLM Leaderboard (08/01/24).
 
 You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralMarcoro14-7B-GGUF-Chat) (GGUF Q4_K_M).
 
@@ -23,6 +26,14 @@ You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/Neural
 
 ## 🏆 Evaluation
 
+### Open LLM Leaderboard
+
+![](https://i.imgur.com/Int9P07.png)
+
+![](https://i.imgur.com/70NXUKD.png)
+
+### Nous
+
 | Model |AGIEval|GPT4ALL|TruthfulQA|Bigbench|Average|
 |-------------------------|------:|------:|---------:|-------:|------:|
 |[NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)| 44.59| 76.17| 65.94| 46.9| 58.4|
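The committed README does not include a usage snippet. A minimal inference sketch, assuming the standard `transformers` text-generation API and that the tokenizer ships a ChatML chat template (the format of the chatml_dpo_pairs training data), might look like this; the prompt and generation settings are illustrative:

```python
# Minimal sketch, assuming the standard transformers API; the model ID comes
# from the table above, everything else here is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/NeuralMarcoro14-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The model was aligned on ChatML-formatted pairs, so the prompt is built
# through the tokenizer's chat template rather than from raw strings.
messages = [{"role": "user", "content": "What is a preference dataset?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```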
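The DPO step itself is not shown in this commit. Under the assumption that it was run with TRL's `DPOTrainer` on the linked preference dataset, a sketch could look like the following; the hyperparameters are placeholders, not the author's actual recipe, and the dataset is assumed to expose the prompt/chosen/rejected columns that `DPOTrainer` expects:

```python
# Hedged sketch of a DPO fine-tune of the base merge on the linked dataset,
# using a recent TRL API (DPOConfig); not the configuration used for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "mlabonne/Marcoro14-7B-slerp"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Assumption: the dataset provides prompt/chosen/rejected columns,
# the layout DPOTrainer expects for preference pairs.
dataset = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

args = DPOConfig(
    output_dir="NeuralMarcoro14-7B",
    beta=0.1,                        # KL penalty vs. the frozen reference model
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,              # placeholder; DPO typically uses small LRs
    max_steps=200,                   # placeholder; tune for the full dataset
)

# ref_model=None lets TRL keep a frozen copy of `model` as the DPO reference.
trainer = DPOTrainer(model, ref_model=None, args=args,
                     train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```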