Adding Evaluation Results
#5 opened by leaderboard-pr-bot
README.md CHANGED
@@ -68,4 +68,17 @@ Please cite our paper and github when using our code, data or model.
   journal = {GitHub repository},
   howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
 }
-```
+```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BELLE-2__BELLE-Llama2-13B-chat-0.4M)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 53.77 |
+| ARC (25-shot)         | 60.67 |
+| HellaSwag (10-shot)   | 82.31 |
+| MMLU (5-shot)         | 55.94 |
+| TruthfulQA (0-shot)   | 50.85 |
+| Winogrande (5-shot)   | 75.53 |
+| GSM8K (5-shot)        | 14.4  |
+| DROP (3-shot)         | 36.7  |
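
For anyone reviewing this PR who wants the per-task records behind the summary table, the linked details repo can be read with the `datasets` library. A minimal sketch, assuming the standard Open LLM Leaderboard details layout; the config name and the `latest` split follow that convention but are assumptions, not verified against this particular repo:

```python
# Sketch: load per-task details behind the leaderboard summary table.
# The config name ("harness_arc_challenge_25", i.e. ARC 25-shot) and the
# "latest" split are assumed from the usual leaderboard details layout.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_BELLE-2__BELLE-Llama2-13B-chat-0.4M",
    "harness_arc_challenge_25",  # assumed config: one per task/few-shot setting
    split="latest",
)

# Each record holds the prompt, the model's output, and per-example metrics.
print(details[0])
```

Other tasks in the table (HellaSwag, MMLU, TruthfulQA, and so on) should be available as sibling configs in the same repo, if the repo follows that layout.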