Adding Evaluation Results
This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr
The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.
If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
README.md CHANGED

@@ -194,3 +194,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |Winogrande (5-shot)              |83.82|
 |GSM8k (5-shot)                   |72.78|
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__MoEv4Config-TestWeightedTIES-7b)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |75.39|
+|AI2 Reasoning Challenge (25-Shot)|71.59|
+|HellaSwag (10-Shot)              |88.19|
+|MMLU (5-Shot)                    |65.07|
+|TruthfulQA (0-shot)              |70.87|
+|Winogrande (5-shot)              |83.82|
+|GSM8k (5-shot)                   |72.78|
+
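As a convenience, here is a minimal sketch of how the linked details dataset could be inspected with the `datasets` library. The per-benchmark config and split names are assumptions (they vary between leaderboard harness versions), so the sketch lists the available configs first rather than hard-coding one.

```python
from datasets import get_dataset_config_names, load_dataset

# Details repository linked above (per-benchmark result records for this model).
REPO = "open-llm-leaderboard/details_jsfs11__MoEv4Config-TestWeightedTIES-7b"

# Each benchmark run is stored as its own config; list them first,
# since the exact names depend on the leaderboard harness version.
configs = get_dataset_config_names(REPO)
print(configs)

# Load one config (all of its splits) and inspect the per-sample records.
details = load_dataset(REPO, configs[0])
print(details)
```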