Adding Evaluation Results
#4
by leaderboard-pr-bot · opened
README.md CHANGED
@@ -233,3 +233,17 @@ This is the **Full-Weight** of WizardLM-13B V1.1 model.
 - 🔥🔥🔥 [7/7/2023] The **WizardLM-13B-V1.1** achieves **6.74** on [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **86.32%** on [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **99.3%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: MT-Bench and AlpacaEval are all self-test, will push update and request review. All tests are completed under their official settings.)
 
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-GPTQ)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 48.79 |
+| ARC (25-shot) | 57.0 |
+| HellaSwag (10-shot) | 80.32 |
+| MMLU (5-shot) | 47.08 |
+| TruthfulQA (0-shot) | 53.46 |
+| Winogrande (5-shot) | 74.35 |
+| GSM8K (5-shot) | 0.68 |
+| DROP (3-shot) | 28.62 |
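For readers who want more than the summary table, the per-task details behind these numbers can be pulled from the linked details dataset with the `datasets` library. The snippet below is a minimal sketch, assuming the dataset follows the usual open-llm-leaderboard layout with per-task configs (e.g. `harness_arc_challenge_25`) and a `latest` split; the exact config and split names are assumptions here and should be taken from the dataset card linked above.

```python
# Minimal sketch: load one task's detailed results for this model.
# NOTE: the config name ("harness_arc_challenge_25") and split ("latest") are
# assumptions based on the typical open-llm-leaderboard details layout --
# check the dataset card for the exact names available for this model.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-GPTQ",
    "harness_arc_challenge_25",  # assumed config: ARC-Challenge, 25-shot
    split="latest",              # assumed split name
)

print(details)     # overview of the per-example evaluation records
print(details[0])  # inspect a single evaluated example (prompt, choices, metrics)
```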