leaderboard-pr-bot committed
Commit 3a4777d · 1 Parent(s): d7888b3

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -225,4 +225,17 @@ Colossal-LLaMA-2-7B is a derivation of LLaMA-2 that carries risks with use. Test
   author={Dao, Tri},
   year={2023}
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_hpcai-tech__Colossal-LLaMA-2-7b-base)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 49.3  |
+| ARC (25-shot)       | 53.5  |
+| HellaSwag (10-shot) | 70.5  |
+| MMLU (5-shot)       | 54.4  |
+| TruthfulQA (0-shot) | 50.19 |
+| Winogrande (5-shot) | 70.01 |
+| GSM8K (5-shot)      | 9.7   |
+| DROP (3-shot)       | 36.82 |
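As a sanity check on the added table, a minimal Python sketch below recomputes the Avg. row, assuming (as the numbers suggest) that it is the unweighted arithmetic mean of the seven benchmark scores; the `scores` dict is simply transcribed from the table above.

```python
# Assumption: the leaderboard "Avg." is the plain arithmetic mean
# of the seven benchmark scores listed in the table.
scores = {
    "ARC (25-shot)": 53.5,
    "HellaSwag (10-shot)": 70.5,
    "MMLU (5-shot)": 54.4,
    "TruthfulQA (0-shot)": 50.19,
    "Winogrande (5-shot)": 70.01,
    "GSM8K (5-shot)": 9.7,
    "DROP (3-shot)": 36.82,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.1f}")  # -> 49.3, matching the Avg. row in the table
```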