Modalities: Text
Formats: json
Size: < 1K
Libraries: Datasets, Dask
License:
Commit a4b129d by clefourrier (parent: 1d22518): Update README.md
Files changed (1): README.md (+6, -6)
README.md CHANGED
@@ -9,14 +9,14 @@ This repository contains the request files of models that have been submitted to
  You can take a look at the current status of your model by finding its request file in this dataset. If your model failed, feel free to open an issue on the Open LLM Leaderboard! (We don't follow issues in this repository as often)

  ## Evaluation Methodology
- The evaluation process involves running your models against several crucial benchmarks from the Eleuther AI Language Model Evaluation Harness, a unified framework for measuring the effectiveness of generative language models. Below is a brief overview of each benchmark:
+ The evaluation process involves running your models against several benchmarks from the Eleuther AI Harness, a unified framework for measuring the effectiveness of generative language models. Below is a brief overview of each benchmark:

- 1. AI2 Reasoning Challenge (ARC) - 25-shot Grade-School Science Questions
- 2. HellaSwag - 10-shot Commonsense Inference
- 3. MMLU - 5-shot Multi-Task Accuracy Test (Covers 57 Tasks)
+ 1. AI2 Reasoning Challenge (ARC) - Grade-School Science Questions (25-shot)
+ 2. HellaSwag - Commonsense Inference (10-shot)
+ 3. MMLU - Massive Multi-Task Language Understanding, knowledge on 57 domains (5-shot)
  4. TruthfulQA - Propensity to Produce Falsehoods (0-shot)
- 5. Winogrande - 5-shot Adversarial Winograd Schmea Challenge
- 6. GSM8k - 5-shot Grade School Math Word Problems Solving Complex Mathematical Reasoning
+ 5. Winogrande - Adversarial Winograd Schema Challenge (5-shot)
+ 6. GSM8k - Grade School Math Word Problems Solving Complex Mathematical Reasoning (5-shot)

  Together, these benchmarks provide an assessment of a model's capabilities in terms of knowledge, reasoning, and some math, in various scenarios.
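
The context lines above point readers to a model's request file in this dataset to check its current status. As a rough, hedged sketch (not part of the committed README), the snippet below locates and reads such a request JSON with the `huggingface_hub` client; the repo id, the filename-matching heuristic, and the `status` field are assumptions about this dataset's layout rather than documented guarantees.

```python
import json

from huggingface_hub import HfApi, hf_hub_download

# Assumed repo id for this requests dataset; substitute the actual one.
REPO_ID = "open-llm-leaderboard/requests"
MODEL = "my-org/my-model"  # hypothetical model to look up

api = HfApi()

# Request files are JSON documents; keep the ones whose path mentions the model name.
candidates = [
    path
    for path in api.list_repo_files(REPO_ID, repo_type="dataset")
    if path.endswith(".json") and MODEL.split("/")[-1] in path
]

for path in candidates:
    local = hf_hub_download(REPO_ID, filename=path, repo_type="dataset")
    with open(local) as f:
        request = json.load(f)
    # "status" (e.g. PENDING / RUNNING / FINISHED / FAILED) is an assumed field name;
    # fall back to printing the whole record if it is absent.
    print(path, request.get("status", request))
```

Since the files are small JSON documents (matching the json format and < 1K size shown in the dataset metadata), downloading and inspecting them one at a time is cheap.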
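
The Evaluation Methodology section pairs each benchmark with a fixed few-shot setting from the Eleuther AI Language Model Evaluation Harness. The sketch below shows how a comparable run could look with the harness's Python API (`lm_eval.simple_evaluate` in recent `lm-eval` releases); the task names, harness version, and model id are assumptions, and the leaderboard's own pipeline remains the source of official scores.

```python
# Sketch of reproducing the listed benchmarks locally with the Eleuther AI
# LM Evaluation Harness (pip install lm-eval). Task names follow recent
# harness releases and may differ from the version the leaderboard pins.
import lm_eval

# Benchmark -> few-shot count, matching the README's list.
TASKS = {
    "arc_challenge": 25,
    "hellaswag": 10,
    "mmlu": 5,
    "truthfulqa_mc2": 0,  # TruthfulQA is evaluated 0-shot
    "winogrande": 5,
    "gsm8k": 5,
}

MODEL_ARGS = "pretrained=my-org/my-model"  # hypothetical model id

for task, shots in TASKS.items():
    # num_fewshot applies to every task in a call, so run one task at a time;
    # note this reloads the model on each iteration, which is simple but slow.
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args=MODEL_ARGS,
        tasks=[task],
        num_fewshot=shots,
        batch_size=8,
    )
    # Per-metric scores live under results["results"].
    print(task, results["results"])
```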