Commit 1d22518 · Parent(s): 18b09f8
Update README.md
README.md CHANGED
@@ -2,9 +2,11 @@
 license: apache-2.0
 ---
 ![HuggingFace LeaderBoard](https://cdn-uploads.huggingface.co/production/uploads/6202a599216215a22221dea9/Uh5JX7Kq-rUxoVrdsV-M-.gif)
-# Open LLM Leaderboard
+# Open LLM Leaderboard Requests
 
-This repository contains the
+This repository contains the request files of models that have been submitted to the Open LLM Leaderboard.
+
+You can take a look at the current status of your model by finding its request file in this dataset. If your model failed, feel free to open an issue on the Open LLM Leaderboard! (We don't follow issues in this repository as often.)
 
 ## Evaluation Methodology
 The evaluation process involves running your models against several crucial benchmarks from the Eleuther AI Language Model Evaluation Harness, a unified framework for measuring the effectiveness of generative language models. Below is a brief overview of each benchmark:
@@ -15,26 +17,11 @@ The evaluation process involves running your models against several crucial benchmarks
 4. TruthfulQA - Propensity to Produce Falsehoods (0-shot)
 5. Winogrande - 5-shot Adversarial Winograd Schema Challenge
 6. GSM8k - 5-shot Grade School Math Word Problems Solving Complex Mathematical Reasoning
-7. DROP - 3-shot Reading Comprehension Benchmark
 
-Together, these benchmarks provide
+Together, these benchmarks provide an assessment of a model's capabilities in terms of knowledge, reasoning, and some math, in various scenarios.
 
 ## Accessing Your Results
 To view the numerical results of your evaluated models, visit the dedicated Hugging Face Dataset at https://huggingface.co/datasets/open-llm-leaderboard/results. This dataset offers a thorough breakdown of each model's performance on the individual benchmarks.
 
 ## Exploring Model Details
 For further insights into the inputs and outputs of specific models, locate the "📄" emoji associated with the desired model within this repository. Clicking on this icon will direct you to the respective GitHub page containing detailed information about the model's behavior during the evaluation process.
-
-## Tracking Evaluation Requests
-To monitor the progress of your evaluation requests, navigate to the Hugging Face Dataset at https://huggingface.co/datasets/open-llm-leaderboard/requests. This dataset encompasses community queries related to the evaluation process and displays the current status of each request.
-
-## Recent Developments and Additions
-The Open LLM Leaderboard recently underwent a massive revamp, dedicating a year's worth of GPU time to integrate three additional benchmark metrics from the EleutherAI Harness. Working alongside Saylor Twift, 2000+ models were re-run on these new benchmarks, resulting in more informative findings for both model creators and users.
-
-### New Evaluations Introduced:
-
-1. DROP - Requiring both reading comprehension skills and various reasoning steps to address questions derived from Wikipedia paragraphs.
-2. GSM8K - Designed to test the model's capacity to tackle complex, multi-step mathematical reasoning problems in grade-school math word problems.
-3. WinoGrande - An adversarial Winograd completion dataset, focusing on the selection of the most relevant word between two options that significantly alters the meaning of the statement.
-
-These additions enable a more in-depth examination of a model's reasoning abilities and ultimately contribute to a fairer ranking system.
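For readers who want to reproduce the kind of evaluation described in the README above, here is a minimal sketch using the EleutherAI lm-evaluation-harness Python API. The model name, task choice, and few-shot count are illustrative assumptions, and exact model type strings, task names, and result keys vary between harness versions; this is not the leaderboard's actual evaluation pipeline.

```python
# Minimal sketch of running one benchmark with the EleutherAI LM Evaluation Harness.
# Assumptions: lm-eval is installed (`pip install lm-eval`), the "hf" model type and the
# "hellaswag" task name exist in your harness version, and the model fits on your hardware.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",                                      # Hugging Face transformers backend
    model_args="pretrained=EleutherAI/pythia-160m",  # illustrative small model, not an endorsement
    tasks=["hellaswag"],                             # one leaderboard-style benchmark
    num_fewshot=10,                                  # HellaSwag is scored 10-shot on the leaderboard
    batch_size=8,
)

# The metrics dict layout differs across harness versions, so just print it and inspect.
print(results["results"])
```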
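As a concrete illustration of the "Accessing Your Results" section, the sketch below lists and downloads a model's raw result files from the open-llm-leaderboard/results dataset with huggingface_hub. The example model name, and the assumption that per-model results are stored as JSON files whose paths contain the model name, are mine rather than guaranteed by the repository.

```python
# Sketch: fetch raw result files for one model from the results dataset.
# Assumption: result files are JSON files stored under paths that include the model name.
import json
from huggingface_hub import HfApi, hf_hub_download

REPO = "open-llm-leaderboard/results"
MODEL = "mistralai/Mistral-7B-v0.1"  # hypothetical example model

api = HfApi()
all_files = api.list_repo_files(REPO, repo_type="dataset")
model_files = [f for f in all_files if MODEL in f and f.endswith(".json")]

for filename in model_files:
    local_path = hf_hub_download(repo_id=REPO, filename=filename, repo_type="dataset")
    with open(local_path) as fh:
        data = json.load(fh)
    # Each file typically holds per-benchmark scores; inspect the keys to see what is there.
    print(filename, list(data.keys()))
```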
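Finally, since the updated README points submitters at their request file for status, here is a small sketch of checking that status programmatically from open-llm-leaderboard/requests. The file-naming pattern and the presence of a "status" field are assumptions about how the request files are structured; the leaderboard web UI remains the authoritative view.

```python
# Sketch: look up the status of a submitted model in the requests dataset.
# Assumptions: request files are JSON, their paths contain the submitted model name,
# and each file carries a "status" field (e.g. PENDING / RUNNING / FINISHED / FAILED).
import json
from huggingface_hub import HfApi, hf_hub_download

REPO = "open-llm-leaderboard/requests"
MODEL = "my-org/my-model"  # hypothetical submission

api = HfApi()
request_files = [
    f for f in api.list_repo_files(REPO, repo_type="dataset")
    if MODEL in f and f.endswith(".json")
]

for filename in request_files:
    local_path = hf_hub_download(repo_id=REPO, filename=filename, repo_type="dataset")
    with open(local_path) as fh:
        request = json.load(fh)
    print(filename, "->", request.get("status", "status field not found"))
```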