Commit bf803eb · Update README.md
Parent(s): 7f37597

README.md CHANGED

---
license: apache-2.0
---

# Dataset for MixEval

[MixEval](https://github.com/Psycoy/MixEval/) is a dynamic benchmark that evaluates LLMs using real-world user queries mixed with established benchmarks. It achieves a 0.96 model-ranking correlation with Chatbot Arena and costs around $0.6 per run when using GPT-3.5 as the judge.

You can find more information and access the MixEval leaderboard [here](https://mixeval.github.io/#leaderboard).

This is a fork of the original MixEval repository, which can be found [here](https://github.com/Psycoy/MixEval/). I created this fork to make it easier to integrate and run MixEval during the training of new models. The fork includes several improvements that make usage easier and more flexible, including:

* Evaluation of local models during or after training with `transformers`
* Hugging Face Datasets integration to avoid the need for local files (see the loading sketch below)
* Use of Hugging Face TGI or vLLM to accelerate evaluation and make it more manageable
* Improved markdown outputs and timing information for training runs
* Fixed pip install for remote or CI integration
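
Because the benchmark data lives in this Hugging Face dataset repo (`zeitgeist-ai/mixeval`, the same repo the CLI consumes via `--data_path hf://zeitgeist-ai/mixeval`), you can also inspect it directly with `datasets` before running an evaluation. This is only an illustrative sketch: the config name used below is an assumption, so list the real configs first.

```python
from datasets import get_dataset_config_names, load_dataset

# List the configurations published in this repo (benchmark variants, versions, ...).
print(get_dataset_config_names("zeitgeist-ai/mixeval"))

# Load one configuration for inspection; "mixeval_hard" is an assumed name,
# pick an entry from the printout above instead.
ds = load_dataset("zeitgeist-ai/mixeval", "mixeval_hard")
print(ds)
```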

## Getting started

```bash
# Fork with looser dependency pins
pip install git+https://github.com/philschmid/MixEval --upgrade
```

_Note: If you want to evaluate models that are not yet registered, take a look [here](https://github.com/philschmid/MixEval?tab=readme-ov-file#registering-new-models). A Zephyr example is [here](https://github.com/philschmid/MixEval/blob/main/mix_eval/models/zephyr_7b_beta.py)._
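
For orientation, a registration file typically subclasses the chat-model base class and registers itself under the name you later pass as `--model_name`. The import paths, decorator, and attributes below are assumptions modeled on the linked Zephyr example, not a verified API; check that file for the exact structure.

```python
# mix_eval/models/my_model_7b.py -- hypothetical file, mirroring the linked Zephyr example
from mix_eval.api.registry import register_model  # assumed import path, verify against the fork
from mix_eval.models.base import ChatModel        # assumed base class, verify against the fork


@register_model("my_model_7b")  # the name you later pass via --model_name
class MyModel7B(ChatModel):
    def __init__(self, args):
        super().__init__(args)
        self.model_name = "my-org/my-model-7b"  # hypothetical Hugging Face repo id
```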

## Evaluating open LLMs

**Remote Hugging Face model with existing config:**

```bash
# MODEL_PARSER_API=<your openai api key>
MODEL_PARSER_API=$(echo $OPENAI_API_KEY) python -m mix_eval.evaluate \
  --data_path hf://zeitgeist-ai/mixeval \
  --model_name zephyr_7b_beta \
  --benchmark mixeval_hard \
  --version 2024-06-01 \
  --batch_size 20 \
  --output_dir results \
  --api_parallel_num 20
```

**Using vLLM/TGI with a hosted or local API:**

1. Start your inference server:

```bash
python -m vllm.entrypoints.openai.api_server --model alignment-handbook/zephyr-7b-dpo-full
```
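
If you would rather serve the model with TGI, a roughly equivalent launch is sketched below. The image tag, port mapping, and volume path are assumptions to adapt to your setup; recent TGI versions expose an OpenAI-compatible `/v1/chat/completions` route, so the `API_URL` in the next step should still apply, but verify it against your TGI version.

```bash
# Hedged TGI alternative to the vLLM command above (adjust tag, ports, and cache path as needed)
docker run --gpus all --shm-size 1g -p 8000:80 \
  -v $PWD/tgi-data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id alignment-handbook/zephyr-7b-dpo-full
```

With the vLLM server, `curl http://localhost:8000/v1/models` is a quick way to confirm the endpoint is reachable before starting the run.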

2. Run the evaluation against the local endpoint:

```bash
MODEL_PARSER_API=$(echo $OPENAI_API_KEY) API_URL=http://localhost:8000/v1 python -m mix_eval.evaluate \
  --data_path hf://zeitgeist-ai/mixeval \
  --model_name local_api \
  --model_path alignment-handbook/zephyr-7b-dpo-full \
  --benchmark mixeval_hard \
  --version 2024-06-01 \
  --batch_size 20 \
  --output_dir results \
  --api_parallel_num 20
```

3. Results:

```
| Metric | Score |
|--------|-------|
| MBPP | 100.00% |
| OpenBookQA | 62.50% |
| DROP | 47.60% |
| BBH | 43.10% |
| MATH | 38.10% |
| PIQA | 37.50% |
| TriviaQA | 37.30% |
| BoolQ | 35.10% |
| CommonsenseQA | 34.00% |
| GSM8k | 33.60% |
| MMLU | 29.00% |
| HellaSwag | 27.90% |
| AGIEval | 26.80% |
| GPQA | 0.00% |
| ARC | 0.00% |
| SIQA | 0.00% |
| overall score (final score) | 34.85% |

Total time: 398.0534451007843
```

A full run takes around 5 to 7 minutes to evaluate (about 398 seconds in the run above).

**Local Hugging Face model from path:**

```bash
# MODEL_PARSER_API=<your openai api key>
MODEL_PARSER_API=$(echo $OPENAI_API_KEY) python -m mix_eval.evaluate \
  --data_path hf://zeitgeist-ai/mixeval \
  --model_path my/local/path \
  --output_dir results/agi-5 \
  --model_name local_chat \
  --benchmark mixeval_hard \
  --version 2024-06-01 \
  --batch_size 20 \
  --api_parallel_num 20
```
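
Since the fork is meant to make evaluation during training easy, one option is to trigger the CLI above from a `transformers` `TrainerCallback` on every checkpoint save. The callback below is a hedged sketch and not part of MixEval; the flags mirror the local-path example, the result directory naming is arbitrary, and `MODEL_PARSER_API`/`OPENAI_API_KEY` must already be exported so the subprocess inherits them.

```python
import subprocess

from transformers import TrainerCallback


class MixEvalCallback(TrainerCallback):
    """Hypothetical helper that runs MixEval on each saved checkpoint (not shipped with MixEval)."""

    def on_save(self, args, state, control, **kwargs):
        # Trainer writes checkpoints as <output_dir>/checkpoint-<global_step>.
        checkpoint = f"{args.output_dir}/checkpoint-{state.global_step}"
        subprocess.run(
            [
                "python", "-m", "mix_eval.evaluate",
                "--data_path", "hf://zeitgeist-ai/mixeval",
                "--model_path", checkpoint,
                "--model_name", "local_chat",
                "--benchmark", "mixeval_hard",
                "--version", "2024-06-01",
                "--batch_size", "20",
                "--api_parallel_num", "20",
                "--output_dir", f"results/step-{state.global_step}",
            ],
            check=True,  # fail loudly if the evaluation errors out
        )
        return control
```

Pass an instance via `Trainer(..., callbacks=[MixEvalCallback()])`; evaluating synchronously on every save pauses training for the duration of the run, so a less frequent save interval may be preferable.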

**Remote Hugging Face model without an existing config (using defaults):**

_Note: We use the model name `local_chat` to avoid the need for a config file; the model is loaded directly from the Hugging Face Hub._

```bash
# MODEL_PARSER_API=<your openai api key>
MODEL_PARSER_API=$(echo $OPENAI_API_KEY) python -m mix_eval.evaluate \
  --data_path hf://zeitgeist-ai/mixeval \
  --model_path alignment-handbook/zephyr-7b-sft-full \
  --output_dir results/handbook-zephyr \
  --model_name local_chat \
  --benchmark mixeval_hard \
  --version 2024-06-01 \
  --batch_size 20 \
  --api_parallel_num 20
```