Commit 9e40799 (verified) · Lin-K76 committed · 1 parent: 62a997c

Update README.md

Files changed (1): README.md (+9 -8)
README.md CHANGED
@@ -25,7 +25,7 @@ language:
 - **Model Developers:** Neural Magic
 
 Quantized version of [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct).
-It achieves an average score of 77.75 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 78.67.
+It achieves an average score of 82.00 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 82.74.
 
 ### Model Optimizations
 
@@ -162,7 +162,8 @@ oneshot(
 
 ## Evaluation
 
-The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
+The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command.
+A modified version of ARC-C was used for evaluations, in line with Llama 3.1's prompting.
 ```
 lm_eval \
   --model vllm \
@@ -198,11 +199,11 @@ lm_eval \
 <tr>
   <td>ARC Challenge (25-shot)
   </td>
-  <td>70.65
+  <td>82.74
   </td>
-  <td>69.03
+  <td>82.00
   </td>
-  <td>97.71%
+  <td>98.93%
   </td>
 </tr>
 <tr>
@@ -248,11 +249,11 @@ lm_eval \
 <tr>
   <td><strong>Average</strong>
   </td>
-  <td><strong>78.67</strong>
+  <td><strong>82.74</strong>
   </td>
-  <td><strong>77.75</strong>
+  <td><strong>82.00</strong>
   </td>
-  <td><strong>98.82%</strong>
+  <td><strong>99.10%</strong>
   </td>
 </tr>
 </table>
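
The evaluation command in the hunk above is truncated by the diff context after `--model vllm`. As a point of reference, a minimal sketch of a complete lm-evaluation-harness invocation against the vLLM backend follows; the checkpoint id, parallelism settings, and task selection here are illustrative assumptions, not values taken from this commit.

```
# Hedged sketch only: the README's full command is cut off in this hunk.
# MODEL is a placeholder; substitute the actual quantized checkpoint id.
MODEL="neuralmagic/<quantized-llama-3.1-70b-instruct-checkpoint>"

lm_eval \
  --model vllm \
  --model_args pretrained="$MODEL",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=4 \
  --tasks openllm \
  --batch_size auto
```

Reading the table's Recovery column as the quantized score relative to the unquantized baseline, the updated Average row is consistent: 82.00 / 82.74 ≈ 99.1%, matching the 99.10% shown.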