JonahYixMAD committed
Commit 69bbf22 · verified · 1 Parent(s): fb412ec

Update README.md

Files changed (1):
  1. README.md +7 -7
README.md CHANGED
@@ -15,14 +15,14 @@ This repository contains [`mistralai/Mistral-Small-Instruct-2409`](https://huggi
 
 2. **Accuracy:** This xMADified model preserves the quality of the full-precision model. In the table below, we present the zero-shot accuracy on popular benchmarks of this xMADified model against the [GPTQ](https://github.com/AutoGPTQ/AutoGPTQ)-quantized model (both w4g128 for a fair comparison). GPTQ fails on the difficult **MMLU** task, while the xMADai model offers significantly higher accuracy.
 
-| | xMADai Mistral-Small-Instruct-2409 (compared to GPTQ Mistral-Small-Instruct-2409) |
+| Benchmark | xMADai Mistral-Small-Instruct-2409 (compared to GPTQ Mistral-Small-Instruct-2409) |
 |---|---|
-| MMLU | 49.45 → **68.59** |
-| Arc Challenge | 56.14 → **57.51** |
-| Arc Easy | 80.64 → **82.83** |
-| LAMBADA | 75.1 → **77.74** |
-| WinoGrande | 77.74 → **79.56** |
-| PIQA | 77.48 → **81.34** |
+| **MMLU** | 49.45 → **68.59** |
+| **Arc Challenge** | 56.14 → **57.51** |
+| **Arc Easy** | 80.64 → **82.83** |
+| **LAMBADA** | 75.1 → **77.74** |
+| **WinoGrande** | 77.74 → **79.56** |
+| **PIQA** | 77.48 → **81.34** |
 
 # How to Run Model
 
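The table touched by this commit reports zero-shot accuracy for the w4g128 xMADified checkpoint. For orientation only, here is a minimal loading sketch, assuming the checkpoint is published as a standard GPTQ-format repository on the Hugging Face Hub; the repo id below is a placeholder, and the model card's own "How to Run Model" section (not included in this hunk) remains the authoritative reference.

```python
# Minimal sketch, NOT the repository's official instructions.
# Assumptions: accelerate plus a GPTQ backend (e.g. auto-gptq) are installed,
# and "<xmadai-repo-id>" stands in for the actual xMADified repository name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<xmadai-repo-id>"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # spread the 4-bit (w4g128) weights across available GPUs
    torch_dtype=torch.float16,
)

prompt = "Summarize the benefits of 4-bit weight-only quantization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```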