JonahYixMAD
committed on
Update README.md
README.md
CHANGED
@@ -15,14 +15,10 @@ This repository contains [`mistralai/Mistral-Small-Instruct-2409`](https://huggi
2. **Accuracy:** This xMADified model preserves the quality of the full-precision model. In the table below, we present the zero-shot accuracy on popular benchmarks of this xMADified model against the [GPTQ](https://github.com/AutoGPTQ/AutoGPTQ)-quantized model (both w4g128 for a fair comparison). GPTQ fails on the difficult **MMLU** task, while the xMADai model offers significantly higher accuracy.
-| **Arc Easy** | 80.64 → **82.83** |
-| **LAMBADA** | 75.1 → **77.74** |
-| **WinoGrande** | 77.74 → **79.56** |
-| **PIQA** | 77.48 → **81.34** |
+| Model | MMLU | Arc Challenge | Arc Easy | LAMBADA | WinoGrande | PIQA |
+|---|---|---|---|---|---|---|
+| GPTQ Mistral-Small-Instruct-2409 | 49.45 | 56.14 | 80.64 | 75.1 | 77.74 | 77.48 |
+| xMADai Mistral-Small-Instruct-2409 | **68.59** | **57.51** | **82.83** | **77.74** | **79.56** | **81.34** |
# How to Run Model
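The card does not say which toolkit produced the zero-shot numbers in the table above; a common choice that covers all six benchmarks is EleutherAI's lm-evaluation-harness. The sketch below assumes that harness (v0.4+) and uses a placeholder repository ID, not a confirmed repo name, so treat it as a rough reproduction recipe rather than the authors' exact setup.

```python
# Minimal sketch: re-run the zero-shot benchmarks with lm-evaluation-harness (pip install lm-eval).
# Assumptions: the harness itself and the model ID below are NOT taken from this card;
# substitute the actual xMADified checkpoint repository.
from lm_eval import simple_evaluate

MODEL_ID = "xmadai/Mistral-Small-Instruct-2409-xMADai-INT4"  # placeholder repo ID

results = simple_evaluate(
    model="hf",                                        # Hugging Face transformers backend
    model_args=f"pretrained={MODEL_ID},device_map=auto",
    tasks=["mmlu", "arc_challenge", "arc_easy",
           "lambada_openai", "winogrande", "piqa"],
    num_fewshot=0,                                     # zero-shot, matching the table above
    batch_size=8,
)

# Print per-task metrics (metric key names vary by task, e.g. "acc,none").
for task, metrics in results["results"].items():
    print(task, metrics)
```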