Update README.md
README.md
CHANGED
@@ -20,6 +20,8 @@ This repository contains [`mistralai/Mistral-Small-Instruct-2409`](https://huggi
| GPTQ Mistral-Small-Instruct-2409 | 12.2 GB | 49.45 | 56.14 | 80.64 | 75.1 | 77.74 | 77.48 |
| xMADified Mistral-Small-Instruct-2409 (this model) | 12.2 GB | **68.59** | **57.51** | **82.83** | **77.74** | **79.56** | **81.34** |

+3. **Fine-tuning**: These models can be fine-tuned on the same reduced (12 GB) hardware in just 3 clicks. Watch our product demo [here](https://www.youtube.com/watch?v=S0wX32kT90s&list=TLGGL9fvmJ-d4xsxODEwMjAyNA).
+
# How to Run Model

Loading the checkpoint of this xMADified model requires less than 12 GiB of VRAM, so it runs efficiently on a 16 GB GPU.
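
The hunk ends before the README's own run instructions, so below is a minimal sketch of what loading looks like with Hugging Face Transformers under the "< 12 GiB of VRAM" claim. The repository id is a placeholder, and the GPTQ runtime dependencies (`optimum` plus a GPTQ backend such as `auto-gptq`) are an assumption; the commands in the full README take precedence.

```python
# Minimal sketch, not part of the diff above. The repo id below is a placeholder
# for the actual xMADified checkpoint; loading a GPTQ-quantized model with
# Transformers is assumed to require a GPTQ backend (e.g. optimum + auto-gptq).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xmadai/Mistral-Small-Instruct-2409-xMADai-4bit"  # placeholder, not the real id

tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" places the ~12 GB of quantized weights on the available GPU;
# a 16 GB card leaves headroom for activations and the KV cache during generation.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Mistral-Instruct chat format via the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain 4-bit quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```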