Ichsan2895 committed
Commit 2c1c8e5
Parent(s): 78dde89
Update README.md

README.md CHANGED
@@ -47,7 +47,7 @@ They are also compatible with many third party UIs and libraries - please see th
 ### Provided files
 
 | Name | Quant method | Bits | Size | Use case |
-| ---- | ---- | ---- | ---- |
+| ---- | ---- | ---- | ---- | ----- |
 | [Merak-7B-v3-model-Q2_K.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v3-GGUF/blob/main/Merak-7B-v3-model-q2_k.gguf) | Q2_K | 2 | 3.08 GB| smallest, significant quality loss - not recommended for most purposes |
 | [Merak-7B-v3-model-Q3_K_M.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v3-GGUF/blob/main/Merak-7B-v3-model-q3_k_m.gguf) | Q3_K_M | 3 | 3.52 GB| very small, high quality loss |
 | [Merak-7B-v3-model-Q4_0.gguf](https://huggingface.co/Ichsan2895/Merak-7B-v3-GGUF/blob/main/Merak-7B-v3-model-q4_0.gguf) | Q4_0 | 4 | 4.11 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
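The table's links point at the `blob/main/` web-viewer pages for each quantized file. A minimal sketch of building direct-download URLs for these files, assuming the standard Hugging Face convention that raw files are served from `resolve/main/` and that the repo's filenames use the lowercase quant suffixes visible in the links above (e.g. `Merak-7B-v3-model-q2_k.gguf`):

```python
# Sketch under assumptions: raw files live at .../resolve/main/<filename>,
# and filenames follow the lowercase-quant pattern seen in the table links.
REPO_ID = "Ichsan2895/Merak-7B-v3-GGUF"

def gguf_url(quant: str) -> str:
    """Return the direct-download URL for a quant method such as 'Q2_K'."""
    filename = f"Merak-7B-v3-model-{quant.lower()}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

print(gguf_url("Q4_0"))
# → https://huggingface.co/Ichsan2895/Merak-7B-v3-GGUF/resolve/main/Merak-7B-v3-model-q4_0.gguf
```

In practice the `huggingface_hub` library's `hf_hub_download(repo_id=..., filename=...)` does the same resolution (plus caching), so the helper above is only useful when you want the URL itself, e.g. for `wget` or `curl`.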