legraphista committed
Upload README.md with huggingface_hub
README.md
CHANGED
@@ -52,7 +52,7 @@ Link: [here](https://huggingface.co/legraphista/internlm2-math-plus-7b-IMat-GGUF)
 | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
 | -------- | ---------- | --------- | ------ | ------------ | -------- |
 | internlm2-math-plus-7b.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
-| internlm2-math-plus-7b.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
+| [internlm2-math-plus-7b.Q6_K.gguf](https://huggingface.co/legraphista/internlm2-math-plus-7b-IMat-GGUF/blob/main/internlm2-math-plus-7b.Q6_K.gguf) | Q6_K | 6.35GB | ✅ Available | ⚪ Static | 📦 No |
 | internlm2-math-plus-7b.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
 | internlm2-math-plus-7b.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
 | internlm2-math-plus-7b.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
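Once a row flips to ✅ Available, the file can be fetched directly from the repo. A minimal sketch using `huggingface_hub.hf_hub_download` (the repo id and filename are taken from the table above; the download is several GB, so this is illustrative rather than something to run casually):

```python
# Sketch: fetch the Q6_K quant listed as Available in the table.
# repo_id and filename come from the README table; nothing else is assumed.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/internlm2-math-plus-7b-IMat-GGUF",
    filename="internlm2-math-plus-7b.Q6_K.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```

The returned path points into the local Hugging Face cache, so repeated calls reuse the already-downloaded file.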