readme: update quants
README.md
````diff
@@ -82,10 +82,13 @@ quantize \
 - q8_0 (later, please use q4_k_m for now) [estimated size: 233.27gb]
 - q4_k_m [size: 132gb]
 - q2_k [size: 80gb]
-- iq2_xxs
--
+- iq2_xxs [size: 61.5gb]
+- iq3_xs (uploading) [size: 89.6gb]
+- iq1_m [size: 27.3gb]
 ```
 
+Note: Use iMatrix quants only if you can fully offload to GPU, otherwise speed will be affected a lot.
+
 # Planned Quants (using importance matrix):
 ```
 - q5_k_m
@@ -97,8 +100,7 @@ quantize \
 - iq2_xs
 - iq2_s
 - iq2_m
-- iq1_s
-- iq1_m
+- iq1_s (note: for fun only, this quant is likely useless)
 ```
 
 Note: the model files do not have some DeepSeek v2 specific parameters, will look into adding them
````
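The added note says to use iMatrix quants only with full GPU offload. As a minimal sketch of what full offload means in llama.cpp, assuming its standard `-ngl` / `--n-gpu-layers` flag (the model filename below is a placeholder, not a file from this repo):

```sh
# Setting -ngl above the model's layer count offloads every layer to the GPU.
# Partial offload (a smaller -ngl) is the case the note warns about: iMatrix
# quants are much slower when some layers stay on the CPU.
./main -m deepseek-v2-iq2_xxs.gguf -ngl 99 -p "Hello"
```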
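For the "Planned Quants (using importance matrix)", the usual llama.cpp workflow is roughly the two steps below. This is a sketch under assumptions, not the author's exact commands: the model and calibration filenames are placeholders, and newer llama.cpp builds name the binaries `llama-imatrix` and `llama-quantize` instead.

```sh
# 1. Compute an importance matrix from a calibration corpus (placeholder file).
./imatrix -m deepseek-v2-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize using that matrix; IQ2_XS is one of the quants planned above.
./quantize --imatrix imatrix.dat deepseek-v2-f16.gguf deepseek-v2-iq2_xs.gguf IQ2_XS
```

The importance matrix records which weights matter most on the calibration data, which is what lets the very low-bit IQ formats keep usable quality.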