Update README.md
Using llama.cpp fork: https://github.com/fairydreaming/llama.cpp/tree/deepseek-
- Merged GGUF should appear

# Quants:

- bf16 (generating, 85% complete) [size: 440 GB]
- f16 (after q2_k, but just use bf16) [estimated size: ~400 GB]
- f32 (may require some time to upload, after q8_0) [estimated size: ~800 GB]
- q8_0 (after bf16) [estimated size: 233.27 GB]
- q4_k_m (after q8_0) [estimated size: 133.10 GB]
- q2_k (after q4_k_m) [estimated size: ~65 GB]
- q3_k_s (low priority) [estimated size: 96.05 GB]
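The bracketed sizes follow from simple bits-per-weight arithmetic: file size ≈ parameter count × bits per weight ÷ 8. A minimal sketch of that estimate, assuming approximate llama.cpp bits-per-weight figures and an example parameter count (neither taken from this README):

```python
# Approximate bits-per-weight for common GGUF quant types (assumed
# ballpark figures, not exact for any particular model architecture).
BPW = {"f32": 32.0, "bf16": 16.0, "f16": 16.0,
       "q8_0": 8.5, "q4_k_m": 4.85, "q2_k": 2.63}

def estimated_size_gb(n_params_billions: float, quant: str) -> float:
    """Estimated file size in decimal GB: params * bits-per-weight / 8 bytes."""
    return n_params_billions * BPW[quant] / 8

# Example with an assumed 236B-parameter model:
for q in ("bf16", "q8_0", "q4_k_m"):
    print(f"{q}: ~{estimated_size_gb(236, q):.1f} GB")
```

Real GGUF files come out slightly larger than this estimate because of metadata and mixed-precision tensors (embeddings and output layers are often kept at higher precision).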
If quantize.exe supports it, I will make RTN quants.
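For context, quants like the ones listed above are typically produced with llama.cpp's conversion and quantization tools. A minimal sketch, assuming a recent llama.cpp build; all paths and filenames are placeholders, and the exact script/binary names vary between llama.cpp revisions (e.g. quantize.exe vs. llama-quantize.exe on Windows):

```shell
# 1. Convert the HF checkpoint to a bf16 GGUF master copy
#    (placeholder paths; convert_hf_to_gguf.py ships with llama.cpp)
python convert_hf_to_gguf.py ./model-dir --outtype bf16 --outfile model-bf16.gguf

# 2. Derive the smaller quants from the bf16 master copy
./llama-quantize model-bf16.gguf model-Q8_0.gguf Q8_0
./llama-quantize model-bf16.gguf model-Q4_K_M.gguf Q4_K_M
```

Quantizing from bf16 (rather than re-quantizing an already-quantized file) avoids compounding rounding error, which is why the list above derives everything from the bf16 generation step.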