Update README.md
README.md CHANGED

```diff
@@ -16,7 +16,7 @@ base_model: mistralai/Mistral-Nemo-Instruct-2407
 
 - needs [#8604](https://github.com/ggerganov/llama.cpp/pull/8604) & latest master with tekken tokenizer fixes applied
 - quants done with an importance matrix for improved quantization loss
-
+- Quantized ggufs & imatrix from hf bf16, through bf16. `safetensors bf16 -> gguf bf16 -> quant` for optimal quant loss.
 - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
 - experimental custom quant types
 - `_L` with `--output-tensor-type f16 --token-embedding-type f16` (same as bartowski's)
```
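The `safetensors bf16 -> gguf bf16 -> quant` pipeline the added line describes can be sketched with llama.cpp's own tools. This is an illustrative sketch, not the uploader's exact commands: the local model directory, output filenames, and `calibration.txt` corpus are placeholders; `convert_hf_to_gguf.py`, `llama-imatrix`, and `llama-quantize` ship with llama.cpp.

```shell
# 1. Convert the HF safetensors checkpoint straight to a bf16 GGUF
#    (no intermediate f16 step, so no precision loss before quantization).
python convert_hf_to_gguf.py ./Mistral-Nemo-Instruct-2407 \
    --outtype bf16 --outfile Mistral-Nemo-Instruct-2407-bf16.gguf

# 2. Compute the importance matrix from the bf16 GGUF.
#    calibration.txt stands in for whatever calibration corpus is used.
./llama-imatrix -m Mistral-Nemo-Instruct-2407-bf16.gguf \
    -f calibration.txt -o imatrix.dat

# 3. Quantize from bf16 with the imatrix, e.g. down to IQ1_S.
./llama-quantize --imatrix imatrix.dat \
    Mistral-Nemo-Instruct-2407-bf16.gguf \
    Mistral-Nemo-Instruct-2407-IQ1_S.gguf IQ1_S

# 4. An "_L" variant: keep the output and token-embedding tensors at f16
#    while quantizing the rest (the flags named in the README).
./llama-quantize --imatrix imatrix.dat \
    --output-tensor-type f16 --token-embedding-type f16 \
    Mistral-Nemo-Instruct-2407-bf16.gguf \
    Mistral-Nemo-Instruct-2407-Q4_K_L.gguf Q4_K_M
```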