---
license: gemma
language:
- en
pipeline_tag: text-generation
tags:
- google
- gemma
- gguf
- imatrix
base_model: google/gemma-2-27b-it
---

# Quant Infos

## Updated for all recent llama.cpp fixes (final logit soft capping + sliding window + tokenizer)

- Quants generated with an importance matrix to reduce quantization loss
- Requantized GGUFs & imatrix from the HF bf16 weights
  - the initial version was based on the f32 GGUF provided by Google, which had various issues
  - also updated for all recent llama.cpp fixes (final logit soft capping + sliding window + tokenizer)
- Wide coverage of different GGUF quant types, from Q8_0 down to IQ1_S
- Experimental custom quant types (see the sketch below)
  - `_L` with `--output-tensor-type f16 --token-embedding-type f16` (same as bartowski's)
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [5fac350b9cc49d0446fc291b9c4ad53666c77591](https://github.com/ggerganov/llama.cpp/commit/5fac350b9cc49d0446fc291b9c4ad53666c77591) (master from 2024-07-02)
- Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski):

```
./imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
```
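
For reference, a minimal sketch of how one of the experimental `_L` quants could be reproduced with the pinned commit above. Here `$model_name` and the `Q4_K_L` output filename are placeholders following this card's labeling convention, not official llama.cpp type names, and binary names vary across llama.cpp versions (older builds ship `./quantize` instead of `./llama-quantize`):

```
# build llama.cpp at the commit used for these quants
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 5fac350b9cc49d0446fc291b9c4ad53666c77591
make -j

# apply the imatrix, keeping the output and token-embedding
# tensors at f16 (the `_L` variants)
./llama-quantize --imatrix $model_name.imatrix \
  --output-tensor-type f16 --token-embedding-type f16 \
  $model_name-bf16.gguf $model_name-Q4_K_L.gguf Q4_K_M
```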
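
To sanity-check a finished quant (the model filename below is a placeholder), use a llama.cpp build at or after the pinned commit, since older builds lack the Gemma 2 soft-capping, sliding-window, and tokenizer fixes listed above:

```
# short generation to verify the quant loads and runs correctly
./llama-cli -m gemma-2-27b-it-Q4_K_M.gguf -p "Why is the sky blue?" -n 256
```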

# Original Model Card

TODO