---
base_model: nvidia/Minitron-4B-Base
inference: false
library_name: gguf
license: other
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
license_name: nvidia-open-model-license
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
---
# Minitron-4B-Base-GGUF
_Llama.cpp static quantization of [nvidia/Minitron-4B-Base](https://huggingface.co/nvidia/Minitron-4B-Base)_

Original Model: [nvidia/Minitron-4B-Base](https://huggingface.co/nvidia/Minitron-4B-Base)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp b3600
IMatrix dataset: here
## Files

### Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| --- | --- | --- | --- | --- | --- |
| Minitron-4B-Base.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q4_K | Q4_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | - |
### All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| --- | --- | --- | --- | --- | --- |
| Minitron-4B-Base.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q4_K | Q4_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q4_K_S | Q4_K_S | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.IQ4_NL | IQ4_NL | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.IQ4_XS | IQ4_XS | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q3_K_L | Q3_K_L | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q3_K_S | Q3_K_S | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.IQ3_M | IQ3_M | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.IQ3_S | IQ3_S | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.IQ3_XS | IQ3_XS | - | ⏳ Processing | ⚪ Static | - |
| Minitron-4B-Base.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | - |
## Downloading using huggingface-cli

If you do not have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Download the specific file you want:

```
huggingface-cli download legraphista/Minitron-4B-Base-GGUF --include "Minitron-4B-Base.Q8_0.gguf" --local-dir ./
```

If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download legraphista/Minitron-4B-Base-GGUF --include "Minitron-4B-Base.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
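If downloads are slow, `huggingface_hub`'s optional `hf_transfer` backend can usually speed up large file transfers. A minimal sketch, assuming you are able to install the extra dependency:

```
# Optional: enable the hf_transfer download backend (extra dependency of huggingface_hub).
pip install -U "huggingface_hub[hf_transfer]"
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download legraphista/Minitron-4B-Base-GGUF --include "Minitron-4B-Base.Q8_0.gguf" --local-dir ./
```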
## Inference

### Llama.cpp

```
llama.cpp/main -m Minitron-4B-Base.Q8_0.gguf --color -i -p "prompt here"
```
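Recent llama.cpp releases renamed the example binaries, so `main` may not exist in your build. A sketch of the equivalent invocation, assuming a build that ships the renamed `llama-cli` binary:

```
# Assumption: your llama.cpp build ships the renamed `llama-cli` binary instead of `main`.
./llama-cli -m Minitron-4B-Base.Q8_0.gguf --color -i -p "prompt here"
```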
## FAQ

### Why is the IMatrix not applied everywhere?

According to this investigation, it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
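For context, an importance matrix is computed from a calibration dataset and then passed to the quantizer. A minimal sketch with llama.cpp's tools; the binary names `llama-imatrix` and `llama-quantize` assume a recent release, and `calibration.txt` is a hypothetical calibration file:

```
# Compute an importance matrix from a calibration text file (hypothetical file name).
./llama-imatrix -m Minitron-4B-Base.BF16.gguf -f calibration.txt -o imatrix.dat
# Apply it while quantizing to a low-bit type, where it helps most.
./llama-quantize --imatrix imatrix.dat Minitron-4B-Base.BF16.gguf Minitron-4B-Base.IQ3_M.gguf IQ3_M
```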
### How do I merge a split GGUF?

- Make sure you have `gguf-split` available
  - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
  - Download the appropriate zip for your system from the latest release
  - Unzip the archive and you should be able to find `gguf-split`
- Locate your GGUF chunks folder (ex: `Minitron-4B-Base.Q8_0`)
- Run `gguf-split --merge Minitron-4B-Base.Q8_0/Minitron-4B-Base.Q8_0-00001-of-XXXXX.gguf Minitron-4B-Base.Q8_0.gguf`
  - Make sure to point `gguf-split` to the first chunk of the split (see the note on binary names below)
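On recent llama.cpp releases the split tool ships under a different name. A sketch of the same merge step, assuming the renamed `llama-gguf-split` binary:

```
# Assumption: newer release archives name the tool `llama-gguf-split` rather than `gguf-split`.
./llama-gguf-split --merge Minitron-4B-Base.Q8_0/Minitron-4B-Base.Q8_0-00001-of-XXXXX.gguf Minitron-4B-Base.Q8_0.gguf
```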
---

Got a suggestion? Ping me [@legraphista](https://huggingface.co/legraphista)!