About

weighted/imatrix quants of https://huggingface.co/mlabonne/BigLlama-3.1-1T-Instruct

static quants are available at https://huggingface.co/mradermacher/BigLlama-3.1-1T-Instruct-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
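For a model this large, every quant below ships as several split files that must be joined into a single GGUF before loading. Here is a minimal Python sketch of that join; the part-file naming used below is an assumption for illustration, so check the actual file names in the repository first:

```python
import glob
import re
import shutil

# Assumed part-file naming, for illustration only -- verify against the
# repository's actual file list before running.
PATTERN = "BigLlama-3.1-1T-Instruct.i1-IQ1_S.gguf.part*"
OUTPUT = "BigLlama-3.1-1T-Instruct.i1-IQ1_S.gguf"

parts = glob.glob(PATTERN)
assert parts, "no part files found"
# Sort numerically so that e.g. part10 does not sort before part2.
parts.sort(key=lambda p: int(re.search(r"part(\d+)", p).group(1)))

with open(OUTPUT, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streamed copy, constant memory
print(f"joined {len(parts)} parts into {OUTPUT}")
```

The parts must be concatenated in order and byte-for-byte, which is all this does; `cat` in a shell achieves the same thing.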

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

Type        Parts  Size/GB  Notes
i1-IQ1_S    5      214.1    for the desperate
i1-IQ1_M    5      234.9    mostly desperate
i1-IQ2_XXS  6      269.5
i1-IQ2_XS   7      299.8
i1-IQ2_S    7      315.4
i1-IQ2_M    7      343.0
i1-Q2_K     8      374.5    IQ3_XXS probably better
i1-IQ3_XXS  8      390.8    lower quality
i1-IQ3_XS   10     416.4
i1-Q3_K_S   9      438.8    IQ3_XS probably better
i1-IQ3_S    9      440.2    beats Q3_K*
i1-IQ3_M    10     455.9
i1-Q3_K_M   10     490.0    IQ3_S probably better
i1-Q3_K_L   11     534.1    IQ3_M probably better
i1-IQ4_XS   13     543.7
i1-Q4_0     12     575.9    fast, low quality
i1-Q4_K_S   12     578.1    optimal size/speed/quality
i1-Q4_K_M   13     610.5    fast, recommended
i1-Q5_K_S   16     700.9
i1-Q5_K_M   15     719.8
i1-Q6_K     17     836.0    practically like static Q6_K

(Parts = number of split files per quant; the individual download links for each part are in the repository's file list.)
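To fetch only the parts belonging to one quant rather than the whole repository, `huggingface_hub` can filter by filename pattern. A sketch; the `allow_patterns` glob is an assumption about the file naming, so verify it against the repo's file list:

```python
from huggingface_hub import snapshot_download

# Download just the i1-IQ2_M parts (~343 GB per the table above).
snapshot_download(
    repo_id="mradermacher/BigLlama-3.1-1T-Instruct-i1-GGUF",
    allow_patterns=["*i1-IQ2_M*"],  # assumed naming -- check the file list
    local_dir="BigLlama-3.1-1T-Instruct-i1-GGUF",
)
```

After downloading, join the parts into a single GGUF as shown in the Usage section above.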

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: ikawrakow's quant-type comparison graph]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

