---
license: apache-2.0
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
---

# Mistral-Small-24B-Instruct-2501-GGUF
This repo provides two GGUF quantizations of [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501):
| Filename | File size | Description | TL;DR |
| --- | --- | --- | --- |
| Mistral-Small-24B-Instruct-2501-q8_0-q4_K_S.gguf | 14.05 GB | q4_K_S quantization, with q8_0 for the token embedding and output tensors | Good quality, smaller size |
| Mistral-Small-24B-Instruct-2501-q8_0-q6_K.gguf | 19.67 GB | q6_K quantization, with q8_0 for the token embedding and output tensors | Practically perfect quality, larger size |
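As a sketch of how these files might be used (assuming a local `llama.cpp` build with `llama-cli` on the path, and `huggingface-cli` installed; the `<this-repo-id>` placeholder stands in for this repository's actual id):

```shell
# Download one of the quantizations from this repo
# (<this-repo-id> is a placeholder -- substitute the real repo id)
huggingface-cli download <this-repo-id> \
  Mistral-Small-24B-Instruct-2501-q8_0-q4_K_S.gguf --local-dir .

# Run an interactive chat session with llama.cpp
llama-cli -m Mistral-Small-24B-Instruct-2501-q8_0-q4_K_S.gguf \
  --conversation
```

The q4_K_S file is the sensible default on machines with roughly 16 GB of memory to spare; the q6_K file trades a larger footprint for quality closer to the unquantized model.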