---
language:
- code
tags:
- llama-cpp
- Codestral-22B-v0.1
- gguf
- Q5_K_M
- 22B
- 5-bit
- Codestral
- mistralai
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
extra_gated_description: If you want to learn more about how we process your personal data, please read our Privacy Policy.
base_model: mistralai/Codestral-22B-v0.1
library_name: transformers
pipeline_tag: text-generation
---

# roleplaiapp/Codestral-22B-v0.1-Q5_K_M-GGUF

**Repo:** `roleplaiapp/Codestral-22B-v0.1-Q5_K_M-GGUF`
**Original Model:** `Codestral-22B-v0.1`
**Organization:** `mistralai`
**Quantized File:** `codestral-22b-v0.1-q5_k_m.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q5_K_M`
**Use Imatrix:** `False`
**Split Model:** `False`

## Overview
This is a GGUF Q5_K_M quantized version of [Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1).

## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)
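As a minimal sketch of how this quantized file could be used, assuming the `llama-cpp-python` package is installed and `codestral-22b-v0.1-q5_k_m.gguf` has been downloaded from this repo (the `[INST] ... [/INST]` template below is Mistral's standard instruct format; the prompt text and parameters are illustrative):

```python
from pathlib import Path

# Name of the quantized file from this repo (assumed downloaded to the working directory)
MODEL_FILE = "codestral-22b-v0.1-q5_k_m.gguf"

def format_prompt(user_message: str) -> str:
    # Mistral-style instruction template commonly used with Codestral's instruct variant
    return f"[INST] {user_message} [/INST]"

def run(prompt: str) -> str:
    path = Path(MODEL_FILE)
    if not path.exists():
        # Model not present locally; return a notice instead of failing
        return f"{MODEL_FILE} not found - download it from this repo first"
    from llama_cpp import Llama  # pip install llama-cpp-python
    llm = Llama(model_path=str(path), n_ctx=4096)
    out = llm(format_prompt(prompt), max_tokens=128)
    return out["choices"][0]["text"]

print(run("Write a Python function that reverses a string."))
```

The Q5_K_M file fits comfortably on a single 24 GB GPU when layers are offloaded via `n_gpu_layers`; on CPU-only machines it will still run, just more slowly.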