---
language:
- code
tags:
- llama-cpp
- Codestral-22B-v0.1
- gguf
- Q4_K_S
- 22B
- 4-bit
- Codestral
- mistralai
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
base_model: mistralai/Codestral-22B-v0.1
library_name: transformers
pipeline_tag: text-generation
---
# roleplaiapp/Codestral-22B-v0.1-Q4_K_S-GGUF
**Repo:** `roleplaiapp/Codestral-22B-v0.1-Q4_K_S-GGUF`
**Original Model:** `Codestral-22B-v0.1`
**Organization:** `mistralai`
**Quantized File:** `codestral-22b-v0.1-q4_k_s.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_S`
**Use Imatrix:** `False`
**Split Model:** `False`
## Overview
This is a GGUF Q4_K_S quantized version of [Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1).
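As a minimal sketch of how this quantized file can be used, assuming you have `huggingface_hub` (for `huggingface-cli`) and a built copy of llama.cpp on your `PATH` (the prompt and generation length below are arbitrary examples):

```shell
# Download the quantized file from this repo (≈12 GB) into the current directory
huggingface-cli download roleplaiapp/Codestral-22B-v0.1-Q4_K_S-GGUF \
  codestral-22b-v0.1-q4_k_s.gguf --local-dir .

# Run it with llama.cpp's CLI: -m selects the model, -p the prompt,
# -n the number of tokens to generate
llama-cli -m codestral-22b-v0.1-q4_k_s.gguf \
  -p "Write a Python function that reverses a string." -n 256
```

Note that Codestral-22B-v0.1 is released under the MNPL license linked above, so check the license terms before use.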
## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use by quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ RolePlai