---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-cpp
- DeepSeek-R1-Distill-Qwen-32B
- gguf
- Q6_K
- 32b
- deepseek-r1
- qwen
- deepseek-ai
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---
# roleplaiapp/DeepSeek-R1-Distill-Qwen-32B-Q6_K-GGUF

**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Qwen-32B-Q6_K-GGUF`
**Original Model:** `DeepSeek-R1-Distill-Qwen-32B`
**Organization:** `deepseek-ai`
**Quantized File:** `deepseek-r1-distill-qwen-32b-q6_k.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q6_K`
**Use Imatrix:** `False`
**Split Model:** `False`
## Overview

This is a GGUF Q6_K quantized version of [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B).
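Below is a minimal usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), one common way to load GGUF files; the context size and GPU offload values are illustrative assumptions, not settings recommended by the quantizer.

```python
# Minimal sketch: loading the Q6_K GGUF file with llama-cpp-python.
# Install first: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-r1-distill-qwen-32b-q6_k.gguf",  # the quantized file listed above
    n_ctx=4096,       # context window; illustrative, tune to available memory
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what Q6_K quantization trades off."}]
)
print(response["choices"][0]["message"]["content"])
```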
## Quantization By

I often have A100 GPUs sitting idle while building, testing, and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)