---
library_name: transformers
pipeline_tag: text-generation
tags:
- 165b
- 3-bit
- Q3_K_S
- brainstorm
- deepseek
- distill
- gguf
- llama
- llama-cpp
- text-generation
---

# roleplaiapp/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1-Q3_K_S-GGUF

**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1-Q3_K_S-GGUF`
**Original Model:** `DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1`
**Quantized File:** `DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm.i1-Q3_K_S.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_S`

## Overview
This is a GGUF Q3_K_S quantized version of DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1. See the sketch under Usage below for one way to load it with llama-cpp.

## Quantization By
I often have idle GPUs while building and testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).
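
## Usage
A minimal sketch for loading this file, assuming `llama-cpp-python` and `huggingface_hub` are installed. The repo id and filename are copied from the fields above; the context size, sampling settings, and prompt are illustrative assumptions, not values recommended by the quantizer.

```python
# Sketch only: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Repo id and filename come from this card; n_ctx is an assumption.
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1-Q3_K_S-GGUF",
    filename="DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm.i1-Q3_K_S.gguf",
    n_ctx=4096,  # context window; raise if you have the memory
)

# R1-style distills emit reasoning before the final answer,
# so leave generous headroom in max_tokens.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain GGUF quantization."}],
    max_tokens=512,
    temperature=0.6,  # assumed setting, tune to taste
)
print(out["choices"][0]["message"]["content"])
```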