---
library_name: transformers
pipeline_tag: text-generation
tags:
- 3-bit
- 70b
- Q3_K_L
- deepseek
- distill
- gguf
- llama
- llama-cpp
- text-generation
- uncensored
---
# roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_L-GGUF
**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_L-GGUF`
**Original Model:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2`
**Quantized File:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q3_K_L.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_L`
## Overview
This is a GGUF Q3_K_L quantized version of `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2`.
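
A minimal usage sketch (not part of the original card), assuming you have `llama-cpp-python` and `huggingface_hub` installed; the context size and GPU-offload settings below are illustrative choices, not recommendations from the quantizer.

```python
# Sketch: download this Q3_K_L quant from the Hub and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the quantized file from this repo (cached locally by huggingface_hub).
model_path = hf_hub_download(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-Q3_K_L-GGUF",
    filename="DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.Q3_K_L.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window; adjust to your available memory
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```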
## Quantization By
I often have idle GPUs while building and testing the RolePlai app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).