---
library_name: transformers
pipeline_tag: text-generation
tags:
- 14b
- 6-bit
- Q6_K
- deepseek
- distill
- gguf
- japanese
- llama-cpp
- qwen
- text-generation
---

# roleplaiapp/DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q6_K-GGUF

**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q6_K-GGUF`  
**Original Model:** `DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf`  
**Quantized File:** `DeepSeek-R1-Distill-Qwen-14B-Japanese-Q6_K.gguf`  
**Quantization:** `GGUF`  
**Quantization Method:** `Q6_K`

## Overview

This is a GGUF Q6_K quantized version of `DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf`.
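Below is a minimal sketch of loading the quantized file with llama-cpp-python (assumes `llama-cpp-python` and `huggingface_hub` are installed; the context size, prompt, and generation parameters are illustrative, not tuned for this model):

```python
from llama_cpp import Llama

# Download the Q6_K file from this repo and load it (parameters are illustrative).
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Qwen-14B-Japanese-gguf-Q6_K-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-14B-Japanese-Q6_K.gguf",
    n_ctx=4096,  # context window; adjust to available memory
)

# Simple chat-style generation; the Japanese prompt is just an example.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "日本語で自己紹介してください。"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```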
## Quantization By

I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).