• Extracted a rank-64 LoRA from DeepSeek-R1-Distill-Qwen-32B (see the extraction sketch below)
• Merged the adapter and quantized to Q4_K_M (see the merge/quantize sketch below)
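Extracting a LoRA from a full fine-tune amounts to a low-rank decomposition of the weight delta between the tuned model and its base. A minimal per-matrix sketch of that technique using a truncated SVD; the function name and the sqrt-S split between the two factors are illustrative, not the exact tooling used for this model:

```python
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 64):
    """Approximate the weight delta with a rank-`rank` LoRA (B @ A)."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep the top-`rank` singular directions; split sqrt(S) between A and B
    # so the product B @ A reconstructs the truncated delta.
    sqrt_s = torch.sqrt(s[:rank])
    lora_b = u[:, :rank] * sqrt_s          # (out_features, rank)
    lora_a = sqrt_s[:, None] * vh[:rank]   # (rank, in_features)
    return lora_a, lora_b
```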
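Merging and quantizing follows the standard peft + llama.cpp workflow. A hedged sketch, assuming the adapter is applied to a Qwen2.5 coder base; the base model id, adapter path, and output names are assumptions, not confirmed by the card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Assumed base model and adapter path; substitute the actual ones.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-32B-Instruct", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "path/to/extracted-lora").merge_and_unload()
merged.save_pretrained("qwen-coder-thinker")

# GGUF conversion and Q4_K_M quantization are then done with llama.cpp:
#   python convert_hf_to_gguf.py qwen-coder-thinker --outfile model-f16.gguf
#   ./llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```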

Note: The model seems to work somewhat with R1's chat template as well, but it repeats random Chinese characters and the output quality is consistently worse.

Maybe try using the R1 tokenizer.
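A quick way to test that, assuming "use the R1 tokenizer" means formatting prompts with R1's chat template before passing them to the GGUF model as a raw prompt:

```python
from transformers import AutoTokenizer

# Load the tokenizer (and its chat template) from the upstream distill repo.
tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Write a binary search in Python."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # feed this string to llama.cpp / llama-cpp-python as-is
```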

• Model: Ba2han/qwen-coder-thinker-q4_k_m
• Format: GGUF, 4-bit (Q4_K_M)
• Model size: 32.8B params
• Architecture: qwen2