nisten/deepseek-r1-qwen32b-mlx-6bit
Likes: 14
Tags: Text Generation · Transformers · Safetensors · qwen2 · code · conversational · text-generation-inference · Inference Endpoints · 6-bit
License: mit
This is a 6-bit quant of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B.
Probably the sweet spot for running o1 at home :)
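The card itself does not show usage instructions. As a sketch (not from the original card): since this is an MLX quant, it is assumed to run via Apple's mlx-lm package on Apple-silicon Macs, e.g.:

```shell
# Assumption: this quant loads with the mlx-lm CLI on an Apple-silicon Mac.
pip install mlx-lm

# One-off generation straight from the Hub repo:
mlx_lm.generate \
  --model nisten/deepseek-r1-qwen32b-mlx-6bit \
  --prompt "Write a Python function that reverses a linked list." \
  --max-tokens 512
```

The first run downloads the weights from the Hub; subsequent runs use the local cache.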
Downloads last month: 143
Safetensors
Model size: 6.66B params
Tensor types: FP16 · U32
Model tree for nisten/deepseek-r1-qwen32b-mlx-6bit:
Base model: Qwen/Qwen2.5-32B
Finetuned: Qwen/Qwen2.5-Coder-32B
Quantized (18): this model