# DeepSeek-R1-Distill-Qwen-7B-abliterated-GGUF
This model was created by merging a LoRA adapter into a base GGUF model with `llama-export-lora`, a tool from llama.cpp.

- Base model: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF
- LoRA adapter: https://huggingface.co/ggml-org/LoRA-Qwen2.5-7B-Instruct-abliterated-v3-F16-GGUF
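As a sketch, the merge step might look like the following. The local file names are hypothetical placeholders for the F16 GGUFs linked above; the flags (`-m`, `--lora`, `-o`) are the standard `llama-export-lora` options from llama.cpp.

```shell
# Merge the LoRA adapter into the base GGUF model with llama.cpp's
# llama-export-lora tool. File names below are placeholders.
./llama-export-lora \
  -m DeepSeek-R1-Distill-Qwen-7B-f16.gguf \
  --lora LoRA-Qwen2.5-7B-Instruct-abliterated-v3-F16.gguf \
  -o DeepSeek-R1-Distill-Qwen-7B-abliterated-f16.gguf
```

Note that merging works cleanly here because both the base model and the adapter are F16 GGUFs; merging a LoRA into an already-quantized model would lose precision.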
Downloads last month: 107

Model details:
- Format: GGUF
- Model size: 7.62B params
- Architecture: qwen2
- Precision: 16-bit (F16)
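Since the result is a standard GGUF file, it can be run directly with llama.cpp. A minimal sketch, assuming the merged file name from the example above (`-m`, `-p`, and `-n` are standard `llama-cli` options):

```shell
# Generate text from the merged GGUF with llama.cpp's CLI.
# -m: model path, -p: prompt, -n: maximum tokens to generate.
./llama-cli \
  -m DeepSeek-R1-Distill-Qwen-7B-abliterated-f16.gguf \
  -p "Why is the sky blue?" \
  -n 256
```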
Model tree for ngxson/DeepSeek-R1-Distill-Qwen-7B-abliterated-GGUF:
- Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- Quantized versions: 46 (including this model)