
huihui-ai/QwQ-32B-Preview-abliterated

This is an uncensored version of Qwen/QwQ-32B-Preview created with abliteration (see remove-refusals-with-transformers to learn more about the technique).
It is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

ollama

You can run huihui_ai/qwq-abliterated directly:

ollama run huihui_ai/qwq-abliterated

Model tree for async0x42/QwQ-32B-Preview-abliterated-exl2_4.65bpw

Base model: Qwen/Qwen2.5-32B
Quantized (115 quantizations): this model