[Request #46]
natong19/Qwen2-7B-Instruct-abliterated

This model is tailored for specific use cases; please read the original model page for details.

Prompt formatting:
ChatML
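
For reference, a minimal sketch of assembling a ChatML prompt by hand. The `<|im_start|>`/`<|im_end|>` delimiters are the standard ChatML layout; nothing here is specific to this quantization, and the system/user strings are placeholders.

```python
# Minimal sketch: build a ChatML-formatted prompt string by hand.
# Uses only the standard ChatML role delimiters.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Explain what an abliterated model is in one sentence.",
)
print(prompt)
```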

Requester:
"An abliterated version of Qwen-2."

Use with the latest version of KoboldCpp.
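
As an illustration only, a request against a locally running KoboldCpp instance might look like the sketch below. The port, endpoint, and payload fields follow KoboldCpp's default local HTTP API as I understand it and are assumptions, not something stated on this page; adjust them to match your setup.

```python
# Hypothetical sketch: query a locally running KoboldCpp instance that was
# started with this GGUF file. Assumes the default port 5001 and the
# /api/v1/generate endpoint; adjust the URL and fields if yours differ.
import json
import urllib.request

payload = {
    "prompt": (
        "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
        "<|im_start|>user\nHello!<|im_end|>\n"
        "<|im_start|>assistant\n"
    ),
    "max_length": 200,
    "stop_sequence": ["<|im_end|>"],
}

req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["results"][0]["text"])
```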


GGUF details:
Model size: 7.62B params
Architecture: qwen2
Quantizations available: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
