
QuantFactory/glm-4-9b-chat-abliterated-GGUF

This is a quantized version of byroneverson/glm-4-9b-chat-abliterated, created using llama.cpp.
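
A GGUF file from this repo can be run locally with llama-cpp-python. This is a minimal sketch; the .gguf filename below is a placeholder, so substitute whichever quant you actually downloaded:

```python
# Minimal inference sketch with llama-cpp-python; the exact .gguf
# filename is an assumption: use the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="glm-4-9b-chat-abliterated.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```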

Original Model Card

GLM 4 9B Chat - Abliterated

Check out the Jupyter notebook for details on how this model was abliterated from glm-4-9b-chat.
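
The notebook holds the actual procedure; as a rough illustration of the general abliteration idea only (not the notebook's code), one common recipe estimates a "refusal direction" from the difference in mean hidden states between harmful and harmless prompts, then removes that direction from weights that write into the residual stream:

```python
# Rough sketch of the general abliteration idea, not this model's notebook.
import torch

def refusal_direction(h_harmful: torch.Tensor, h_harmless: torch.Tensor) -> torch.Tensor:
    # h_*: (num_prompts, hidden_dim) hidden states at a chosen layer/position
    direction = h_harmful.mean(dim=0) - h_harmless.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # weight: (hidden_dim, in_dim) matrix whose output lands in the residual stream.
    # Subtract each output's projection onto the refusal direction.
    return weight - torch.outer(direction, direction) @ weight
```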

The Python package "tiktoken" is required to quantize the model into GGUF format, so I had to create a fork of GGUF My Repo (+tiktoken).
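
As a sketch of that conversion path (script and binary names follow recent llama.cpp layouts and may differ between versions):

```python
# Hedged sketch of the GGUF convert-then-quantize path, assuming a local
# llama.cpp checkout; script/binary names vary across llama.cpp versions.
import subprocess

# tiktoken is needed by the GLM-4 tokenizer during conversion
subprocess.run(["pip", "install", "tiktoken"], check=True)

# Convert the HF checkpoint to an f16 GGUF (pass a local model directory)
subprocess.run([
    "python", "llama.cpp/convert_hf_to_gguf.py",
    "glm-4-9b-chat-abliterated",  # local snapshot of the source model
    "--outfile", "glm-4-9b-chat-abliterated.f16.gguf",
    "--outtype", "f16",
], check=True)

# Quantize the f16 GGUF down to a smaller format, e.g. Q4_K_M
subprocess.run([
    "llama.cpp/llama-quantize",
    "glm-4-9b-chat-abliterated.f16.gguf",
    "glm-4-9b-chat-abliterated.Q4_K_M.gguf",
    "Q4_K_M",
], check=True)
```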


GGUF

Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

