---
base_model: THUDM/glm-4-9b-chat
pipeline_tag: text-generation
license: other
license_name: glm-4
license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
- chat
- abliterated
library_name: transformers
---
# GLM 4 9B Chat - Abliterated
Check out the <a href="https://huggingface.co/byroneverson/glm-4-9b-chat-abliterated/blob/main/abliterate-glm-4-9b-chat.ipynb">Jupyter notebook</a> for details of how this model was abliterated from glm-4-9b-chat.
The Python package `tiktoken` is required to quantize the model into GGUF format, so I created <a href="https://huggingface.co/spaces/byroneverson/gguf-my-repo-plus-tiktoken">a fork of GGUF My Repo (+tiktoken)</a>.
![Logo](https://huggingface.co/byroneverson/internlm2_5-7b-chat-abliterated/resolve/main/logo.png "Logo")