---
license: apache-2.0
---

This is a 2-bit quantization of [Qwen/Qwen1.5-72B-Chat](https://huggingface.co/Qwen/Qwen1.5-72B-Chat) using QuIP#.

Random samples from RedPajama and Skypile (for Chinese) are used as calibration data.

### Model loading

Please follow the instructions in [QuIP-for-all](https://github.com/chu-tianxiang/QuIP-for-all) for usage.
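As a rough orientation, QuIP#-quantized checkpoints are typically loaded through the standard `transformers` API with remote code enabled. The sketch below is a hedged illustration under that assumption — the exact loading path is defined by the QuIP-for-all repository, and `load_quip_model` is a hypothetical helper name, not part of any library.

```python
def load_quip_model(model_id: str):
    """Sketch: load a QuIP#-quantized chat model and its tokenizer.

    Assumes the checkpoint exposes a standard transformers interface and
    that the QuIP# dequantization code ships with the repo (hence
    trust_remote_code=True). Requires a CUDA GPU and the model weights.
    """
    # Imports kept inside the function so the sketch parses without
    # transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",      # keep the dtype stored in the checkpoint
        device_map="auto",       # shard across available GPUs
        trust_remote_code=True,  # custom QuIP# kernels live in the repo code
    )
    return tokenizer, model
```

If this pattern does not work for your setup, defer to the QuIP-for-all instructions linked above.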

As an alternative, you can use the [Aphrodite engine](https://github.com/PygmalionAI/aphrodite-engine) or my [vLLM branch](https://github.com/chu-tianxiang/vllm-gptq) for faster inference. If you have problems installing `fast-hadamard-transform` from pip, you can also install it from [source](https://github.com/Dao-AILab/fast-hadamard-transform).
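Assuming the vLLM branch keeps upstream vLLM's Python API (the branch may require additional flags not shown here), inference might look like this sketch; `generate_with_vllm` is an illustrative helper, and the sampling settings are arbitrary.

```python
def generate_with_vllm(model_id: str, prompt: str) -> str:
    """Sketch: one-shot generation via vLLM's offline Python API.

    Assumes the vllm-gptq branch follows upstream vLLM's LLM /
    SamplingParams interface. Requires a CUDA GPU and the model weights.
    """
    # Imports kept inside the function so the sketch parses without
    # vLLM installed.
    from vllm import LLM, SamplingParams

    llm = LLM(model=model_id, trust_remote_code=True)
    params = SamplingParams(temperature=0.7, max_tokens=256)
    outputs = llm.generate([prompt], params)
    return outputs[0].outputs[0].text
```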