[gpt2](https://huggingface.co/openai-community/gpt2) quantized to 4-bit using [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ).
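
The `-4bit-128g` suffix encodes the quantization settings: 4-bit weights with a group size of 128. For context, a checkpoint like this can be produced with AutoGPTQ roughly as follows. This is a sketch, not the exact script used for this repo; the calibration text is a stand-in, and real runs use a proper calibration set.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained = "openai-community/gpt2"
tokenizer = AutoTokenizer.from_pretrained(pretrained)

quantize_config = BaseQuantizeConfig(
    bits=4,         # 4-bit weights ("4bit" in the repo name)
    group_size=128, # per-group quantization ("128g" in the repo name)
)

model = AutoGPTQForCausalLM.from_pretrained(pretrained, quantize_config)

# Placeholder calibration example; a real run would pass many samples.
examples = [tokenizer("GPTQ quantizes weights using a small calibration set.")]

model.quantize(examples)
model.save_quantized("gpt2-4bit-128g")
```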
To use, first install AutoGPTQ:

```shell
pip install auto-gptq
```

Then load the model from the hub:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name = "smpanaro/gpt2-AutoGPTQ-4bit-128g"
model = AutoGPTQForCausalLM.from_quantized(model_name)
```
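
A quick smoke test might look like the following. This assumes the repo includes the standard gpt2 tokenizer files (otherwise load the tokenizer from `openai-community/gpt2`); the prompt and generation settings are illustrative.

```python
# Pass device="cuda:0" to from_quantized above to run on GPU;
# most GPTQ kernels are much faster there.
tokenizer = AutoTokenizer.from_pretrained(model_name)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```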

|Model|4-Bit Perplexity|16-Bit Perplexity|Delta|
|--|--|--|--|
|smpanaro/gpt2-AutoGPTQ-4bit-128g|26.5000|25.1875|1.3125|
|[smpanaro/gpt2-medium-AutoGPTQ-4bit-128g](https://huggingface.co/smpanaro/gpt2-medium-AutoGPTQ-4bit-128g)|19.1719|18.4739|0.6980|
|[smpanaro/gpt2-large-AutoGPTQ-4bit-128g](https://huggingface.co/smpanaro/gpt2-large-AutoGPTQ-4bit-128g)|16.6875|16.4541|0.2334|
|[smpanaro/gpt2-xl-AutoGPTQ-4bit-128g](https://huggingface.co/smpanaro/gpt2-xl-AutoGPTQ-4bit-128g)|14.9297|14.7951|0.1346|

<sub>WikiText perplexity measured as in the [Hugging Face docs](https://huggingface.co/docs/transformers/en/perplexity).</sub>
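
The linked guide evaluates perplexity with a strided sliding window over the concatenated test set. A condensed sketch of that approach is below; the exact stride and preprocessing behind the numbers above are not stated here, so treat these values as assumptions.

```python
import torch
from datasets import load_dataset

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

max_length, stride = 1024, 512  # gpt2 context window; stride per the guide
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end_loc = 0
for begin_loc in range(0, seq_len, stride):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc  # only score tokens not already scored
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(model.device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # mask the overlapping context tokens

    with torch.no_grad():
        # loss is the mean NLL over the target tokens; re-weight by length
        nlls.append(model(input_ids, labels=target_ids).loss * trg_len)

    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

print(torch.exp(torch.stack(nlls).sum() / end_loc))  # perplexity
```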