Update README.md
README.md
---
inference: false
---

# weblab-10b-instruction-sft-GPTQ

Original model: [weblab-10b-instruction-sft](https://huggingface.co/matsuo-lab/weblab-10b-instruction-sft)

This is the 4-bit GPTQ version.

It is smaller and faster to run, but inference quality may be slightly worse.

### sample code

At least one GPU is currently required due to a limitation of the Accelerate library.

```
pip install auto-gptq
```

```
# NOTE: the import and model-setup lines are not shown in the source view;
# a full, hedged sketch of the complete flow follows this block.
model = AutoGPTQForCausalLM.from_quantized(
    ...,  # arguments elided in the source view
    device="cuda:0")

prompt = "スタジオジブリの作品を5つ教えてください"
prompt_template = f"### 指示: {prompt}\n\n### 応答:"

tokens = tokenizer(prompt_template, return_tensors="pt").to("cuda:0").input_ids
output = model.generate(input_ids=tokens, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0]))
```
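
Because the setup lines are hidden above, here is a minimal end-to-end sketch of the intended flow. It is an assumption-labelled reconstruction, not this card's exact code: the repository id and the use_safetensors argument are guesses and may need adjusting.

```
# Hedged sketch, not the card's exact code.
# Assumptions: the repo id below, and that the weights are stored as safetensors.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_path = "weblab-10b-instruction-sft-GPTQ"  # assumed repo id; replace as needed

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoGPTQForCausalLM.from_quantized(
    model_path,
    use_safetensors=True,  # assumption about the stored weight format
    device="cuda:0")

prompt = "スタジオジブリの作品を5つ教えてください"
prompt_template = f"### 指示: {prompt}\n\n### 応答:"  # instruction/response template

tokens = tokenizer(prompt_template, return_tensors="pt").to("cuda:0").input_ids
output = model.generate(input_ids=tokens, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0]))
```

The `### 指示:` / `### 応答:` ("instruction" / "response") template follows the prompt format used by the base weblab-10b-instruction-sft model.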

### See Also

https://github.com/PanQiWei/AutoGPTQ/blob/main/docs/tutorial/01-Quick-Start.md

### Benchmark

The results below are preliminary; cells marked "-" are still being measured.

* **Japanese benchmark**

  - *We used [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable) plus a GPTQ patch for evaluation (a hedged sketch of the command follows this list).*
  - *The 4-task average accuracy is based on the results of JCommonsenseQA-1.1, JNLI-1.1, MARC-ja-1.1, and JSQuAD-1.1.*
  - *The model is loaded with gptq_use_triton=True, and evaluation uses prompt template version 0.3 with few-shot in-context learning.*
  - *The numbers of few-shot examples are 3, 3, 3, and 2, respectively.*
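The patched invocation itself is not published, so the following is only a sketch of what the evaluation command plausibly looked like, based on the harness's standard CLI; the model path and the gptq_use_triton model argument are assumptions.

```
# Hedged sketch of an lm-evaluation-harness (jp-stable) run.
# Task names and few-shot counts mirror the bullets above; model_args are assumed.
python main.py \
    --model hf-causal \
    --model_args pretrained=weblab-10b-instruction-sft-GPTQ,gptq_use_triton=True \
    --tasks "jcommonsenseqa-1.1-0.3,jnli-1.1-0.3,marc_ja-1.1-0.3,jsquad-1.1-0.3" \
    --num_fewshot "3,3,3,2" \
    --output_path result.json
```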

| Model | Average | JCommonsenseQA | JNLI | MARC-ja | JSQuAD |
| :-- | :-- | :-- | :-- | :-- | :-- |
| weblab-10b-instruction-sft | 78.78 | 74.35 | 65.65 | 96.06 | 79.04 |
| weblab-10b | 66.38 | 65.86 | 54.19 | 84.49 | 60.98 |
| *weblab-10b-instruction-sft-GPTQ* | - | 74.53 | 41.70 | - | 72.69 |
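
As a sanity check on the averaging described above, the 4-task average can be reproduced directly from the per-task scores in the table:

```
# Reproduce the 4-task average accuracy for weblab-10b-instruction-sft.
jcqa, jnli, marc_ja, jsquad = 74.35, 65.65, 96.06, 79.04
print(round((jcqa + jnli + marc_ja + jsquad) / 4, 2))  # -> 78.78
```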