Update README.md
README.md
CHANGED
@@ -33,25 +33,18 @@ This repo contains 4bit GPTQ models for GPU inference, quantised using [GPTQ-for
 
 ## How to easily download and use this model in text-generation-webui
 
-Open the text-generation-webui UI as normal.
+Please make sure you're using the latest version of text-generation-webui.
 
 1. Click the **Model tab**.
 2. Under **Download custom model or LoRA**, enter `TheBloke/wizardLM-7B-GPTQ`.
 3. Click **Download**.
-4. Wait until it says it's finished downloading.
-5. Click the **Refresh** icon next to **Model** in the top left.
-6. In the **Model drop-down**: choose the model you just downloaded, `wizardLM-7B-GPTQ`.
-7. If you see an error in the bottom right, ignore it - it's temporary.
-8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
-9. Click **Save settings for this model** in the top right.
-10. Click **Reload the Model** in the top right.
-11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
-
-## GIBBERISH OUTPUT IN `text-generation-webui`?
-
-Please read the Provided Files section below. You should use `wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors` unless you are able to use the latest GPTQ-for-LLaMa code.
-
-If you're using a text-generation-webui one click installer, you MUST use `wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors`.
+4. The model will start downloading. Once it's finished it will say "Done".
+5. In the top left, click the refresh icon next to **Model**.
+6. In the **Model** dropdown, choose the model you just downloaded: `wizardLM-7B-GPTQ`.
+7. The model will automatically load, and is now ready for use!
+8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
+   * Note that you no longer need to set GPTQ parameters manually; they are set automatically from `quantize_config.json`.
+9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
 
 ## Provided files
 
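The note under step 8 says the GPTQ parameters are now read from `quantize_config.json`. As a rough illustration only, assuming the format used by common GPTQ quantisation tools, such a file looks like the following; the exact values shipped in this repo may differ:

```json
{
  "bits": 4,
  "group_size": 128,
  "damp_percent": 0.01,
  "desc_act": false,
  "sym": true,
  "true_sequential": true
}
```

Here `bits` and `group_size` mirror the `--wbits 4 --groupsize 128` flags shown in the quantisation commands below, and `desc_act` records whether `--act-order` was used.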
@@ -69,44 +62,6 @@ The 'compat' file will be used by default in text-generation-webui so you don't
 ```
 CUDA_VISIBLE_DEVICES=0 python3 llama.py wizardLM-7B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors
 ```
-* `wizardLM-7B-GPTQ-4bit-128g.latest.act-order.safetensors`
-  * Only works with recent GPTQ-for-LLaMa code
-  * **Does not** work with text-generation-webui one-click-installers
-  * Parameters: Groupsize = 128. act-order.
-  * Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
-  * Command used to create the GPTQ:
-    ```
-    CUDA_VISIBLE_DEVICES=0 python3 llama.py wizardLM-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors wizardLM-7B-GPTQ-4bit-128g.act-order.safetensors
-    ```
-
-## How to install manually in `text-generation-webui` and update GPTQ-for-LLaMa if necessary
-
-File `wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
-
-[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).
-
-The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa is used inside the UI.
-
-If you want to use the act-order `safetensors` file and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
-```
-# Clone text-generation-webui, if you don't already have it
-git clone https://github.com/oobabooga/text-generation-webui
-# Make a repositories directory
-mkdir text-generation-webui/repositories
-cd text-generation-webui/repositories
-# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
-git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
-```
-
-Then install this model into `text-generation-webui/models` and launch the UI as follows:
-```
-cd text-generation-webui
-python server.py --model wizardLM-7B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
-```
-
-The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
-
-If you can't update GPTQ-for-LLaMa or don't want to, you can use `wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
 
 <!-- footer start -->
 ## Discord
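As an aside: if you'd rather fetch the recommended 'compat' file directly instead of using the webui's download tab, a minimal sketch with the `huggingface_hub` Python library would be as follows. The library dependency and variable names are assumptions of this example; neither version of the README mentions them.

```python
# Minimal sketch: fetch the recommended 'compat' safetensors file directly.
# Assumes huggingface_hub is installed (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

# Filename taken from the Provided files section of the README.
local_path = hf_hub_download(
    repo_id="TheBloke/wizardLM-7B-GPTQ",
    filename="wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors",
)
print(local_path)  # location of the file in the local Hugging Face cache
```

The downloaded file would then go under `text-generation-webui/models`, as in the manual-install instructions that this commit removes.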