Update README.md
README.md
@@ -35,17 +35,19 @@ This model works best with the following prompt template:

## How to easily download and use this model in text-generation-webui

Open the text-generation-webui UI as normal. (A scripted alternative to the download steps is sketched after the list below.)

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/stable-vicuna-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model** drop-down, choose the model you just downloaded: `stable-vicuna-13B-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
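
If you prefer to fetch the files outside the UI, the snippet below is a minimal sketch using the `huggingface_hub` Python library. The target path `models/stable-vicuna-13B-GPTQ` is an assumption based on text-generation-webui's default `models/` directory; adjust it to your install.

```python
# Hedged sketch: download TheBloke/stable-vicuna-13B-GPTQ from Python instead of
# using the "Download custom model or LoRA" box (steps 2-4 above).
# Assumes `pip install huggingface_hub` and that models/ is the webui's model folder.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/stable-vicuna-13B-GPTQ",
    local_dir="models/stable-vicuna-13B-GPTQ",  # illustrative target path
)

# The GPTQ settings from step 8 (Bits = 4, Groupsize = 128, model_type = Llama)
# still need to be set in the webui before loading the model.
```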

## Provided files
