Update README.md
README.md CHANGED
@@ -15,11 +15,20 @@ It is the result of first merging the deltas from the above repository with the
 * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML).
 * [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF).
 
+## PROMPT TEMPLATE
+
+This model works best with the following prompt template:
+
+```
+### USER: your prompt here
+### ASSISTANT:
+```
+
 ## GIBBERISH OUTPUT IN `text-generation-webui`?
 
-Please read the Provided Files section below. You should use `
+Please read the Provided Files section below. You should use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` unless you are able to use the latest GPTQ-for-LLaMa code.
 
-If you're using a text-generation-webui one click installer, you MUST use `
+If you're using a text-generation-webui one click installer, you MUST use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors`.
 
 ## Provided files
 
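The prompt template this commit documents can be applied programmatically when sending text to the model. A minimal sketch, assuming only the template shown in the README (the `build_prompt` helper is hypothetical, not part of the model card):

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the template from the README:
    a '### USER:' line followed by a '### ASSISTANT:' line
    that the model completes."""
    return f"### USER: {user_message}\n### ASSISTANT:"

# The resulting string is what you would pass to the model / webui.
print(build_prompt("What is the capital of France?"))
```

The generation should then be read as everything the model produces after the trailing `### ASSISTANT:` marker.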