Update README.md
README.md CHANGED
@@ -29,38 +29,18 @@ GGML files are for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp)
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-`h2ogptq-oasst1-512-30B.ggml.q4_0.bin` | q4_0 | 4bit |
-`h2ogptq-oasst1-512-30B.ggml.q4_2.bin` | q4_2 | 4bit |
-`h2ogptq-oasst1-512-30B.ggml.q5_0.bin` | q5_0 | 5bit |
-`h2ogptq-oasst1-512-30B.ggml.q5_1.bin` | q5_1 | 5bit |
-
-* The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
-* The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
-* The q5_0 file uses the brand-new 5bit method released on 26th April. This is the 5bit equivalent of q4_0.
-* The q5_1 file uses the brand-new 5bit method released on 26th April. This is the 5bit equivalent of q4_1.
-
-## q4_2 compatibility
-
-q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.
-
-In order to use these files you will need recent llama.cpp code. It is also possible that future updates to llama.cpp will require these files to be re-generated.
-
-If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.
-
-If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.
-
-## q5_0 and q5_1 compatibility
-
-These new methods were added to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.
-
-Don't expect any third-party UIs/tools to support them yet.
+`h2ogptq-oasst1-512-30B.ggml.q4_0.bin` | q4_0 | 4bit | 20.3GB | 25GB | 4-bit. |
+`h2ogptq-oasst1-512-30B.ggml.q4_1.bin` | q4_1 | 4bit | 24.4GB | 26GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+`h2ogptq-oasst1-512-30B.ggml.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+`h2ogptq-oasst1-512-30B.ggml.q5_1.bin` | q5_1 | 5bit | 24.4GB | 26GB | 5-bit. Even higher accuracy, with higher resource usage and slower inference. |
+`h2ogptq-oasst1-512-30B.ggml.q8_0.bin` | q8_0 | 8bit | 36.6GB | 39GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
 
 ## How to run in `llama.cpp`
 
 I use the following command line; adjust for your tastes and needs:
 
 ```
-./main -t
+./main -t 8 -m h2ogptq-oasst1-512-30B.ggml.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
 ### Instruction:
 Write a story about llamas
 ### Response:"
@@ -71,12 +51,9 @@ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument
 
 ## How to run in `text-generation-webui`
 
-
-
-Note: at this time, text-generation-webui does not support the new q5 quantisation methods.
-
-**Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.
+GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in the models folder as usual.
 
+Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
 # Original h2oGPT Model Card
 ## Summary
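For anyone picking a file from the updated table: you only need one of the `.bin` files, not all five. A minimal download sketch using the standard Hugging Face resolve URL; the repo id below is an assumption for illustration, not something stated in this diff:

```
# Fetch a single quantised file rather than cloning the whole repo.
# REPO is assumed -- substitute this model's actual Hugging Face repo id.
REPO=TheBloke/h2ogptq-oasst1-512-30B-GGML
FILE=h2ogptq-oasst1-512-30B.ggml.q5_0.bin
wget "https://huggingface.co/${REPO}/resolve/main/${FILE}"
```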
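The removed compatibility notes say the q5_0/q5_1 methods only landed in llama.cpp on 26th April, so older builds cannot load those files. A sketch of the pull-and-rebuild step they describe, assuming a Unix-like system with `make`:

```
# First time: clone llama.cpp; afterwards, just pull and rebuild.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git pull   # pick up the q5_0/q5_1 quantisation support
make       # rebuilds ./main (and ./quantize)
```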
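The second hunk's context line mentions replacing the `-p <PROMPT>` argument for a chat-style conversation. A sketch of what that might look like, assuming the interactive flags `./main` offered at the time (`-i`/`-ins`):

```
# Interactive instruction/response chat instead of a one-shot prompt:
# -i enters interactive mode, -ins applies the instruction template
./main -t 8 -m h2ogptq-oasst1-512-30B.ggml.q5_0.bin --color -c 2048 \
  --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```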
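The new text-generation-webui instructions amount to three steps: install the llama.cpp module, drop the file into the models folder, and start the server. A sketch under the assumption that the module in question is `llama-cpp-python`; the linked docs above are authoritative:

```
# 1. Install the llama.cpp backend module (assumed: llama-cpp-python)
pip install llama-cpp-python
# 2. Place the GGML file in the webui models folder
mv h2ogptq-oasst1-512-30B.ggml.q5_0.bin text-generation-webui/models/
# 3. Launch the UI with that model selected
cd text-generation-webui
python server.py --model h2ogptq-oasst1-512-30B.ggml.q5_0.bin
```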
|