TheBloke committed on
Commit 931924f
1 Parent(s): 57d474d

Update README.md

Files changed (1):
  1. README.md +5 -5
README.md CHANGED
@@ -29,10 +29,10 @@ This model requires the following prompt template:
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-`OpenAssistant-Llama30B-epoch7.ggml.q4_0.bin` | q4_0 | 4bit | 19GB | 21GB | Maximum compatibility |
-`OpenAssistant-Llama30B-epoch7.ggml.q4_2.bin` | q4_2 | 4bit | 19GB | 21GB | Best compromise between resources, speed and quality |
-`OpenAssistant-Llama30B-epoch7.ggml.q5_0.bin` | q5_0 | 5bit | 21GB | 23GB | Brand new 5bit method. Potentially higher quality than 4bit, at cost of slightly higher resources. |
-`OpenAssistant-Llama30B-epoch7.ggml.q5_1.bin` | q5_1 | 5bit | 23GB | 25GB | Brand new 5bit method. Slightly higher resource usage than q5_0.|
+`OpenAssistant-30B-epoch7.ggml.q4_0.bin` | q4_0 | 4bit | 19GB | 21GB | Maximum compatibility |
+`OpenAssistant-30B-epoch7.ggml.q4_2.bin` | q4_2 | 4bit | 19GB | 21GB | Best compromise between resources, speed and quality |
+`OpenAssistant-30B-epoch7.ggml.q5_0.bin` | q5_0 | 5bit | 21GB | 23GB | Brand new 5bit method. Potentially higher quality than 4bit, at cost of slightly higher resources. |
+`OpenAssistant-30B-epoch7.ggml.q5_1.bin` | q5_1 | 5bit | 23GB | 25GB | Brand new 5bit method. Slightly higher resource usage than q5_0.|
 
 * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp
 * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues, see below.
@@ -60,7 +60,7 @@ Don't expect any third-party UIs/tools to support them yet.
 I use the following command line; adjust for your tastes and needs:
 
 ```
-./main -t 18 -m OpenAssistant-Llama30B-epoch7.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>Write a very story about llamas <|assistant|>:"
+./main -t 18 -m OpenAssistant-30B-epoch7.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>Write a very story about llamas <|assistant|>:"
 ```
 
 Change `-t 18` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
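Reading the "Provided files" table in the diff above, the "RAM required" column tracks the file size plus roughly 2 GB of headroom. A minimal Python sketch of that rule of thumb follows; the 2 GB constant is an assumption read off the table itself, not a documented llama.cpp figure:

```python
# Rule of thumb inferred from the table above: RAM required ≈ model file
# size + ~2 GB headroom (presumably for the 2048-token context and runtime
# buffers). The 2 GB figure is an assumption, not a llama.cpp guarantee.
file_size_gb = {"q4_0": 19, "q4_2": 19, "q5_0": 21, "q5_1": 23}

def ram_required_gb(quant: str, headroom_gb: int = 2) -> int:
    """Estimate peak RAM for a given quantization of this 30B model."""
    return file_size_gb[quant] + headroom_gb

for quant in file_size_gb:
    print(quant, "needs about", ram_required_gb(quant), "GB of RAM")
```

This reproduces the table's RAM column exactly (e.g. q5_1: 23 GB file, 25 GB RAM); for other models or context sizes the headroom would need re-measuring.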
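The closing advice ("Change `-t 18` to the number of physical CPU cores you have") is easy to get wrong, because most APIs report logical threads rather than physical cores. A hedged Python sketch for Linux, counting unique (physical id, core id) pairs in `/proc/cpuinfo`; the parsing and the fallback are illustrative assumptions, not part of llama.cpp:

```python
import os

def physical_cores(cpuinfo_text: str) -> int:
    """Count unique (physical id, core id) pairs, i.e. physical cores.

    os.cpu_count() reports logical threads, which overcounts on
    hyper-threaded systems; llama.cpp's -t flag wants physical cores.
    """
    cores = set()
    phys = None
    for line in cpuinfo_text.splitlines():
        if line.startswith("physical id"):
            phys = line.split(":", 1)[1].strip()
        elif line.startswith("core id"):
            cores.add((phys, line.split(":", 1)[1].strip()))
    # Fall back to the logical count if topology fields are missing.
    return len(cores) or os.cpu_count() or 1

try:
    with open("/proc/cpuinfo") as f:
        print("use -t", physical_cores(f.read()))
except OSError:
    pass  # non-Linux systems have no /proc/cpuinfo
```

On the 8-core/16-thread example from the text, this returns 8, matching the suggested `-t 8`.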