Initial GGUF model commit
README.md CHANGED
````diff
@@ -67,9 +67,9 @@ The clients and libraries below are expecting to add GGUF support shortly:
 ## Prompt template: Chat
 
 ```
-A chat
+A chat.
 USER: {prompt}
-ASSISTANT:
+ASSISTANT: 
 
 ```
 
@@ -125,7 +125,7 @@ Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6f
 For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.
 
 ```
-./main -t 10 -ngl 32 -m airoboros-c34b-2.1.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat\nUSER: Write a story about llamas\nASSISTANT: \n"
+./main -t 10 -ngl 32 -m airoboros-c34b-2.1.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: Write a story about llamas\nASSISTANT: \n"
 ```
 Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
 
````
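As a hedged illustration of the `-t` advice above (not part of the committed README), the thread count can be derived from the machine instead of hard-coding `-t 10`. This sketch assumes a Linux host with util-linux's `lscpu` available:

```bash
# Count physical cores as unique (core, socket) pairs, skipping lscpu's '#' header lines.
# Assumption: Linux with util-linux's lscpu. On macOS, use: sysctl -n hw.physicalcpu
PHYS_CORES=$(lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l)

# Same invocation as the updated command in the diff, with -t set to the detected core count.
./main -t "$PHYS_CORES" -ngl 32 -m airoboros-c34b-2.1.q4_K_M.gguf --color -c 4096 \
  --temp 0.7 --repeat_penalty 1.1 -n -1 \
  -p "A chat.\nUSER: Write a story about llamas\nASSISTANT: \n"
```

On the README's own example of an 8-core/16-thread system, this resolves to `-t 8`.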