Update README.md
`README.md` CHANGED

````diff
--- a/README.md
+++ b/README.md
@@ -41,8 +41,7 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_7B-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_7b)
 
-## Prompt template:
-
+## Prompt template:
 
 ```
 ### System:
@@ -51,7 +50,7 @@ You are an AI assistant that follows instruction extremely well. Help as much as
 ### User:
 prompt
 
-### Response
+### Response:
 ```
 or
 ```
@@ -61,10 +60,10 @@ You are an AI assistant that follows instruction extremely well. Help as much as
 ### User:
 prompt
 
-### Input
+### Input:
 input
 
-### Response
+### Response:
 ```
 
 <!-- compatibility_ggml start -->
````
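The commit above normalises the template headers (adding colons to `### Input` and `### Response`). As a minimal sketch of how a prompt in the corrected format could be assembled — the `build_prompt` helper is hypothetical and not part of the model card:

```python
from typing import Optional


def build_prompt(system: str, user: str, user_input: Optional[str] = None) -> str:
    """Assemble a prompt string following the corrected orca_mini template.

    The optional ``user_input`` argument selects between the two template
    variants shown in the README (with and without an ``### Input:`` section).
    """
    parts = [f"### System:\n{system}", f"### User:\n{user}"]
    if user_input is not None:
        parts.append(f"### Input:\n{user_input}")
    # The model is expected to continue generating after this header.
    parts.append("### Response:")
    return "\n\n".join(parts) + "\n"


prompt = build_prompt(
    "You are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.",
    "Summarise the following text.",
    "GGML files are for CPU + GPU inference.",
)
```

The resulting string can be passed directly to a llama.cpp-based loader as the prompt text.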