Update README.md
README.md
CHANGED
@@ -73,7 +73,7 @@ Please note that these GGMLs are **not compatible with llama.cpp, or currently w
 
 ## A note regarding context length: 8K
 
-It is confirmed that the 8K context of this model works in KoboldCpp, if you manually set max context to 8K by adjusting the text box above the slider:
+It is confirmed that the 8K context of this model works in [KoboldCpp](https://github.com/LostRuins/koboldcpp), if you manually set max context to 8K by adjusting the text box above the slider:
 ![.](https://s3.amazonaws.com/moonup/production/uploads/63cd4b6d1c8a5d1d7d76a778/LcoIOa7YdDZa-R-R4BWYw.png)
 
 (set it to 8192 at most)
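For reference, the same 8K limit can also be set when launching KoboldCpp without the GUI launcher. The sketch below is only an illustration under stated assumptions: it assumes KoboldCpp's `koboldcpp.py` entry point and its `--contextsize` flag, and `mymodel.ggmlv3.q4_K_M.bin` is a hypothetical placeholder for one of the GGML files from this repo.

```python
import subprocess

# Launch KoboldCpp with the maximum context raised to 8192 tokens,
# mirroring what the README describes doing in the GUI text box above the slider.
# "mymodel.ggmlv3.q4_K_M.bin" is a placeholder; point it at an actual GGML file.
subprocess.run(
    [
        "python", "koboldcpp.py",
        "mymodel.ggmlv3.q4_K_M.bin",
        "--contextsize", "8192",  # 8192 is the most this model supports
    ],
    check=True,
)
```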