GGUF [llama.cpp](https://github.com/ggerganov/llama.cpp) quantized version of:
- Model creator: [Meta](https://huggingface.co/meta-llama)
- [License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

<span style="color: red">Update(24/07/27):</span> The latest fixes for using the full 128k context window are included in the -ropefix versions. **Requirement** to run them (and the llama.cpp version used): [b3472](https://github.com/ggerganov/llama.cpp/releases/tag/b3472)

<span style="color: red">Update:</span> Use the -imatrix versions (they use [imatrix](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) and the **bpe-llama tokenizer**, which should theoretically improve the output).
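
For reference, here is a minimal sketch of loading one of these quants from Python. It assumes the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings (not part of this repo) built against a llama.cpp build at or past b3472; the GGUF file name and context size below are placeholders, not actual file names from this repo.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` with a recent llama.cpp.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3.1-8B-Instruct-Q4_K_M-imatrix.gguf",  # hypothetical file name
    n_ctx=16384,  # raise toward 131072 with a -ropefix quant and enough memory
)

# create_chat_completion applies the chat template stored in the GGUF metadata,
# so the Llama 3 prompt format described below is handled automatically.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What does imatrix quantization change?"}]
)
print(out["choices"][0]["message"]["content"])
```
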
## Recommended Prompt Format (Llama3)