Update README.md

README.md
---

# PLLuM-8x7B-chat GGUF Quantizations by Nondzu

DISCLAIMER: This is a quantized version of an existing model, [PLLuM-8x7B-chat](https://huggingface.co/CYFRAGOVPL/PLLuM-8x7B-chat). I am not the author of the original model; I only host the quantized files and take no responsibility for them.

This repository contains GGUF quantized versions of the [PLLuM-8x7B-chat](https://huggingface.co/CYFRAGOVPL/PLLuM-8x7B-chat) model. All quantizations were performed with [llama.cpp](https://github.com/ggerganov/llama.cpp) (release [b4765](https://github.com/ggml-org/llama.cpp/releases/tag/b4765)). These quantized models can be run in [LM Studio](https://lmstudio.ai/) or any other llama.cpp-based project.
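As a minimal sketch of running one of these quantizations with llama.cpp's bundled `llama-cli` tool (the `.gguf` filename below is an assumption for illustration; substitute whichever quantization file you actually downloaded from this repository):

```shell
# Interactive generation with a downloaded GGUF quantization.
# NOTE: PLLuM-8x7B-chat.Q4_K_M.gguf is a hypothetical example filename;
# use the .gguf file you downloaded.
./llama-cli \
  -m PLLuM-8x7B-chat.Q4_K_M.gguf \
  -p "Napisz krótkie powitanie." \
  -n 256 \
  --temp 0.7
```

LM Studio can load the same `.gguf` file directly through its model browser, with no command-line flags required.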
## Prompt Format