Update README.md
README.md
CHANGED
@@ -21,47 +21,53 @@ license: other
These files are GGML format model files for [Jon Durbin's Airoboros MPT 30B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs).
- * [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
- * [4-bit GPTQ models for GPU inference](https://huggingface.co/none)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-mpt-30b-gpt4-1p4-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs)
- * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
- * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
- * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
- * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
- * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
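As a quick sanity check on those bits-per-weight figures, the arithmetic follows from the block layouts described above. Here is a minimal sketch, assuming one fp16 super-block scale for the "type-0" formats and an fp16 scale plus fp16 min for the "type-1" formats (the exact llama.cpp structs may differ slightly):

```python
# Rough bpw arithmetic implied by the k-quant descriptions above (a sketch;
# the real llama.cpp block structs may carry slightly different fields).

def k_quant_bpw(bits, blocks, block_size, scale_bits, min_bits, fp16_consts):
    weights = blocks * block_size                      # weights per super-block
    total_bits = (weights * bits                       # quantized weights
                  + blocks * (scale_bits + min_bits)   # per-block scales/mins
                  + fp16_consts * 16)                  # fp16 super-block constants
    return total_bits / weights

# GGML_TYPE_Q3_K: 16 blocks x 16 weights, 6-bit scales, one fp16 scale
print(k_quant_bpw(3, 16, 16, 6, 0, 1))   # 3.4375
# GGML_TYPE_Q4_K: 8 blocks x 32 weights, 6-bit scales and mins, fp16 scale + min
print(k_quant_bpw(4, 8, 32, 6, 6, 2))    # 4.5
```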
- <!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
@@ -73,25 +79,6 @@ Refer to the Provided Files table below to see what files use which methods, and
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
- ## How to run in `llama.cpp`

- I use the following command line; adjust it for your tastes and needs:

- ```
- ./main -t 10 -ngl 32 -m mpt-30b-chat.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
- ```

- If you're able to use full GPU offloading, you should use `-t 1` to get the best performance.

- If you are not able to fully offload to the GPU, use more threads: change `-t 10` to the number of physical CPU cores you have, or a lower number if that gives better performance.

- Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

- If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

- ## How to run in `text-generation-webui`

- Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
These files are GGML format model files for [Jon Durbin's Airoboros MPT 30B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs).
+ Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools known to work with these model files.
+ [KoboldCpp](https://github.com/LostRuins/koboldcpp) just added GPU accelerated (OpenCL) support for MPT models, so that is the client I recommend using for these models.
+ **Note**: Please make sure you're using KoboldCpp version 1.32.3 or later, as a number of MPT-related bugs were fixed in that release.
## Repositories available
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-mpt-30b-gpt4-1p4-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-mpt-30b-gpt4-1p4-five-epochs)
+ ## Prompt template
+ ```
+ USER: prompt
+ ASSISTANT:
+ ```
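To make the template concrete, here is a minimal sketch of filling it in for a single-turn request (the helper function is purely illustrative):

```python
# Minimal sketch: fill the USER/ASSISTANT template above for one request.
def build_prompt(user_message: str) -> str:
    return f"USER: {user_message}\nASSISTANT:"

print(build_prompt("Write a story about llamas"))
# USER: Write a story about llamas
# ASSISTANT:
```

The model's reply is whatever it generates after `ASSISTANT:`.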
+ ## A note regarding context length: 8K
+ The base model has an 8K context length. [KoboldCpp](https://github.com/LostRuins/koboldcpp) supports 8K context if you manually set it to 8K in the text box above the context-length slider:
+ 
+ It is currently unknown whether increased context length is compatible with other MPT GGML clients.
+ If you have feedback on this, please let me know.
+ <!-- compatibility_ggml start -->
+ ## Compatibility
+ These files are **not** compatible with text-generation-webui, llama.cpp, or llama-cpp-python.
+ Currently they can be used with:
+ * KoboldCpp, a powerful inference engine based on llama.cpp, with a good UI and GPU-accelerated support for MPT models: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
+ * The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers) (see the sketch after this list)
+ * The LoLLMS Web UI which uses ctransformers: [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
+ * [rustformers' llm](https://github.com/rustformers/llm)
+ * The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml)
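For the ctransformers route, here is a minimal sketch of loading one of these GGML files and generating with the prompt template shown earlier. The `model_file` value is a placeholder (substitute one of the actual `.bin` files listed in the Provided files table below), and exact keyword-argument support may vary between ctransformers versions:

```python
from ctransformers import AutoModelForCausalLM

# Placeholder file name: substitute one of the .bin files from this repo.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/airoboros-mpt-30b-gpt4-1p4-GGML",
    model_file="airoboros-mpt-30b-gpt4.ggmlv0.q4_0.bin",  # placeholder name
    model_type="mpt",      # MPT architecture, not llama
    gpu_layers=0,          # raise if your ctransformers build supports GPU offload
)

prompt = "USER: Write a story about llamas\nASSISTANT:"
print(llm(prompt, max_new_tokens=256, temperature=0.7))
```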
+ As other options become available I will endeavour to list them here (do let me know in the Community tab if I've missed something!)
+ ## Tutorial for using LoLLMS Web UI
+ * [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
+ * [Video tutorial, by LoLLMS Web UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
+ <!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord