TheBloke committed on
Commit
8a9ce34
1 Parent(s): 195b187

Initial GGML model commit

Files changed (1)
  1. README.md +58 -0
README.md ADDED
@@ -0,0 +1,58 @@
---
inference: false
license: other
---

# Vigogne Instruct 13B - A French instruction-following LLaMa model GGML

These files are GGML format model files for [Vigogne Instruct 13B - A French instruction-following LLaMa model](https://huggingface.co/bofenghuang/vigogne-instruct-13b).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)

## Other repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vigogne-Instruct-13B-GPTQ)
* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigogne-Instruct-13B-GGML)
* [Original unquantised fp16 model in HF format](https://huggingface.co/bofenghuang/vigogne-instruct-13b)

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. You will therefore need llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| Vigogne-Instruct-13B.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | 4-bit. Smallest size and lowest RAM use of the provided files, at the cost of some accuracy. |
| Vigogne-Instruct-13B.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| Vigogne-Instruct-13B.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| Vigogne-Instruct-13B.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
| Vigogne-Instruct-13B.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |

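To fetch one of these files programmatically, here is a minimal sketch using the `huggingface_hub` Python package (an assumption on my part; you can equally download the `.bin` file from this repo's Files page or with `git lfs`). The repo and file names are taken from the table above.

```python
# Minimal download sketch (assumes: pip install huggingface_hub).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Vigogne-Instruct-13B-GGML",
    filename="Vigogne-Instruct-13B.ggmlv3.q5_0.bin",
    local_dir=".",                  # save into the current directory
    local_dir_use_symlinks=False,   # copy the real file rather than a cache symlink
)
print(model_path)  # path to pass to llama.cpp's -m argument below
```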
## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 12 -m Vigogne-Instruct-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```
Change `-t 12` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

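The same file can also be loaded from Python with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), listed above. A minimal sketch, assuming a llama-cpp-python build linked against a new enough llama.cpp (commit `2d5db48` or later); the parameters mirror the command line above:

```python
# Minimal llama-cpp-python sketch (assumes: pip install llama-cpp-python,
# built against llama.cpp from May 19th 2023 / commit 2d5db48 or later).
from llama_cpp import Llama

llm = Llama(
    model_path="Vigogne-Instruct-13B.ggmlv3.q5_0.bin",
    n_ctx=2048,     # context size, matching -c 2048
    n_threads=8,    # set to your number of physical CPU cores, like -t
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\nWrite a story about llamas\n### Response:"
)

output = llm(
    prompt,
    max_tokens=512,        # -n; use a finite value here rather than -1
    temperature=0.7,       # --temp 0.7
    repeat_penalty=1.1,    # --repeat_penalty 1.1
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
```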
## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
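If your text-generation-webui build does not yet handle these quantisation methods, the GGML file can also be loaded directly from this repo with [ctransformers](https://github.com/marella/ctransformers), also listed above. A minimal sketch, assuming a ctransformers version with llama/GGML support; the generation keyword arguments shown are assumptions based on its documented config options:

```python
# Minimal ctransformers sketch (assumes: pip install ctransformers).
from ctransformers import AutoModelForCausalLM

# Downloads the chosen GGML file from this repo and loads it on CPU.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Vigogne-Instruct-13B-GGML",
    model_file="Vigogne-Instruct-13B.ggmlv3.q5_0.bin",
    model_type="llama",
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\nWrite a story about llamas\n### Response:"
)
print(llm(prompt, max_new_tokens=256, temperature=0.7, repetition_penalty=1.1))
```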

# Original model card: Vigogne Instruct 13B - A French instruction-following LLaMa model