Update README.md

---
language:
- en
tags:
- causal-lm
- llama
inference: false
---

# Wizard-Vicuna-13B-GGML

These are 4-bit and 5-bit quantised GGML-format models of [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b).

They are the result of quantising to 4-bit and 5-bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `wizard-vicuna-13B.ggml.q4_0.bin` | q4_0 | 4-bit | 8.14GB | 10.5GB | Maximum compatibility |
| `wizard-vicuna-13B.ggml.q4_2.bin` | q4_2 | 4-bit | 8.14GB | 10.5GB | Best compromise between resources, speed and quality |
| `wizard-vicuna-13B.ggml.q5_0.bin` | q5_0 | 5-bit | 8.95GB | 11.0GB | Brand-new 5-bit method. Potentially higher quality than 4-bit, at the cost of slightly higher resource usage. |
| `wizard-vicuna-13B.ggml.q5_1.bin` | q5_1 | 5-bit | 9.76GB | 12.25GB | Brand-new 5-bit method. Slightly higher resource usage than q5_0. |

* The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
* The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
* The q5_0 file uses the brand-new 5-bit method released on 26th April. It is the 5-bit equivalent of q4_0.
* The q5_1 file uses the brand-new 5-bit method released on 26th April. It is the 5-bit equivalent of q4_1.
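
If you want to grab one of these files from the command line, a minimal sketch using `wget` is below (the repo path `TheBloke/wizard-vicuna-13B-GGML` is an assumption based on the naming here; substitute the actual repository you are browsing):

```
# Repo path is an assumption; swap the filename for any variant in the table above
wget https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML/resolve/main/wizard-vicuna-13B.ggml.q5_0.bin
```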

## q4_2 compatibility

q4_2 is a relatively new 4-bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.

In order to use these files you will need to use recent llama.cpp code. It is also possible that future updates to llama.cpp could require these files to be re-generated.

If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.

If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.

## q5_0 and q5_1 compatibility

These new methods were added to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild to be able to use them.
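
If you build llama.cpp from source, updating is a pull and rebuild. A minimal sketch, assuming the plain `make` build on Linux/macOS (adjust if you build with CMake or special flags):

```
# First use: clone; afterwards just pull the latest code and rebuild
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git pull
make
```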

Don't expect any third-party UIs/tools to support them yet.

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 18 -m wizard-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```

Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
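
If you are unsure how many physical cores you have, the OS can tell you; for example:

```
# Linux: physical cores = Core(s) per socket x Socket(s)
lscpu | grep -E '^(Core\(s\) per socket|Socket\(s\))'
# macOS
sysctl -n hw.physicalcpu
```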

## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.
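
For example, a minimal sketch of placing the file (this assumes the default `models/` directory layout of text-generation-webui; the subfolder name is arbitrary):

```
mkdir -p text-generation-webui/models/wizard-vicuna-13B-GGML
mv wizard-vicuna-13B.ggml.q4_2.bin text-generation-webui/models/wizard-vicuna-13B-GGML/
```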

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui may not support the new q5 quantisation methods.

**Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.

# Original wizard-vicuna-13B model card

# WizardVicunaLM

### Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method

I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage.

## Benchmark

### Approximately 7% performance improvement over VicunaLM

![](https://user-images.githubusercontent.com/21379657/236088663-3fa212c9-0112-4d44-9b01-f16ea093cb67.png)

### Detail

The questions presented here are not from rigorous tests; rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order.

|     | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link |
|-----|--------|-------------------|------------|-----------|----------|
| Q1  | 95 | 90 | 85 | 88 | [link](https://sharegpt.com/c/YdhIlby) |
| Q2  | 95 | 97 | 90 | 89 | [link](https://sharegpt.com/c/YOqOV4g) |
| Q3  | 85 | 90 | 80 | 65 | [link](https://sharegpt.com/c/uDmrcL9) |
| Q4  | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/XBbK5MZ) |
| Q5  | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/AQ5tgQX) |
| Q6  | 92 | 85 | 87 | 88 | [link](https://sharegpt.com/c/eVYwfIr) |
| Q7  | 95 | 90 | 85 | 92 | [link](https://sharegpt.com/c/Kqyeub4) |
| Q8  | 90 | 85 | 75 | 70 | [link](https://sharegpt.com/c/M0gIjMF) |
| Q9  | 92 | 85 | 70 | 60 | [link](https://sharegpt.com/c/fOvMtQt) |
| Q10 | 90 | 80 | 75 | 85 | [link](https://sharegpt.com/c/YYiCaUz) |
| Q11 | 90 | 85 | 75 | 65 | [link](https://sharegpt.com/c/HMkKKGU) |
| Q12 | 85 | 90 | 80 | 88 | [link](https://sharegpt.com/c/XbW6jgB) |
| Q13 | 90 | 95 | 88 | 85 | [link](https://sharegpt.com/c/JXZb7y6) |
| Q14 | 94 | 89 | 90 | 91 | [link](https://sharegpt.com/c/cTXH4IS) |
| Q15 | 90 | 85 | 88 | 87 | [link](https://sharegpt.com/c/GZiM0Yt) |
| Avg | 91 | 88 | 82 | 80 | |

## Principle

We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques.
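
As an illustration of what "conversation format" means here (a sketch only, not the exact schema of the released dataset), Vicuna-style training data is typically stored as ShareGPT-format multi-turn records, roughly like:

```
{
  "id": "example-0",
  "conversations": [
    {"from": "human", "value": "WizardLM-style instruction..."},
    {"from": "gpt", "value": "Assistant answer..."},
    {"from": "human", "value": "Follow-up that expands the topic..."},
    {"from": "gpt", "value": "Deeper answer..."}
  ]
}
```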

Turning a single command into a rich conversation is what we've done [here](https://sharegpt.com/c/6cmxqq0).

After creating the training data, I trained the model following the Vicuna v1.1 [training method](https://github.com/lm-sys/FastChat/blob/main/scripts/train_vicuna_13b.sh).

## Detailed Method

First, we explore and expand various areas within the same topic using the 7K conversations created by WizardLM. However, we made it in a continuous conversation format instead of the instruction format. That is, it starts with WizardLM's instruction, and then expands into various areas in one conversation using ChatGPT 3.5.

After that, we fine-tuned the model using Vicuna's fine-tuning format.

## Training Process

Trained with 8 A100 GPUs for 35 hours.

## Weights

You can find the [dataset](https://huggingface.co/datasets/junelee/wizard_vicuna_70k) we used for training and the [13b model](https://huggingface.co/junelee/wizard-vicuna-13b) on Hugging Face.
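
If you want local copies, one way is to clone the repositories with Git LFS (repo paths taken from the links above):

```
git lfs install
git clone https://huggingface.co/datasets/junelee/wizard_vicuna_70k
git clone https://huggingface.co/junelee/wizard-vicuna-13b
```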

## Conclusion

If we extend the conversations to GPT-4 32K, we can expect a dramatic improvement, as we could generate 8x more conversations that are also more accurate and richer.

## License

The model is subject to the LLaMA model license, and the dataset to OpenAI's terms because it was generated using ChatGPT. Everything else is free.

## Author

[JUNE LEE](https://github.com/melodysdreamj) - He is active in Songdo Artificial Intelligence Study and GDG Songdo.