---
license: other
inference: false
---
<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# WizardLM: An Instruction-following LLM Using Evol-Instruct

These files are the result of merging the [delta weights](https://huggingface.co/victor123/WizardLM) with the original LLaMA 7B model.

The code for merging is provided in the [WizardLM official GitHub repo](https://github.com/nlpxucan/WizardLM).

## WizardLM-7B 4bit GPTQ

This repo contains 4bit GPTQ models for GPU inference, quantised using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Other repositories available

* [4bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizardLM-7B-GGML)
* [Unquantised model in HF format](https://huggingface.co/TheBloke/wizardLM-7B-HF)

## How to easily download and use this model in text-generation-webui

Open the text-generation-webui UI as normal.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/wizardLM-7B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `wizardLM-7B-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
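
Alternatively, if you prefer to fetch the files outside the UI, the sketch below uses the `huggingface_hub` Python library. This is an assumption on my part rather than part of the official steps, and the target directory is only an example - point `local_dir` at your own text-generation-webui models folder:

```python
# Hedged sketch: download this repo with huggingface_hub (assumes a reasonably
# recent version of the library; the local_dir path is an example, adjust as needed).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/wizardLM-7B-GPTQ",
    local_dir="text-generation-webui/models/wizardLM-7B-GPTQ",
)
```

Once the download completes, refresh the Model list in the UI as in step 5 above.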

## GIBBERISH OUTPUT IN `text-generation-webui`?

Please read the Provided Files section below. You should use `wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors` unless you are able to use the latest GPTQ-for-LLaMa code.

If you're using a text-generation-webui one-click installer, you MUST use `wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors`.

## Provided files

Two files are provided. **The 'latest' file will not work unless you use a recent version of GPTQ-for-LLaMa.**

Specifically, the 'latest' file uses `--act-order` for maximum quantisation quality and will not work with oobabooga's fork of GPTQ-for-LLaMa. Therefore at this time it will also not work with `text-generation-webui` one-click installers.

The 'compat' file will be used by default in text-generation-webui, so you don't need to do anything special to use it. If you want to use the 'latest' file instead, remove the 'compat' file - but only do this if you are able to use the latest GPTQ-for-LLaMa code.

* `wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. No act-order.
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py wizardLM-7B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors
    ```
* `wizardLM-7B-GPTQ-4bit-128g.latest.act-order.safetensors`
  * Only works with recent GPTQ-for-LLaMa code
  * **Does not** work with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. Act-order.
  * Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
  * Command used to create the GPTQ:
    ```
    CUDA_VISIBLE_DEVICES=0 python3 llama.py wizardLM-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors wizardLM-7B-GPTQ-4bit-128g.act-order.safetensors
    ```
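
For reference, these files can also be loaded directly from Python. The sketch below uses the third-party AutoGPTQ library rather than GPTQ-for-LLaMa - that substitution is my assumption, not part of this repo's instructions - and `model_basename` must match whichever file you keep (the 'compat' file is shown):

```python
# Hedged sketch: load the GPTQ file with the third-party AutoGPTQ library.
# This is an assumption, not the route documented in this repo; it requires
# `pip install auto-gptq transformers` and a quantize config for the repo.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/wizardLM-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order",
    use_safetensors=True,
    device="cuda:0",
)

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```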

## How to install manually in `text-generation-webui` and update GPTQ-for-LLaMa if necessary

File `wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).

[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).

The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa is used inside the UI.

If you want to use the act-order `safetensors` file and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone text-generation-webui, clone the Triton branch of GPTQ-for-LLaMa, and install GPTQ into the UI:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```

Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model wizardLM-7B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.

If you can't update GPTQ-for-LLaMa or don't want to, you can use `wizardLM-7B-GPTQ-4bit-128g.compat.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model info

## Overview of Evol-Instruct

Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions across a wide range of difficulty levels and skills, in order to improve the performance of LLMs.

![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_running.png)
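
To make the idea concrete, here is a toy sketch of the core loop - not the authors' actual pipeline, and the `llm` callable and prompt wording are purely illustrative assumptions: a pool of seed instructions is repeatedly rewritten by an LLM into more complex variants, and the evolved set is then used for instruction tuning.

```python
# Toy illustration of the Evol-Instruct idea (NOT the authors' actual pipeline).
# `llm` stands in for any text-completion model; the prompt wording is invented.
from typing import Callable, List

def evolve_instructions(seed: List[str], llm: Callable[[str], str], rounds: int = 3) -> List[str]:
    pool = list(seed)
    for _ in range(rounds):
        evolved = [
            # "In-depth evolving": ask the LLM to make each instruction harder.
            llm(f"Rewrite this instruction to be more complex:\n{instr}")
            for instr in pool
        ]
        pool.extend(evolved)  # the pool grows with harder variants each round
    return pool
```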