license: other
inference: false
WizardLM: An Instruction-following LLM Using Evol-Instruct
These files are the result of merging the delta weights with the original LLaMA 7B model.
The code for merging is provided in the official WizardLM GitHub repo.
WizardLM-7B 4bit GPTQ
This repo contains 4bit GPTQ models for GPU inference, quantised using GPTQ-for-LLaMa.
PERFORMANCE ISSUES
For reasons I can't yet understand, there are performance problems with these 4bit GPTQs that I have not experienced with any other GPTQ 7B or 13B models.
I have re-made the GPTQs several times, trying various versions of GPTQ-for-LLaMa code. But I currently can't resolve it.
Using the act-order.safetensors file with the Triton code performs acceptably for me, testing on a 4090: around 10-13 tokens/s. But the no-act-order.safetensors file, tested on the older CUDA oobabooga GPTQ-for-LLaMa code, returns only 4 tokens/s.
I will keep investigating and trying to work out what's happening here. But for the moment, if you're not able to use Triton GPTQ-for-LLaMa, you may want to try another 7B GPTQ model.
GIBBERISH OUTPUT IN text-generation-webui?
Please read the Provided files section below. You should use wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors unless you are able to use the latest GPTQ-for-LLaMa code.
If you're using a text-generation-webui one-click installer, you MUST use wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors.
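If both .safetensors files end up in your text-generation-webui model folder, the UI may load whichever one it finds first. A minimal sketch of keeping only the recommended file in place (assuming the model lives in text-generation-webui/models/wizardLM-7B-GPTQ; the folder name is illustrative):
# Move the act-order file out of the model folder so only the no-act-order file is found
cd text-generation-webui/models/wizardLM-7B-GPTQ
mv wizardLM-7B-GPTQ-4bit-128g.act-order.safetensors ~/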
Provided files
Two files are provided. The second file will not work unless you use a recent version of GPTQ-for-LLaMa.
Specifically, the second file uses --act-order for maximum quantisation quality and will not work with oobabooga's fork of GPTQ-for-LLaMa. Therefore at this time it will also not work with text-generation-webui one-click installers.
Unless you are able to use the latest GPTQ-for-LLaMa code, please use wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors.
wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors
- Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
- Works with text-generation-webui one-click-installers
- Works on Windows
- Parameters: Groupsize = 128g. No act-order.
- Command used to create the GPTQ:
CUDA_VISIBLE_DEVICES=0 python3 llama.py wizardLM-7B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors
wizardLM-7B-GPTQ-4bit-128g.act-order.safetensors
- Only works with recent GPTQ-for-LLaMa code
- Does not work with text-generation-webui one-click-installers
- Parameters: Groupsize = 128g. act-order.
- Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
- Command used to create the GPTQ:
CUDA_VISIBLE_DEVICES=0 python3 llama.py wizardLM-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors wizardLM-7B-GPTQ-4bit-128g.act-order.safetensors
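If you want to sanity-check the speed of a quantised file yourself, GPTQ-for-LLaMa's llama.py can reload a saved file and run a generation benchmark. A rough sketch, assuming your checkout supports the --load, --benchmark and --check flags shown in the GPTQ-for-LLaMa README (substitute the act-order file if you are on a recent Triton checkout):
# Reload the quantised weights and benchmark generation of a 2048-token sequence
CUDA_VISIBLE_DEVICES=0 python3 llama.py wizardLM-7B-HF c4 --wbits 4 --groupsize 128 --load wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors --benchmark 2048 --check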
How to run in text-generation-webui
File wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors can be loaded the same as any other GPTQ file, without requiring any updates to oobabooga's text-generation-webui.
Instructions on using GPTQ 4bit files in text-generation-webui are here.
The other safetensors model file was created using --act-order to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa code is used inside the UI.
If you want to use the act-order safetensors file and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
Then install this model into text-generation-webui/models and launch the UI as shown below.
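One possible layout for the model folder (the directory and file names here are illustrative; the directory name just needs to match the --model argument passed to server.py):
# Create a folder for this model inside text-generation-webui/models
mkdir -p text-generation-webui/models/wizardLM-7B-GPTQ
# Copy or download your chosen .safetensors file, plus the tokenizer and config
# files from this repo (config.json, tokenizer.model, tokenizer_config.json, etc.),
# into that folder
With the model files in place, launch the UI: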
cd text-generation-webui
python server.py --model wizardLM-7B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
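If you still need to install those dependencies, a minimal sketch (assuming both repositories provide a requirements.txt; check each project's README for the authoritative steps):
# Install text-generation-webui dependencies
cd text-generation-webui
pip install -r requirements.txt
# Install GPTQ-for-LLaMa dependencies from inside the webui's repositories directory
cd repositories/GPTQ-for-LLaMa
pip install -r requirements.txt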
If you can't update GPTQ-for-LLaMa or don't want to, you can use wizardLM-7B-GPTQ-4bit-128g.no-act-order.safetensors as mentioned above, which should work without any upgrades to text-generation-webui.
Original model info
Overview of Evol-Instruct
Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and a wide range of skills, in order to improve the performance of LLMs.