|
---
license: apache-2.0
tags:
- AWQ
inference: false
---
|
|
|
# Falcon-7B-Instruct (4-bit 64g AWQ Quantized) |
|
[Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae), based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets.
|
|
|
This repository contains a 4-bit, group-size-64 AWQ-quantized version of that model. For more information about AWQ quantization, see the [llm-awq repository](https://github.com/mit-han-lab/llm-awq).
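As a rough intuition, zero-point quantization at group size 64 maps each group of 64 consecutive weights to 16 integer levels using one scale and one offset per group. The sketch below illustrates only this arithmetic on random data; it is not the AWQ algorithm itself, which additionally searches for activation-aware per-channel scales to protect salient weights before quantizing.

```python
import torch

# Toy illustration of 4-bit zero-point quantization at group size 64.
# NOT the AWQ algorithm itself; values here are random and hypothetical.
w = torch.randn(64)  # one group of 64 consecutive weights

w_min, w_max = w.min(), w.max()
scale = (w_max - w_min) / 15              # 4 bits -> 16 levels (0..15)
zero_point = torch.round(-w_min / scale)  # asymmetric offset for this group

q = torch.clamp(torch.round(w / scale) + zero_point, 0, 15)  # integer codes
w_hat = (q - zero_point) * scale                             # dequantized weights

print(f"max abs reconstruction error: {(w - w_hat).abs().max().item():.4f}")
```

The per-group scale and zero point are what the `q_group_size: 64` and `zero_point: True` settings in the loading code below refer to.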
|
|
|
## Model Date |
|
|
|
July 5, 2023 |
|
|
|
## Model License |
|
|
|
Please refer to the original Falcon model license ([link](https://huggingface.co/tiiuae/falcon-7b-instruct)).

Please refer to the AWQ quantization license ([link](https://github.com/mit-han-lab/llm-awq/blob/main/LICENSE)).
|
|
|
## CUDA Version |
|
|
|
This model was successfully tested with CUDA driver v530.30.02 and runtime v11.7 on Python v3.10.11. Please note that AWQ requires an NVIDIA GPU with compute capability `8.0` or higher; you can verify this with the check below.
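If you are unsure whether your GPU qualifies, a quick check with PyTorch (assuming a CUDA-enabled `torch` install):

```python
import torch

# AWQ's fused kernels target compute capability >= 8.0,
# e.g., A100 (8.0), RTX 30xx (8.6), RTX 40xx (8.9).
major, minor = torch.cuda.get_device_capability()
print(f"CUDA runtime {torch.version.cuda}, compute capability {major}.{minor}")
assert (major, minor) >= (8, 0), "AWQ kernels require compute capability 8.0+"
```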
|
|
|
## How to Use |
|
|
|
```bash
# Install llm-awq at the pinned commit and build its CUDA kernels
git clone https://github.com/mit-han-lab/llm-awq \
&& cd llm-awq \
&& git checkout ce4a6bb1c238c014a06672cb74f6865573494d66 \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
```
|
|
|
```python
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import snapshot_download

model_name = "abhinavkulkarni/tiiuae-falcon-7b-instruct-w4-g64-awq"

# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)

# Tokenizer
try:
    tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name, trust_remote_code=True)
except Exception:
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)

# Model: initialize an empty (meta-device) model, swap in AWQ quantized
# layers, then load the quantized checkpoint weights.
w_bit = 4
q_config = {
    "zero_point": True,
    "q_group_size": 64,
}

load_quant = snapshot_download(model_name)

with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config=config,
                                             torch_dtype=torch.float16, trust_remote_code=True)

real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model.tie_weights()

model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")

# Inference
prompt = '''What is the difference between nuclear fusion and fission?
###Response:'''

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.7,
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer)
```
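`TextStreamer` prints tokens to stdout as they are generated; to capture the completion as a string instead, decode the returned tensor with `tokenizer.decode(output[0], skip_special_tokens=True)`.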
|
|
|
## Evaluation |
|
|
|
This evaluation was done using [LM-Eval](https://github.com/EleutherAI/lm-evaluation-harness). |
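As a rough, version-dependent sketch, the FP16 baseline numbers below can be reproduced through the harness's Python API (the `hf-causal-experimental` model type and these `model_args` follow the v0.3.x releases and may differ in other versions; evaluating the AWQ checkpoint additionally requires wrapping the quantized model loaded above in the harness's model interface):

```python
from lm_eval import evaluator

# Hypothetical invocation, assuming lm-evaluation-harness v0.3.x.
results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=tiiuae/falcon-7b-instruct,trust_remote_code=True,dtype=float16",
    tasks=["wikitext"],
)
print(results["results"]["wikitext"])  # word_perplexity, byte_perplexity, bits_per_byte
```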
|
|
|
[Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) |
|
|
|
| Task |Version| Metric | Value | |Stderr|
|--------|------:|---------------|------:|---|------|
|wikitext| 1|word_perplexity|14.5069| | |
| | |byte_perplexity| 1.6490| | |
| | |bits_per_byte | 0.7216| | |
|
|
|
[Falcon-7B-Instruct (4-bit 64-group AWQ)](https://huggingface.co/abhinavkulkarni/tiiuae-falcon-7b-instruct-w4-g64-awq) |
|
|
|
| Task |Version| Metric | Value | |Stderr|
|--------|------:|---------------|------:|---|------|
|wikitext| 1|word_perplexity|14.8667| | |
| | |byte_perplexity| 1.6566| | |
| | |bits_per_byte | 0.7282| | |
|
|
|
|
|
## Acknowledgements |
|
|
|
*Paper coming soon* 😊. In the meantime, you can use the following information to cite Falcon:
|
```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```
|
|
|
|
|
This model was quantized using the AWQ technique. If you find AWQ useful or relevant to your research, please cite the paper:
|
|
|
```
@article{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
  journal={arXiv},
  year={2023}
}
```
|
|
|
|