---
license: apache-2.0
tags:
- AWQ
inference: false
---

# Falcon-40b-Instruct (4-bit 128g AWQ Quantized)

[Falcon-40b-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is a 40B-parameter causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of chat/instruct datasets.

This model is a 4-bit, 128-group-size AWQ quantized version of Falcon-40b-Instruct. For more information about AWQ quantization, see the [llm-awq repository](https://github.com/mit-han-lab/llm-awq).

## Model Date

July 5, 2023

## Model License

Please refer to the original Falcon model license ([link](https://huggingface.co/tiiuae/falcon-40b-instruct)) and to the AWQ quantization license ([link](https://github.com/mit-han-lab/llm-awq/blob/main/LICENSE)).

## CUDA Version

This model was successfully tested with CUDA driver v530.30.02, CUDA runtime v11.7, and Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability of `8.0` or higher.

## How to Use

First, install the AWQ library and its CUDA kernels:

```bash
git clone https://github.com/mit-han-lab/llm-awq \
&& cd llm-awq \
&& git checkout f084f40bd996f3cf3a0633c1ad7d9d476c318aaa \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
```

Then load the quantized checkpoint and run inference:

```python
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from awq.utils.utils import simple_dispatch_model
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer
from accelerate import init_empty_weights, infer_auto_device_map, load_checkpoint_in_model
from huggingface_hub import snapshot_download

model_name = "abhinavkulkarni/tiiuae-falcon-40b-instruct-w4-g128-awq"

# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)

# Tokenizer
try:
    tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name, trust_remote_code=True)
except Exception:
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)

# Quantization config
w_bit = 4
q_config = {
    "zero_point": True,
    "q_group_size": 128,
}

# Initialize an empty (meta-device) model and set up the quantized weight layout
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config=config, torch_dtype=torch.float16, trust_remote_code=True)

real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model.tie_weights()

# Infer device_map
device_map = infer_auto_device_map(
    model,
    no_split_module_classes=[
        "OPTDecoderLayer", "LlamaDecoderLayer", "BloomBlock", "MPTBlock", "DecoderLayer"]
)

# Load the quantized weights and dispatch the model across available devices
load_checkpoint_in_model(
    model,
    checkpoint=snapshot_download(model_name),
    device_map=device_map,
    offload_state_dict=True,
)
model = simple_dispatch_model(model, device_map=device_map)

# Inference
prompt = f'''What is the difference between nuclear fusion and fission?
###Response:'''

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    temperature=0.7,
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer,
)
```

## Evaluation

This evaluation was done using [LM-Eval](https://github.com/EleutherAI/lm-evaluation-harness).
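As a rough, unofficial sanity check (not the LM-Eval procedure itself), token-level perplexity on the WikiText-2 test split can be estimated with a minimal sketch like the one below. It assumes the `model` and `tokenizer` objects created in the snippet above, the `datasets` library, and a 2048-token context window; because it measures token-level rather than word- or byte-level perplexity, the result will not match the table values exactly.

```python
import math

import torch
import torch.nn.functional as F
from datasets import load_dataset

# Rough sanity check only: token-level perplexity on the WikiText-2 test split.
# Assumes `model` and `tokenizer` are the objects created in the snippet above.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

window = 2048  # Falcon's context length (assumed here)
total_nll, total_tokens = 0.0, 0
for start in range(0, encodings.input_ids.size(1), window):
    input_ids = encodings.input_ids[:, start:start + window].cuda()
    if input_ids.size(1) < 2:
        continue
    with torch.no_grad():
        logits = model(input_ids).logits
    # Shift so each position predicts the next token, then sum the negative log-likelihood.
    shift_logits = logits[:, :-1, :].float()
    shift_labels = input_ids[:, 1:]
    nll = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="sum",
    )
    total_nll += nll.item()
    total_tokens += shift_labels.numel()

print(f"Token-level perplexity: {math.exp(total_nll / total_tokens):.4f}")
```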
[Falcon-40b-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct)

| Task |Version| Metric |Value | |Stderr|
|--------|------:|---------------|-----:|---|------|
|wikitext| 1|word_perplexity|8.8219| | |
| | |byte_perplexity|1.5025| | |
| | |bits_per_byte |0.5874| | |

[Falcon-40b-Instruct (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/tiiuae-falcon-40b-instruct-w4-g128-awq)

| Task |Version| Metric |Value | |Stderr|
|--------|------:|---------------|-----:|---|------|
|wikitext| 1|word_perplexity|8.9237| | |
| | |byte_perplexity|1.5058| | |
| | |bits_per_byte |0.5905| | |

## Acknowledgements

*Paper coming soon* 😊. In the meantime, you can use the following information to cite the Falcon model:

```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

The model was quantized using the AWQ technique. If you find AWQ useful or relevant to your research, please kindly cite the paper:

```
@article{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
  journal={arXiv},
  year={2023}
}
```