---
license: cc-by-sa-3.0
language:
- en
tags:
- AWQ
inference: false
---

# VMware/open-llama-7B-v2-open-instruct (4-bit 128g AWQ Quantized)

Instruction-tuned version of the fully trained Open LLaMA 7B v2 model. The model is open for COMMERCIAL USE.
This is a 4-bit AWQ quantized model with a group size of 128. For more information about AWQ quantization, please click [here](https://github.com/mit-han-lab/llm-awq).

## Model Date

July 12, 2023

## Model License

Please refer to the original OpenLLaMA model license ([link](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct)).

Please refer to the AWQ quantization license ([link](https://github.com/mit-han-lab/llm-awq/blob/main/LICENSE)).

## CUDA Version

This model was successfully tested on CUDA driver v530.30.02 and runtime v11.7 with Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability of `8.0` or higher.

For Docker users, the `nvcr.io/nvidia/pytorch:23.06-py3` image (CUDA runtime v12.1, otherwise the same configuration as above) has also been verified to work.

## How to Use

```bash
git clone https://github.com/mit-han-lab/llm-awq \
&& cd llm-awq \
&& git checkout ce4a6bb1c238c014a06672cb74f6865573494d66 \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
```

```python
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import snapshot_download

model_name = "abhinavkulkarni/VMware-open-llama-7b-v2-open-instruct"

# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)

# Tokenizer
try:
    tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name, trust_remote_code=True)
except Exception:
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)

# Model
w_bit = 4
q_config = {
    "zero_point": True,
    "q_group_size": 128,
}

load_quant = snapshot_download(model_name)

with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config=config, torch_dtype=torch.float16,
                                             trust_remote_code=True)

real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model.tie_weights()

model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")

# Inference
prompt = f'''What is the difference between nuclear fusion and fission?
###Response:'''

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    temperature=0.7,
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer)
```

## Evaluation

This evaluation was done using [LM-Eval](https://github.com/EleutherAI/lm-evaluation-harness).
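As a quick local sanity check (not the LM-Eval methodology behind the numbers below, which reports word-level perplexity on WikiText), token-level perplexity of the already-loaded quantized model can be estimated directly with `transformers`. This is a minimal sketch that assumes the `model` and `tokenizer` objects from the usage code above; the sample text is arbitrary:

```python
import math
import torch

# Minimal sketch: token-level perplexity of the quantized model on an
# arbitrary text sample (not the WikiText word-level perplexity reported below).
sample = "Nuclear fusion combines light nuclei, while fission splits heavy nuclei."
enc = tokenizer(sample, return_tensors="pt").to("cuda")
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean cross-entropy per token
print(f"Token-level perplexity: {math.exp(loss.item()):.2f}")
```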
[Open-LLaMA-7B-v2-Instruct](https://huggingface.co/VMware/open-llama-7b-v2-open-instruct)

| Task    |Version| Metric        | Value |   |Stderr|
|---------|------:|---------------|------:|---|------|
|wikitext |      1|word_perplexity|16.6822|   |      |
|         |       |byte_perplexity| 1.6927|   |      |
|         |       |bits_per_byte  | 0.7593|   |      |

[Open-LLaMA-7B-v2-Instruct (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/VMware-open-llama-7b-v2-open-instruct-w4-g128-awq)

| Task    |Version| Metric        | Value |   |Stderr|
|---------|------:|---------------|------:|---|------|
|wikitext |      1|word_perplexity|17.1546|   |      |
|         |       |byte_perplexity| 1.7015|   |      |
|         |       |bits_per_byte  | 0.7668|   |      |

## Acknowledgements

If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:

```
@software{openlm2023openllama,
  author = {Geng, Xinyang and Liu, Hao},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = May,
  year = 2023,
  url = {https://github.com/openlm-research/open_llama}
}
```

```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = April,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```

```
@article{touvron2023llama,
  title={Llama: Open and efficient foundation language models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

The model was quantized using the AWQ technique. If you find AWQ useful or relevant to your research, please kindly cite the paper:

```
@article{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
  journal={arXiv},
  year={2023}
}
```