abhinavkulkarni committed
Commit e885f32
1 Parent(s): c59e771

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -18,7 +18,7 @@ July 5, 2023
 
 ## Model License
 
- Please refer to original MPT model license ([link](https://huggingface.co/VMware/open-llama-7b-open-instruct)).
+ Please refer to original OpenLLaMa model license ([link](https://huggingface.co/VMware/open-llama-7b-open-instruct)).
 
 Please refer to the AWQ quantization license ([link](https://github.com/llm-awq/blob/main/LICENSE)).
 
@@ -61,7 +61,7 @@ q_config = {
     "q_group_size": 128,
 }
 
- load_quant = hf_hub_download('abhinavkulkarni/open-llama-7b-open-instruct-w4-g128-awq', 'pytorch_model.bin')
+ load_quant = hf_hub_download('abhinavkulkarni/VMWare-open-llama-7b-open-instruct-w4-g128-awq', 'pytorch_model.bin')
 
 with init_empty_weights():
     model = AutoModelForCausalLM.from_config(config=config,
@@ -100,7 +100,7 @@ This evaluation was done using [LM-Eval](https://github.com/EleutherAI/lm-evalua
 | | |byte_perplexity| 1.5853| | |
 | | |bits_per_byte | 0.6648| | |
 
- [Open-LLaMA-7B-Instruct (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/open-llama-7b-open-instruct-w4-g128-awq)
+ [Open-LLaMA-7B-Instruct (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/VMware-open-llama-7b-open-instruct-w4-g128-awq)
 
 | Task |Version| Metric | Value | |Stderr|
 |--------|------:|---------------|------:|---|------|
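For context, the `load_quant` line changed in the second hunk sits inside the README's model-loading snippet, only a few lines of which are visible above. Below is a minimal end-to-end sketch of how the renamed checkpoint might be fetched and loaded. Everything not shown in the diff, i.e. the `zero_point` flag, the `real_quantize_model_weight` call from the llm-awq repo, `load_checkpoint_and_dispatch`, and the `device_map` choice, is an assumption based on the typical llm-awq loading flow rather than a copy of the README.

```python
# Minimal loading sketch, assuming the llm-awq repo
# (https://github.com/mit-han-lab/llm-awq) is installed; the exact import path
# of real_quantize_model_weight may differ between versions of that repo.
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import hf_hub_download
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

from awq.quantize.quantizer import real_quantize_model_weight

model_name = "abhinavkulkarni/VMWare-open-llama-7b-open-instruct-w4-g128-awq"

# 4-bit weights with group size 128, matching the "-w4-g128-" suffix of the
# repo name; the zero_point flag is an assumption, not visible in the diff.
w_bit = 4
q_config = {
    "zero_point": True,
    "q_group_size": 128,
}

config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Download the quantized weights from the renamed Hub repo.
load_quant = hf_hub_download(model_name, 'pytorch_model.bin')

# Build the model skeleton without allocating real weights, replace its Linear
# layers with AWQ quantized layers, then load the checkpoint onto the GPU(s).
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config=config,
                                             torch_dtype=torch.float16,
                                             trust_remote_code=True)

real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)

model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")
model.eval()
```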
 
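As a small cross-check on the evaluation rows that appear as context in the last hunk: in LM-Eval's convention, `bits_per_byte` should simply be the base-2 logarithm of `byte_perplexity`, and the two reported values are consistent with that (quick check below, assuming that convention).

```python
import math

# LM-Eval reports bits_per_byte as the base-2 log of byte_perplexity,
# so the two context rows in the last hunk should agree with each other.
byte_perplexity = 1.5853
print(round(math.log2(byte_perplexity), 4))  # 0.6648, matching the bits_per_byte row
```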