TheBloke committed
Commit 286b640
1 Parent(s): e0b8ec7

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -68,7 +68,7 @@ Each separate quant is in a different branch. See below for instructions on fetching from different branches.
 
 | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
-| main | 4 | 128 | False | 35332232264.00 GB | False | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| main | 4 | 128 | False | 35.33 GB | False | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
 | gptq-4bit-32g-actorder_True | 4 | 32 | True | 40.66 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
 | gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
 | gptq-4bit-128g-actorder_True | 4 | 128 | True | 36.65 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
@@ -101,7 +101,7 @@ pip3 install git+https://github.com/huggingface/transformers
 ExLlama is not currently compatible with Llama 2 70B but support is expected soon.
 
 1. Click the **Model tab**.
-2. Under **Download custom model or LoRA**, enter `%%REPO_GPTQ`.
+2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-70B-chat-GPTQ`.
   - To download from a specific branch, enter for example `TheBloke/Llama-2-70B-chat-GPTQ:gptq-4bit-32g-actorder_True`
   - see Provided Files above for the list of branches for each option.
 3. Click **Download**.
 
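The first of the two fixes is simple: the old `File Size` cell for `main` held the raw byte count mislabelled as GB. A quick sanity check of the conversion, as a minimal sketch using only the two numbers visible in the diff:

```python
# The old cell showed the raw byte count (35,332,232,264) with a "GB"
# suffix; dividing by 10**9 yields the corrected decimal-GB figure
# that appears in the + line of the diff.
size_bytes = 35_332_232_264
size_gb = size_bytes / 10**9
print(f"{size_gb:.2f} GB")  # -> 35.33 GB
```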
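The second fix fills in the `repo:branch` download syntax. Outside text-generation-webui, the branch name after the colon maps onto the Hub's `revision` parameter. A minimal sketch with `huggingface_hub` (the local destination directory is illustrative, not from the README):

```python
from huggingface_hub import snapshot_download

# Each quant lives in its own branch of the repo, so the part after
# the colon in "repo:branch" becomes the `revision`; with no branch
# given, the default `main` quant is fetched.
spec = "TheBloke/Llama-2-70B-chat-GPTQ:gptq-4bit-32g-actorder_True"
repo_id, _, branch = spec.partition(":")
snapshot_download(
    repo_id=repo_id,
    revision=branch or "main",
    local_dir="Llama-2-70B-chat-GPTQ",  # illustrative local path
)
```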