TheBloke committed
Commit d2d87ff
1 Parent(s): cfe23a2

Upload new GPTQs with varied parameters

Files changed (1)
  1. README.md +69 -39
README.md CHANGED
@@ -1,8 +1,9 @@
  ---
- inference: false
- license: other
  datasets:
  - bavest/fin-llama-dataset
  tags:
  - finance
  - llm
@@ -16,7 +17,7 @@ tags:
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
  <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -26,42 +27,64 @@ tags:
  # Bavest's Fin Llama 33B GPTQ

- These files are GPTQ 4bit model files for [Bavest's Fin Llama 33B](https://huggingface.co/bavest/fin-llama-33b-merged).

- It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

  ## Repositories available

- * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/fin-llama-33B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/fin-llama-33B-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bavest/fin-llama-33b-merged)

- ## Prompt template
-
- Standard Alpaca prompting:

  ```
- A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question.
- ### Instruction: prompt

- ### Response:
- ```
- or
  ```
- A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question.
- ### Instruction: prompt

- ### Input:

- ### Response:
  ```

- ## How to easily download and use this model in text-generation-webui

- Please make sure you're using the latest version of text-generation-webui

  1. Click the **Model tab**.
  2. Under **Download custom model or LoRA**, enter `TheBloke/fin-llama-33B-GPTQ`.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done"
  5. In the top left, click the refresh icon next to **Model**.
@@ -75,14 +98,13 @@ Please make sure you're using the latest version of text-generation-webui
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

- `pip install auto-gptq`

  Then try the following example code:

  ```python
  from transformers import AutoTokenizer, pipeline, logging
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
- import argparse

  model_name_or_path = "TheBloke/fin-llama-33B-GPTQ"
  model_basename = "fin-llama-33b-GPTQ-4bit--1g.act.order"
@@ -92,16 +114,32 @@ use_triton = False
  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
-         model_basename=model_basename,
          use_safetensors=True,
          trust_remote_code=False,
          device="cuda:0",
          use_triton=use_triton,
          quantize_config=None)

  prompt = "Tell me about AI"
- prompt_template=f'''### Instruction: {prompt}
- ### Response:'''

  print("\n\n*** Generate:")
@@ -128,26 +166,18 @@ pipe = pipeline(
  print(pipe(prompt_template)[0]['generated_text'])
  ```

- ## Provided files
-
- **fin-llama-33b-GPTQ-4bit--1g.act.order.safetensors**
-
- This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

- It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.

- * `fin-llama-33b-GPTQ-4bit--1g.act.order.safetensors`
-   * Works with AutoGPTQ in CUDA or Triton modes.
-   * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
-   * Works with text-generation-webui, including one-click-installers.
-   * Parameters: Groupsize = -1. Act Order / desc_act = True.

  <!-- footer start -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:

- [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

  ## Thanks, and how to contribute.
@@ -162,9 +192,9 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

- **Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.

  Thank you to all my generous patrons and donaters!
 
  ---
  datasets:
  - bavest/fin-llama-dataset
+ inference: false
+ license: other
+ model_type: llama
  tags:
  - finance
  - llm
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
  <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 
  # Bavest's Fin Llama 33B GPTQ

+ These files are GPTQ model files for [Bavest's Fin Llama 33B](https://huggingface.co/bavest/fin-llama-33b-merged).
+
+ Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

+ These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).

  ## Repositories available

+ * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/fin-llama-33B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/fin-llama-33B-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bavest/fin-llama-33b-merged)
+ ## Prompt template: Alpaca

  ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction: {prompt}
+
+ ### Response:
  ```
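Filling this template in code is straightforward; a minimal sketch (the `build_prompt` helper is illustrative only, not part of the model's API):

```python
# Minimal sketch: build an Alpaca-style prompt for this model.
# ALPACA_TEMPLATE mirrors the template shown above; only the instruction
# text needs to be supplied.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction: {prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Return the full Alpaca-style prompt for a single instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Summarise the key risks mentioned in this quarterly report."))
```

The resulting string is what gets passed to the tokenizer or pipeline, as in the Python example further down.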
 
 
+ ## Provided files

+ Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
+
+ Each separate quant is in a different branch. See below for instructions on fetching from different branches.
+
+ | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
+ | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
+ | main | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_False | 8 | 128 | False | 33.73 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+ | gptq-3bit--1g-actorder_True | 3 | None | True | 12.92 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
+ | gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
+
+ ## How to download from branches
+
+ - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/fin-llama-33B-GPTQ:gptq-4bit-32g-actorder_True`
+ - With Git, you can clone a branch with:
+ ```
+ git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/fin-llama-33B-GPTQ
  ```
+ - In Python Transformers code, the branch is the `revision` parameter; see below.
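A branch can also be fetched programmatically; a minimal sketch, assuming the `huggingface_hub` library is installed (`pip install huggingface_hub`), which this README does not otherwise require:

```python
# Minimal sketch: download one quantisation branch of this repo into the
# local Hugging Face cache. The branch name goes in `revision`, exactly as
# with the `revision` parameter used in the AutoGPTQ example below.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/fin-llama-33B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
)
print(f"Files downloaded to: {local_path}")
```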
+ ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

+ Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+
+ It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

  1. Click the **Model tab**.
  2. Under **Download custom model or LoRA**, enter `TheBloke/fin-llama-33B-GPTQ`.
+   - To download from a specific branch, enter for example `TheBloke/fin-llama-33B-GPTQ:gptq-4bit-32g-actorder_True`
+   - see Provided Files above for the list of branches for each option.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done"
  5. In the top left, click the refresh icon next to **Model**.
 
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

+ `GITHUB_ACTIONS=true pip install auto-gptq`

  Then try the following example code:

  ```python
  from transformers import AutoTokenizer, pipeline, logging
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

  model_name_or_path = "TheBloke/fin-llama-33B-GPTQ"
  model_basename = "fin-llama-33b-GPTQ-4bit--1g.act.order"
  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+         model_basename=model_basename,
          use_safetensors=True,
          trust_remote_code=False,
          device="cuda:0",
          use_triton=use_triton,
          quantize_config=None)
+ """
+ To download from a specific branch, use the revision parameter, as in this example:
+
+ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+         revision="gptq-4bit-32g-actorder_True",
+         model_basename=model_basename,
+         use_safetensors=True,
+         trust_remote_code=False,
+         device="cuda:0",
+         quantize_config=None)
+ """
+
  prompt = "Tell me about AI"
+ prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction: {prompt}
+
+ ### Response:
+ '''

  print("\n\n*** Generate:")
  print(pipe(prompt_template)[0]['generated_text'])
  ```

+ ## Compatibility

+ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

+ ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
  <!-- footer start -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:

+ [TheBloke AI's Discord server](https://discord.gg/theblokeai)

  ## Thanks, and how to contribute.
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

+ **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.

  Thank you to all my generous patrons and donaters!