metadata
inference: false
license: other
datasets:
  - bavest/fin-llama-dataset
tags:
  - finance
  - llm
  - llama
  - trading

Bavest's Fin Llama 33B GPTQ

These files are GPTQ 4bit model files for Bavest's Fin Llama 33B.

It is the result of quantising to 4bit using AutoGPTQ.
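
For reference, here is a rough sketch of how a 4-bit GPTQ file with these settings could be produced with AutoGPTQ (the calibration text and output path are illustrative only, not the exact commands used; the quantisation parameters match those listed under "Provided files" below):

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

source_model = "bavest/fin-llama-33b-merge"
tokenizer = AutoTokenizer.from_pretrained(source_model, use_fast=True)

# group_size=-1 and desc_act=True correspond to the parameters of the provided file
quantize_config = BaseQuantizeConfig(bits=4, group_size=-1, desc_act=True)

model = AutoGPTQForCausalLM.from_pretrained(source_model, quantize_config)

# AutoGPTQ needs a small set of tokenized calibration samples; one short example shown here
examples = [tokenizer("The stock market is a collection of exchanges where shares of companies are traded.")]
model.quantize(examples)

model.save_quantized("fin-llama-33B-GPTQ", use_safetensors=True)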

Repositories available

  • TheBloke/fin-llama-33B-GPTQ: 4-bit GPTQ model files for GPU inference (this repository).
  • bavest/fin-llama-33b-merge: Bavest's original, unquantised merged model (see the original model card below).

Prompt template

Standard Alpaca prompting:

A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question.
### Instruction: prompt

### Response: 

or

A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question.
### Instruction: prompt

### Input:

### Response:
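
A minimal sketch of filling in the first template from Python (the instruction text and variable name are just illustrative):

instruction = "What is a stop-loss order?"  # illustrative instruction

prompt = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's question.\n"
    f"### Instruction: {instruction}\n\n"
    "### Response: "
)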

How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui

  1. Click the Model tab.
  2. Under Download custom model or LoRA, enter TheBloke/fin-llama-33B-GPTQ.
  3. Click Download.
  4. The model will start downloading. Once it's finished it will say "Done"
  5. In the top left, click the refresh icon next to Model.
  6. In the Model dropdown, choose the model you just downloaded: fin-llama-33B-GPTQ
  7. The model will automatically load, and is now ready for use!
  8. If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.
  • Note that you do not need to set GPTQ parameters any more. These are set automatically from the file quantize_config.json.
  9. Once you're ready, click the Text Generation tab and enter a prompt to get started!
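
If you prefer to download the files outside the UI, here is a minimal sketch using huggingface_hub (assuming a recent version is installed; the local directory is only an example):

from huggingface_hub import snapshot_download

# Download every file from the repo into a folder that text-generation-webui can see
snapshot_download(
    repo_id="TheBloke/fin-llama-33B-GPTQ",
    local_dir="text-generation-webui/models/TheBloke_fin-llama-33B-GPTQ",
)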

How to use this GPTQ model from Python code

First make sure you have AutoGPTQ installed:

pip install auto-gptq

Then try the following example code:

from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/fin-llama-33B-GPTQ"
model_basename = "fin-llama-33b-GPTQ-4bit--1g.act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

prompt = "Tell me about AI"
prompt_template=f'''### Instruction: {prompt}
### Response:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])

Provided files

fin-llama-33b-GPTQ-4bit--1g.act.order.safetensors

This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.

  • fin-llama-33b-GPTQ-4bit--1g.act.order.safetensors
    • Works with AutoGPTQ in CUDA or Triton modes.
    • Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
    • Works with text-generation-webui, including one-click-installers.
    • Parameters: Groupsize = -1. Act Order / desc_act = True.

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

Patreon special mentions: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.

Thank you to all my generous patrons and donaters!

Original model card: Bavest's Fin Llama 33B

FIN-LLAMA

Efficient Finetuning of Quantized LLMs for Finance

Adapter Weights | Dataset

Installation

To load models in 4 bits with transformers and bitsandbytes, you have to install accelerate and transformers from source and make sure you have the latest version of the bitsandbytes library (0.39.0).

pip3 install -r requirements.txt
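
One way to satisfy the from-source requirement, assuming your environment allows pip installs directly from GitHub:

pip install -U git+https://github.com/huggingface/transformers.git
pip install -U git+https://github.com/huggingface/accelerate.git
pip install -U "bitsandbytes>=0.39.0"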

Other dependencies

If you want to finetune the model on a new instance, you can run setup.sh to install the Python and CUDA packages.

bash scripts/setup.sh

Finetuning

bash script/finetune.sh

Usage

Quantization parameters are controlled from the BitsAndBytesConfig:

  • Loading in 4 bits is activated through load_in_4bit.
  • The datatype used for linear layer computations is set with bnb_4bit_compute_dtype.
  • Nested quantization is activated through bnb_4bit_use_double_quant.
  • The datatype used for quantization is specified with bnb_4bit_quant_type. Note that there are two supported quantization datatypes: fp4 (four-bit float) and nf4 (normal four-bit float). The latter is theoretically optimal for normally distributed weights and we recommend using nf4.

For example:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

pretrained_model_name_or_path = "bavest/fin-llama-33b-merge"
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=pretrained_model_name_or_path,
    load_in_4bit=True,
    device_map='auto',
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4'
    ),
)

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path)

question = "What is the market cap of apple?"
input = "" # context if needed

prompt = f"""
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question.
'### Instruction:\n{question}\n\n### Input:{input}\n""\n\n### Response: 
"""

input_ids = tokenizer.encode(prompt, return_tensors="pt").to('cuda:0')

with torch.no_grad():
    generated_ids = model.generate(
        input_ids,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
        max_length=128
    )

generated_text = tokenizer.decode(
    generated_ids[0], skip_special_tokens=True
)
print(generated_text)

Dataset for FIN-LLAMA

The dataset is released under bigscience-openrail-m. You can find the dataset used to train FIN-LLAMA models on HF at bavest/fin-llama-dataset.

Known Issues and Limitations

Here is a list of known issues and bugs. If your issue is not reported here, please open a new issue and describe the problem. See QLORA for any other limitations.

  1. 4-bit inference is slow. Currently, our 4-bit inference implementation is not yet integrated with the 4-bit matrix multiplication.
  2. Currently, using bnb_4bit_compute_dtype='fp16' can lead to instabilities.
  3. Make sure that tokenizer.bos_token_id = 1 to avoid generation issues (see the sketch below).
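
For point 3, a minimal sanity check might look like this (assuming tokenizer was loaded as in the Usage example above):

# Verify the LLaMA BOS token id; a wrong value can cause generation issues
if tokenizer.bos_token_id != 1:
    tokenizer.bos_token_id = 1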

Acknowledgements

We thank Meta for releasing the LLaMA models, without which this work would not have been possible.

This repo builds on the Stanford Alpaca, QLORA, Chinese-Guanaco and LMSYS FastChat repos.

License and Intended Use

We release the resources associated with QLoRA finetuning in this repository under the GPL-3.0 license. In addition, we release the FIN-LLAMA model family for base LLaMA model sizes of 7B, 13B, 33B, and 65B. These models are intended for purposes in line with the LLaMA license and require access to the LLaMA models.

Prompts

Act as an Accountant

I want you to act as an accountant and come up with creative ways to manage finances. You'll need to consider budgeting, investment strategies and risk management when creating a financial plan for your client. In some cases, you may also need to provide advice on taxation laws and regulations in order to help them maximize their profits. My first suggestion request is "Create a financial plan for a small business that focuses on cost savings and long-term investments".

Paged Optimizer

You can access the paged optimizer with the argument --optim paged_adamw_32bit
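
If you configure training from Python rather than through the shell script, the same optimizer can be selected via transformers' TrainingArguments (a sketch assuming a recent transformers with bitsandbytes installed; output_dir is a placeholder):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",
    optim="paged_adamw_32bit",  # paged 32-bit AdamW from bitsandbytes
)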

Cite

@misc{Fin-LLAMA,
  author = {William Todt and Ramtin Babaei and Pedram Babaei},
  title = {Fin-LLAMA: Efficient Finetuning of Quantized LLMs for Finance},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Bavest/fin-llama}},
}