---
license: other
inference: false
---
# OpenAssistant LLaMA 30B SFT 7 GGML
This is a repo of GGML format models for OpenAssistant's LLaMA 30B SFT 7.
It is the result of merging the XORs from OpenAssistant's repo with the original Llama 30B weights, then quantising to 4bit and 5bit GGML for CPU inference using llama.cpp.
This is epoch 7 of OpenAssistant's training of their Llama 30B model.
## Repositories available
- 4bit GPTQ models for GPU inference.
- 4bit and 5bit GGML models for CPU inference.
- Unquantised 16bit model in HF format.
## Prompt template
This model requires the following prompt template:
```
<|prompter|> prompt goes here
<|assistant|>:
```
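For example, a complete single-turn prompt with a hypothetical question would look like this:

```
<|prompter|>What is the capital of France?
<|assistant|>:
```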
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
|---|---|---|---|---|---|
| OpenAssistant-30B-epoch7.ggml.q4_0.bin | q4_0 | 4bit | 20.3GB | 23GB | Maximum compatibility |
| OpenAssistant-30B-epoch7.ggml.q4_2.bin | q4_2 | 4bit | 20.3GB | 23GB | Best compromise between resources, speed and quality |
| OpenAssistant-30B-epoch7.ggml.q5_0.bin | q5_0 | 5bit | 22.4GB | 25GB | Brand-new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resource usage. |
| OpenAssistant-30B-epoch7.ggml.q5_1.bin | q5_1 | 5bit | 24.4GB | 27GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |
- The q4_0 file provides lower quality, but maximum compatibility. It will work with past and future versions of llama.cpp.
- The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
- The q5_0 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_0.
- The q5_1 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_1.
## q4_2 compatibility
q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.
To use this file you will need recent llama.cpp code. It is also possible that future updates to llama.cpp will require the file to be regenerated.
If and when the q4_2 file no longer works with recent versions of llama.cpp, I will endeavour to update it.
If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.
## q5_0 and q5_1 compatibility
These new methods were merged into llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild it to be able to use them.
Don't expect any third-party UIs/tools to support them yet.
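If you build llama.cpp from source, updating is a pull and rebuild. A minimal sketch, assuming a Linux or macOS system with git and make available:

```bash
# Clone llama.cpp (or run `git pull` inside an existing checkout), then rebuild
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```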
## How to run in llama.cpp
I use the following command line; adjust for your tastes and needs:
```
./main -t 18 -m OpenAssistant-30B-epoch7.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>Write a story about llamas <|assistant|>:"
```
Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
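llama.cpp can also be run interactively for multi-turn chat, using a reverse prompt to hand control back to you whenever the model emits the prompter token. A minimal sketch, assuming your llama.cpp build includes interactive mode (`-i`) and reverse prompts (`-r`):

```bash
./main -t 8 -m OpenAssistant-30B-epoch7.ggml.q4_2.bin --color -c 2048 \
  --temp 0.7 --repeat_penalty 1.1 -n -1 \
  -i -r "<|prompter|>" -p "<|prompter|>Hello, who are you?<|assistant|>:"
```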
## How to run in text-generation-webui
GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.
Further instructions here: text-generation-webui/docs/llama.cpp-models.md.
Note: at this time text-generation-webui does not yet support the new q5 quantisation methods.
Thireus has written a great guide on how to update it to the latest llama.cpp code so that these files can be used in the UI.
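As an illustration, loading a GGML file this way usually just means copying it into the webui's models directory; the paths below are assumptions based on a standard checkout:

```bash
# Hypothetical paths -- adjust for your own text-generation-webui install
cp OpenAssistant-30B-epoch7.ggml.q4_0.bin text-generation-webui/models/
```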
# Original model card
```yaml
llama-30b-sft-7:
  dtype: fp16
  log_dir: "llama_log_30b"
  learning_rate: 1e-5
  model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
  #model_name: OpenAssistant/llama-30b-super-pretrain
  output_dir: llama_model_30b
  deepspeed_config: configs/zero3_config_sft.json
  weight_decay: 0.0
  residual_dropout: 0.0
  max_length: 2048
  use_flash_attention: true
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 12
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 3
  eval_steps: 101
  save_steps: 485
  num_train_epochs: 4
  save_total_limit: 3
  use_custom_sampler: true
  sort_by_length: false
  #save_strategy: steps
  save_strategy: epoch
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
        val_split: 0.05
    - vicuna:
        val_split: 0.05
        max_val_set: 800
        fraction: 1.0
    - dolly15k:
        val_split: 0.05
        max_val_set: 300
    - grade_school_math_instructions:
        val_split: 0.05
    - code_alpaca:
        val_split: 0.05
        max_val_set: 250
```
- OASST dataset paper: https://arxiv.org/abs/2304.07327