---
inference: false
license: other
---

Jon Durbin's Airoboros MPT 30B GPT4 1.4 GGML
These files are GGML format model files for Jon Durbin's Airoboros MPT 30B GPT4 1.4.
Please note that these GGML files are not compatible with llama.cpp, nor currently with text-generation-webui. Please see below for a list of tools known to work with these model files.
KoboldCpp just added GPU accelerated (OpenCL) support for MPT models, so that is the client I recommend using for these models.
Note: Please make sure you're using KoboldCpp version 1.32.3 or later, which fixes a number of MPT-related bugs.
Repositories available
- 2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference
- Unquantised fp16 model in pytorch format, for GPU inference and for further conversions
Prompt template
USER: prompt
ASSISTANT:
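As a small illustration, here is a minimal sketch of assembling a prompt string in this format from Python. The `format_prompt` helper name and the single-turn layout are assumptions for illustration only, not something defined by the original card.

```python
def format_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the USER:/ASSISTANT: format shown above.

    Illustrative sketch only: multi-turn history handling and any system
    prompt are assumptions, not specified by the model card.
    """
    return f"USER: {user_message}\nASSISTANT:"


print(format_prompt("Write a haiku about quantized language models."))
```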
A note regarding context length: 8K
The base model has an 8K context length. KoboldCpp supports 8K context if you manually set it by adjusting the text box above the context slider.
It is currently unknown whether the increased context length works with other MPT GGML clients.
If you have feedback on this, please let me know.
Compatibility
These files are not compatible with text-generation-webui, llama.cpp, or llama-cpp-python.
Currently they can be used with:
- KoboldCpp, a powerful llama.cpp-based inference engine with a good UI and GPU-accelerated support for MPT models
- The ctransformers Python library, which includes LangChain support (see the sketch after this list)
- The LoLLMS Web UI, which uses ctransformers
- rustformers' llm
- The example `mpt` binary provided with ggml
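As a concrete example, below is a minimal sketch of loading one of these files with the ctransformers Python library. The local file name, `gpu_layers` value and `context_length` setting are assumptions for illustration; as noted above, 8K context has only been confirmed working in KoboldCpp.

```python
from ctransformers import AutoModelForCausalLM

# Load a local GGML file (file name is an assumption for illustration).
# model_type="mpt" selects the MPT architecture; gpu_layers offloads some
# layers to the GPU, trading VRAM for lower RAM use (see the "Provided files"
# table below for CPU-only RAM estimates).
llm = AutoModelForCausalLM.from_pretrained(
    "airoboros-mpt-30b-gpt4.ggmlv0.q4_0.bin",
    model_type="mpt",
    gpu_layers=20,          # assumption: tune to whatever fits your VRAM
    context_length=8192,    # assumption: 8K context is unverified outside KoboldCpp
)

prompt = "USER: Write a haiku about quantized language models.\nASSISTANT:"
print(llm(prompt, max_new_tokens=128, temperature=0.7))
```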
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
Tutorial for using LoLLMS Web UI
Provided files
Name | Quant method | Bits | Size | Max RAM required | Use case |
---|---|---|---|---|---|
airoboros-mpt-30b-gpt4.ggmlv0.q4_0.bin | q4_0 | 4 | 16.85 GB | 19.35 GB | Original llama.cpp quant method, 4-bit. |
airoboros-mpt-30b-gpt4.ggmlv0.q4_1.bin | q4_1 | 4 | 18.73 GB | 21.23 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
airoboros-mpt-30b-gpt4.ggmlv0.q5_0.bin | q5_0 | 5 | 20.60 GB | 23.10 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
airoboros-mpt-30b-gpt4.ggmlv0.q5_1.bin | q5_1 | 5 | 22.47 GB | 24.97 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
airoboros-mpt-30b-gpt4.ggmlv0.q8_0.bin | q8_0 | 8 | 31.83 GB | 34.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
Discord
For further support, and discussions on these models and AI in general, join us at:
Thanks, and how to contribute.
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
Patreon special mentions: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
Original model card: Jon Durbin's Airoboros MPT 30B GPT4 1.4
Overview
This is a test of qlora fine-tuning of the mpt-30b model, with 5 epochs.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response" fields (see the sketch after this list)
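For illustration, here is a minimal sketch of writing such a JSONL training file with plain Python; the file name and example record below are assumptions, not taken from the actual dataset.

```python
import json

# Each line is one JSON object with "instruction" and "response" keys,
# matching the input format described in the list above.
records = [
    {
        "instruction": "Explain in one sentence what a GGML quantized model is.",
        "response": "A GGML model stores weights at reduced precision so it can run "
                    "on CPU (optionally with partial GPU offload) using much less memory.",
    },
]

# Output file name is an assumption for illustration.
with open("airoboros_train.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```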
I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1
See the mpt-30b-qlora-compatible model card for training details.
This is not as high quality as the llama-33b versions unfortunately, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?
License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that competes with OpenAI
- what does compete actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise restrictively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.