Load 4bit models 4x faster
Native bitsandbytes 4bit pre-quantized models
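Because the bitsandbytes 4bit quantization config is stored inside each checkpoint, the models can be loaded with plain `transformers` and no extra setup. A minimal sketch, assuming `transformers`, `accelerate` and `bitsandbytes` are installed; the repo name is one example from this collection:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any checkpoint from this collection works; gemma-7b is used as an example.
model_id = "unsloth/gemma-7b-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4bit quantization config ships inside the checkpoint, so no explicit
# BitsAndBytesConfig is needed at load time.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```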
We have a Google Colab Tesla T4 notebook for Gemma 7b here: https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing
All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
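The notebooks all follow the same pattern. A condensed sketch of the load-and-prepare steps, assuming the `unsloth` package is installed; the hyperparameters shown are illustrative defaults, not tuned values:

```python
from unsloth import FastLanguageModel

# Load a pre-quantized 4bit model: no quantization step happens at load time,
# which is where the faster loading and lower memory use come from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gemma-7b-bnb-4bit",
    max_seq_length = 2048,
    dtype = None,          # auto-detect: float16 on T4, bfloat16 on Ampere+
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small fraction of weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = True,
)
```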
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
| Llama-2 7b | ▶️ Start on Colab | 2.2x faster | 43% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama 34b A100 | ▶️ Start on Colab | 1.9x faster | 27% less |
| Mistral 7b 1xT4 | ▶️ Start on Kaggle | 5x faster* | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |
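After training, the finetuned model can be exported as described above. A hedged sketch using Unsloth's export helpers; the output directory names and quantization method are illustrative:

```python
# Export the finetuned model to GGUF for llama.cpp (q4_k_m is one common choice).
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method = "q4_k_m")

# Or merge the LoRA weights back to 16bit for vLLM or a Hugging Face upload.
model.save_pretrained_merged("merged_model", tokenizer, save_method = "merged_16bit")
```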