---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- tinyllama
---
# Finetune Mistral, Gemma, and Llama 2-5x faster with 70% less memory via Unsloth!
This is a reupload of https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
We have a Google Colab Tesla T4 notebook for TinyLlama, using RoPE scaling for a 4096 max sequence length, here: https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing
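As a minimal sketch of what the notebook does, the snippet below loads the model through Unsloth's `FastLanguageModel`, which applies RoPE scaling when `max_seq_length` exceeds the base context. The repo id `"unsloth/tinyllama"` is an assumption for this reupload; running it requires a CUDA GPU and `pip install unsloth`, so the heavy import is deferred into the function.

```python
def load_tinyllama(max_seq_length: int = 4096):
    """Sketch: load TinyLlama via Unsloth with RoPE scaling to max_seq_length.

    Assumes the repo id "unsloth/tinyllama" (hypothetical for this reupload).
    Needs a CUDA GPU and the `unsloth` package; the import is deferred so the
    sketch can be read without those dependencies installed.
    """
    from unsloth import FastLanguageModel  # deferred heavy import

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/tinyllama",
        max_seq_length=max_seq_length,  # Unsloth handles RoPE scaling internally
        load_in_4bit=True,              # 4-bit quantization fits a Tesla T4
    )
    return model, tokenizer


if __name__ == "__main__":
    model, tokenizer = load_tinyllama()
```

The defaults mirror the Colab notebook's settings (4096 context, 4-bit loading for a T4); adjust `max_seq_length` or disable `load_in_4bit` on larger GPUs.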
[Discord](https://discord.gg/u54VK8m8tk) | [Ko-fi](https://ko-fi.com/unsloth) | [GitHub](https://github.com/unslothai/unsloth)