---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- LoRA
- LoRA Adapter
- PEFT
base_model: unsloth/mistral-7b-bnb-4bit
datasets:
- liyucheng/ShareGPT90K
---
# Uploaded model

- **Developed by:** pacozaa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This is a LoRA adapter trained on liyucheng/ShareGPT90K. The number of training steps keeps growing over time, since I fine-tune incrementally in Colab; it is currently at 550 steps.
Run it on Ollama with:

`ollama run pacozaa/mistralsharegpt90`
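If you prefer to use the adapter directly from Hugging Face with Transformers + PEFT rather than through Ollama, a minimal loading sketch could look like the following. The adapter repo id `pacozaa/mistral-sharegpt90k` is an assumed placeholder, and the 4-bit base model requires `bitsandbytes` and `accelerate` to be installed.

```python
# Minimal sketch: load the 4-bit base model and apply this LoRA adapter with PEFT.
# "pacozaa/mistral-sharegpt90k" is an assumed placeholder for the adapter repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-bnb-4bit"
adapter_id = "pacozaa/mistral-sharegpt90k"  # placeholder; replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```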
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
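For context, a minimal sketch of what an Unsloth + TRL fine-tuning setup of this kind could look like is shown below, following the older `SFTTrainer` signature used in Unsloth's Colab notebooks. The LoRA rank, sequence length, learning rate, and the ShareGPT formatting step are illustrative assumptions; only the base model, dataset, and the 550-step count come from this card.

```python
# Hedged sketch of an Unsloth + TRL SFT setup like the one described above.
# LoRA rank, sequence length, batch size, learning rate, and the ShareGPT
# formatting function are illustrative assumptions, not the exact recipe.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank (assumed)
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

def to_text(example):
    # Flatten ShareGPT-style turns into plain text; assumes the usual
    # "conversations" field with "from"/"value" keys. The actual chat
    # template used for this adapter is not documented here.
    turns = example["conversations"]
    example["text"] = "\n".join(f'{t["from"]}: {t["value"]}' for t in turns)
    return example

dataset = load_dataset("liyucheng/ShareGPT90K", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=550,  # the step count reached so far, per the note above
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```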