---
base_model: Finnish-NLP/Ahma-7B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
- fi
datasets:
- mpasila/LumiOpenInstruct-GrypheSlimOrca-Mix
- LumiOpen/instruction-collection-fin
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
---
In case I can't upload the newest checkpoint here, you can check out [models.minipasila.net](https://models.minipasila.net/).
(Updated to the 3000th step.)
This is currently only the 3000th step (out of 3922), trained on Google Colab because I'm a little low on money, but at least Colab is free. While testing the LoRA it seems to perform fairly well. The only real issue with the base model is that its context size is only 2048 tokens.
This repo will receive safetensors files after training is fully finished.
Training used the ChatML format, but for some reason the LoRA seemed to work better with Mistral's [INST] formatting (possibly just because I haven't merged the model yet); both formats are sketched below.
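For reference, a rough sketch of the two prompt formats mentioned above; the exact template strings are an assumption and may not match the merged model's chat template.

```python
# Hypothetical example prompts only; the exact templates are assumptions
# and may differ from the tokenizer's chat template once the model is merged.

# ChatML format (what the LoRA was trained with):
chatml_prompt = (
    "<|im_start|>user\n"
    "Tell me briefly about Finland.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Mistral-style [INST] format (which anecdotally worked better in testing):
mistral_prompt = "[INST] Tell me briefly about Finland. [/INST]"
```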
The dataset used was [a mix](https://huggingface.co/datasets/mpasila/LumiOpenInstruct-GrypheSlimOrca-Mix) of these:
- [LumiOpen/instruction-collection-fin](https://huggingface.co/datasets/LumiOpen/instruction-collection-fin)
- [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)
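If you want to inspect or reuse the data mix, it should be loadable directly from the Hub; a minimal sketch, assuming a standard `train` split:

```python
from datasets import load_dataset

# Load the pre-mixed instruction dataset used for this LoRA
# (assumes the repo exposes a default "train" split).
mix = load_dataset("mpasila/LumiOpenInstruct-GrypheSlimOrca-Mix", split="train")
print(mix)
```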
LoRA: [mpasila/Ahma-SlimInstruct-LoRA-V0.1-7B](https://huggingface.co/mpasila/Ahma-SlimInstruct-LoRA-V0.1-7B)
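A minimal sketch of loading the adapter on top of the base model with Transformers and PEFT (assuming the LoRA repo contains a standard PEFT adapter; untested):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Finnish-NLP/Ahma-7B"
adapter_id = "mpasila/Ahma-SlimInstruct-LoRA-V0.1-7B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, device_map="auto", torch_dtype="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

prompt = "[INST] Tell me briefly about Finland. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)  # base context is only 2048 tokens
print(tokenizer.decode(output[0], skip_special_tokens=True))
```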
After I'm done training this I will probably try to do continued pre-training on Gemma 2 2B. I plan to include both Finnish and English data, along with some math data, and maybe some roleplaying data and books as well.
Or actually, I'll train Viking-7B again with basically the same mix of datasets as this one, but using the smaller version of the SlimSonnet dataset, since it was supposedly filtered to keep the most varied examples. Training on bigger datasets would probably make more sense once I get access to more compute.
Actually, scratch all of that: since [a new, actually multilingual model](https://huggingface.co/utter-project/EuroLLM-9B-Instruct) was released recently, I'll probably try fine-tuning that instead.
# Uploaded Ahma-SlimInstruct-LoRA-V0.1-7B model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** Finnish-NLP/Ahma-7B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)