|
--- |
|
license: apache-2.0 |
|
tags: |
|
- unsloth |
|
--- |
|
|
|
This repository contains the LoRA adapter for [Andy-3.5-small](https://huggingface.co/Sweaterdog/Andy-3.5)
|
|
|
# Why this exists |
|
|
|
This repo exists because I wanted to make Andy-3.5 and its derivatives, such as Andy-3.5-small-reasoning, fully open-source. With Unsloth, you can continue fine-tuning where I left off: if you have made your own dataset, you can keep tuning Andy-3.5 for your exact use case.
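As a rough illustration of picking up the fine-tune, here is a minimal sketch using Unsloth. The hyperparameters are assumptions, not my exact training settings, and actually running it requires a CUDA GPU with the `unsloth` package installed:

```python
# Hypothetical sketch: continue fine-tuning the Andy-3.5 LoRA adapter with Unsloth.
# The sequence length and 4-bit setting below are assumptions, not the original settings.

MAX_SEQ_LENGTH = 2048  # assumed context length for further tuning


def load_andy_for_training():
    """Load the Andy-3.5 adapter with Unsloth so training can resume.

    Sketch only: needs a CUDA GPU and `pip install unsloth`.
    """
    from unsloth import FastLanguageModel  # imported lazily, since it needs a GPU

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="Sweaterdog/Andy-3.5",  # this repo's LoRA checkpoint
        max_seq_length=MAX_SEQ_LENGTH,
        load_in_4bit=True,  # QLoRA-style memory savings
    )
    return model, tokenizer


if __name__ == "__main__":
    model, tokenizer = load_andy_for_training()
    # From here, train on your own dataset with your preferred trainer (e.g. TRL's SFTTrainer).
```

From the loaded model you would continue with your own dataset and trainer; the exact recipe is up to you.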
|
|
|
# What if I fine-tune off of Andy-3.5?
|
|
|
If you fine-tune Andy-3.5 or Andy-3.5-small-reasoning on your dataset, my dataset, **or any other dataset**, you **have** to credit me for making the base model, which is Andy-3.5. If you wish, you may refer to the base model as Andy-3.5-base.
|
|
|
# Why would I want to fine-tune off of Andy-3.5?
|
|
|
Andy-3.5 and Andy-3.5-Small have a significant amount of knowledge about Minecraft and MindCraft, but that knowledge is not unlimited. Either model can be trained further on Minecraft knowledge to improve it, and if you strive for maximum efficiency, it is best to continue fine-tuning a model that was already trained on data similar to yours.
|
|
|
# What should I call my model if I do tune it? |
|
|
|
You may name it whatever you'd like, but I would recommend a name that clearly references the fact that it originated from Andy-3.5-Small.
|
|
|
For example, if I trained Andy-3.5-small-reasoning on speedrunning tactics, I would call the model **Andy-3.5-Speedrun-S-&-R** or something similar.