Sweaterdog committed · verified
Commit 4a7e524 · 1 Parent(s): 0c37eb3

Update README.md

Files changed (1): README.md +23 −3
README.md CHANGED
@@ -1,3 +1,23 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ ---
+
+ This repository contains the LoRA adapter for [Andy-3.5-small](https://huggingface.co/Sweaterdog/Andy-3.5).
+
+ # Why this exists
+
+ This repo exists because I wanted to make Andy-3.5 and its derivatives, such as Andy-3.5-small-reasoning, fully open source. Via Unsloth, you can continue fine-tuning where I left off: if you have made your own dataset, you can keep tuning Andy-3.5 for your exact use case.
+
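A minimal sketch of what continuing that fine-tuning could look like with Unsloth and TRL. The dataset path, sequence length, and output name below are illustrative assumptions, not part of this repo, and exact `SFTTrainer` arguments vary between TRL versions; this requires a GPU and is a starting point, not a definitive recipe.

```python
# Sketch: resume LoRA fine-tuning from this adapter with Unsloth (assumes GPU).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from datasets import load_dataset

# Loading the adapter repo; Unsloth resolves the base model plus LoRA weights.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Sweaterdog/Andy-3.5",
    max_seq_length=4096,      # illustrative choice
    load_in_4bit=True,
)

# Your own dataset of training text (hypothetical file name).
dataset = load_dataset("json", data_files="my_minecraft_data.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
)
trainer.train()

# Save under a name that credits the base model (see naming section below).
model.save_pretrained("Andy-3.5-my-finetune")
```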
+ # What if I fine-tune off of Andy-3.5?
+
+ If you fine-tune Andy-3.5 or Andy-3.5-small-reasoning on your dataset, my dataset, **or any other dataset**, you **must** provide credit to me for making the base model, Andy-3.5. If you wish, you may call that model Andy-3.5-base.
+
+ # Why would I want to fine-tune off of Andy-3.5?
+
+ Andy-3.5 and Andy-3.5-Small have a significant amount of knowledge about Minecraft and MindCraft, but it is not unlimited. Either model can be trained further on Minecraft knowledge to improve it, and if you strive for maximum efficiency, it is best to continue fine-tuning a model on data similar to what it has already seen.
+
+ # What should I call my model if I do tune it?
+
+ You may name it whatever you'd like, but if I may suggest, I would recommend a name that clearly references the fact that it originated from Andy-3.5-Small.
+
+ For example, if I trained Andy-3.5-small-reasoning on speedrunning tactics, I would call the model **Andy-3.5-Speedrun-S-&-R** or something similar.