Uploaded models

  • Developed by: Sweaterdog
  • License: apache-2.0
  • Fine-tuned from model: unsloth/Qwen2.5-7B-bnb-4bit

The MindCraft LLM tuning CSV file can be found in the MindCraft-LLM repository; it can be tweaked as needed.

This is a very early-access beta model.

This model is NOT a final version; it is a test of how well models can perform with a small dataset. The dataset is also an experiment in how much smaller models can be improved by extremely high-quality examples that are as close to real-world scenarios as possible.

The model listed here (Andy-3.5-beta-10) is NOT the final model, but a preview of the new training method. It performs well at playing Minecraft and can even play with no instructions other than its history. That said, it was trained on a small dataset, so it does not cover every example it may need; the final version will have a much larger dataset. If you want to use this model, you can test Modelfile or Modelfile 2. I haven't had a chance to dig into which performs better, but the model is decent: not the best, but better than a non-tuned model.
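If you want to try one of the provided Modelfiles, a typical way to load it with Ollama looks like this (a sketch: the model tag `andy-3.5-beta` is a placeholder, and the Modelfile must point at your downloaded GGUF weights):

```shell
# Build a local Ollama model from the provided Modelfile
# (run from the directory containing the Modelfile and the weights)
ollama create andy-3.5-beta -f Modelfile

# Chat with it interactively
ollama run andy-3.5-beta
```

Swap in `Modelfile 2` for the `-f` argument to compare the two variants.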

Where data came from

The memory-storing examples are real examples from in-game interactions.

The coding examples are artificial and were generated by GPT-o1, with the instruction to include reasoning and thinking in the code comments.

The gameplay examples are artificial and were written by me, a human, using prompts focused on points where some models fail, such as mining.

This model should not be taken as a reflection of how well smaller models can play Minecraft. If it performs well, and better than Andy-v2-qwen, then yay! If not, I wasn't expecting it to be better (and neither should you!)

You are totally allowed to test the beta model.

I hope this model performs well for you!

ALSO

The models are going to change: I am adjusting the tuning hyperparameters to (hopefully) increase performance and decrease hallucinations.

BTW, if you want to download this model, I suggest using llama.cpp to make a quantization of it. I would have done it during tuning, but I ran out of GPU time on Google Colab.
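For reference, quantizing with llama.cpp generally looks like the following (a sketch: the local checkpoint path, output filenames, and chosen quant type are placeholders, and the exact script and binary names depend on your llama.cpp version):

```shell
# Convert the downloaded Hugging Face checkpoint to a 16-bit GGUF
python convert_hf_to_gguf.py ./Andy-v3.5-Beta --outfile andy-3.5-beta-f16.gguf

# Quantize the 16-bit GGUF down to 4-bit (Q4_K_M is a common balance
# of size and quality; other types like Q5_K_M or Q8_0 also work)
./llama-quantize andy-3.5-beta-f16.gguf andy-3.5-beta-q4_k_m.gguf Q4_K_M
```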

GGUF

  • Model size: 7.62B params
  • Architecture: qwen2
  • Available quantizations: 2-bit, 4-bit, 5-bit, 8-bit, 16-bit
  • Downloads last month: 1,203


Model tree for Sweaterdog/Andy-v3.5-Beta

  • Base model: Qwen/Qwen2.5-7B
  • Quantized versions of this model: 3
  • Dataset used to train this model: MindCraft-LLM