---
license: cc-by-4.0
datasets:
  - Aiwensile2/Minecraft_QA-pairs_Instruction_Dataset
---

# MineMA: Fine-Tuned Models for Minecraft Q&A

## Overview

This repository presents the MineMA series of models, fine-tuned specifically for Minecraft-related Q&A tasks. Using the parameter-efficient LoRA method, we adapted pre-trained LLaMA models to respond accurately and effectively to Minecraft-related instructions and queries. The fine-tuning process relies on a Minecraft dataset generated specifically for this purpose, ensuring relevant and accurate Q&A responses.

## Models

The MineMA series includes several models fine-tuned on different base models from the LLaMA series. Below is the list of the fine-tuned models provided in this repository:

- **MineMA-8B** (v1, v2, v3, v4), derived from the base model LLaMA-3-8B-Instruct.
- **MineMA-13B** (v1, v2), derived from the base model LLaMA-2-13B-Chat.
- **MineMA-70B** (v1, v2), derived from the base model LLaMA-3-70B-Instruct.

All of these models were fine-tuned on the Minecraft_QA-pairs_Instruction_Dataset. For now we have released four MineMA-8B models and two MineMA-70B models; more will follow. Because the full model weights are relatively large, the MineMA-70B models are provided as LoRA adapters, which must be combined with the base LLaMA-3-70B-Instruct model before use.
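Combining a LoRA adapter with its base model amounts to adding a scaled low-rank product to each adapted weight matrix (in practice, library merge utilities such as those in `peft` do this for you). A minimal NumPy sketch of the arithmetic, using small illustrative shapes rather than real 70B dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in = 32, 32   # illustrative shapes; real projection matrices are far larger
r, alpha = 16, 32      # rank/alpha matching the MineMA-70B-v1 training parameters

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.02  # trained low-rank factor (r x d_in)
B = rng.standard_normal((d_out, r)) * 0.02 # trained low-rank factor (d_out x r)

# Merged weight used at inference: W' = W + (alpha / r) * (B @ A)
W_merged = W + (alpha / r) * (B @ A)
```

The merged matrix has the same shape as the base weight, so inference after merging costs no more than running the base model alone.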

## Fine-Tuning Methodology

### LoRA Method for Fine-Tuning

We employed the LoRA (Low-Rank Adaptation) method for fine-tuning our models. LoRA is a parameter-efficient training technique that introduces small, trainable low-rank matrices to adapt a pre-trained neural network, allowing for targeted updates without the need for retraining the entire model. This method strikes a balance between computational efficiency and training effectiveness.
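To make the efficiency concrete, here is a quick parameter count for a single weight matrix (the dimension is illustrative, roughly in the ballpark of an 8B-scale projection matrix; it is not a value from this repository):

```python
d = 4096  # illustrative hidden size of one square projection matrix
r = 64    # LoRA rank used for MineMA-8B-v1 and v3

full_params = d * d          # a full fine-tune updates the whole matrix
lora_params = r * d + d * r  # LoRA trains only A (r x d) and B (d x r)

print(full_params, lora_params, lora_params / full_params)
# → 16777216 524288 0.03125
```

For this matrix, LoRA trains about 3% of the parameters that a full fine-tune would update, which is what makes adapting 8B- and 70B-scale models tractable.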

### Training Parameters

Here are the specific training parameters:

| Model | lora_r | lora_alpha | lora_dropout | learning_rate | weight_decay | Single Round? |
|---|---|---|---|---|---|---|
| MineMA-13B-v1 | 64 | 128 | 0.1 | 1e-4 | 1e-4 | False |
| MineMA-13B-v2 | 128 | 256 | 0.1 | 1e-4 | 1e-4 | False |
| MineMA-8B-v1 | 64 | 128 | 0.1 | 1e-4 | 1e-4 | True |
| MineMA-8B-v2 | 32 | 64 | 0.1 | 1e-4 | 1e-4 | False |
| MineMA-8B-v3 | 64 | 128 | 0.1 | 1e-4 | 1e-4 | False |
| MineMA-8B-v4 | 128 | 256 | 0.1 | 1e-4 | 1e-4 | False |
| MineMA-70B-v1 | 16 | 32 | 0.1 | 1e-4 | 1e-4 | True |
| MineMA-70B-v2 | 64 | 128 | 0.1 | 1e-4 | 1e-4 | False |
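As a sketch of how one of these rows might map onto a fine-tuning configuration using the Hugging Face `peft` and `transformers` APIs. The `target_modules`, `output_dir`, and other unlisted settings are assumptions for illustration, not values published with these models:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Hyperparameters from the MineMA-8B-v1 row; target_modules is an assumption.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="minema-8b-v1-lora",  # hypothetical output path
    learning_rate=1e-4,
    weight_decay=1e-4,
)
```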

## Dataset

We used the Minecraft_QA-pairs_Instruction_Dataset to fine-tune every model in the MineMA series. The dataset contains 390,317 instruction entries designed specifically for Minecraft-related Q&A tasks. You can access it via the following link:

[Minecraft_QA-pairs_Instruction_Dataset](https://huggingface.co/datasets/Aiwensile2/Minecraft_QA-pairs_Instruction_Dataset)

## Usage

### Prompts

We recommend using the following prompts:

**System message:** You are a Large Language Model, and your task is to answer questions posed by users about Minecraft. Utilize your knowledge and understanding of the game to provide detailed, accurate, and helpful responses. Use your capabilities to assist users in solving problems, understanding game mechanics, and enhancing their Minecraft experience.

**User message:** [A question about Minecraft]
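In practice the model's tokenizer applies the chat template for you (e.g. via `apply_chat_template`). As an illustration of what the LLaMA-3-Instruct format looks like when assembled by hand, with the example question being a made-up placeholder:

```python
SYSTEM = (
    "You are a Large Language Model, and your task is to answer questions posed "
    "by users about Minecraft. Utilize your knowledge and understanding of the "
    "game to provide detailed, accurate, and helpful responses. Use your "
    "capabilities to assist users in solving problems, understanding game "
    "mechanics, and enhancing their Minecraft experience."
)

def build_prompt(system: str, user: str) -> str:
    """Assemble a LLaMA-3-Instruct-style prompt string (illustrative sketch)."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(SYSTEM, "How do I craft a beacon?")
```

Note that this template applies to the LLaMA-3-based models; the MineMA-13B models inherit LLaMA-2-Chat's different prompt format.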

### Example Code

Example code demonstrating how to use the models can be found in `Model usage method.ipynb`.

### Environment Setup

The dependencies required to run the models are listed in `requirements.txt`.

Run the following command to install all dependencies:

```shell
pip install -r requirements.txt
```

## Details

### License

These models are made available under the Creative Commons Attribution 4.0 International License.

### DOI

10.57967/hf/2488