---
base_model:
  - unsloth/Llama-3.2-1B-Instruct
library_name: peft
language:
  - en
license: cc0-1.0
---

Disclaimer: for now, the experimentation has not led me anywhere due to the limited resources I have, and I do not recommend downloading this model. Still working on it.

A PEFT (LoRA) "Finnegan-tune" of LLaMA 3.2-1B-Instruct on part of a Finnegans Wake dataset, for text generation in the style of James Joyce.

Space: https://huggingface.co/spaces/genaforvena/huivam_finnegans_spaceship
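
For reference, a minimal loading sketch with transformers + peft (the adapter repo id below is a placeholder for this repo; the prompt follows the training template shown further down):

# Minimal sketch: load the base model and attach the PEFT adapter.
# "genaforvena/huivam-finnegans" is a placeholder -- substitute this repo's actual id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.2-1B-Instruct"
adapter_id = "genaforvena/huivam-finnegans"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "### GIVEN THE CONTEXT: riverrun  ### INSTRUCTION: continue  ### RESPONSE IS: "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))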

Iteration 2:

Dataset: prepared the same way as before (I forgot to save the config with the new dataset).

finnetune.yaml:

# The ID of the dataset you created
dataset: huivam-finnegans-2

# Configuration for text completion fine-tuning
text_completion:
  # How the fields of the JSON dataset should be formatted into the input text
  input_template: "### GIVEN THE CONTEXT: {context}  ### INSTRUCTION: {instruction}  ### RESPONSE IS: "

  # How the fields of the JSON dataset should be formatted into the output text
  output_template: "ANSWER: {response}"

# The Fireworks model name of the base model
base_model: accounts/fireworks/models/llama-v3p2-1b-instruct
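
For illustration, roughly how one dataset row renders under these templates (a sketch of the string formatting only, not Fireworks' internal code):

# Sketch: apply the input/output templates from finnetune.yaml to one JSONL row.
row = {
    "context": "riverrun, past Eve and Adam's...",
    "instruction": "Sir Tristram, violer d'amores...",
    "response": "O here here how hoth sprowled...",
}
input_text = ("### GIVEN THE CONTEXT: {context}  ### INSTRUCTION: {instruction}  "
              "### RESPONSE IS: ").format(**row)
output_text = "ANSWER: {response}".format(**row)
print(input_text + output_text)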

Finne-tuning commands used:

./firectl create dataset huivam-finnegans-2 .\finnegans_wake_dataset_2.jsonl
./firectl create fine-tuning-job --settings-file finnetune.yaml --epochs=3 --learning-rate=2e-5 --batch-size=8

New params used to finne-tune:

Text Completion:
  Input Template: ### GIVEN THE CONTEXT: {context}  ### INSTRUCTION: {instruction}  ### RESPONSE IS:
  Output Template: ANSWER: {response}
Base Model: accounts/fireworks/models/llama-v3p2-1b-instruct
Epochs: 3
Learning Rate: 2e-05
Lora Rank: 8
Batch Size: 8
Evaluation Split: 0

Spent: $0.08 Time: 5 mins
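
Once the job finishes and the model is deployed, it can be queried through Fireworks' OpenAI-compatible endpoint. A rough sketch (the model id below is a placeholder; use the one firectl reports for the job):

# Sketch: query the fine-tuned model on fireworks.ai via the OpenAI-compatible API.
# The model id is a placeholder -- substitute the deployed model's actual id.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<FIREWORKS_API_KEY>",
)
prompt = "### GIVEN THE CONTEXT: riverrun  ### INSTRUCTION: continue  ### RESPONSE IS: "
resp = client.completions.create(
    model="accounts/<account>/models/<fine-tuned-model-id>",  # placeholder
    prompt=prompt,
    max_tokens=64,
)
print(resp.choices[0].text)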

Iteration 1:

I prepared the dataset like this (the imports and constants at the top are filled in here; the file names and chunk size are assumed values):

import json
import random

from transformers import AutoTokenizer

MODEL_NAME = "unsloth/Llama-3.2-1B-Instruct"   # tokenizer of the base model
INPUT_FILE = "finnegans_wake.txt"              # raw Finnegans Wake text (assumed filename)
OUTPUT_FILE = "finnegans_wake_dataset.jsonl"   # JSONL output (assumed filename)
CHUNK_SIZE = 512                               # tokens per chunk (assumed value)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Load the text
with open(INPUT_FILE, "r", encoding="utf-8") as file:
    text = file.read()

# Tokenize the text
tokens = tokenizer.encode(text, truncation=False, add_special_tokens=False)

# Split tokens into chunks
chunks = [tokens[i:i + CHUNK_SIZE] for i in range(0, len(tokens), CHUNK_SIZE)]

# Prepare dataset
dataset = []
for chunk in chunks:
    chunk_text = tokenizer.decode(chunk, skip_special_tokens=True)
    
    # Split the chunk into three parts randomly
    split_points = sorted(random.sample(range(len(chunk_text)), 2))  # Two random split points
    context = chunk_text[:split_points[0]]
    instruction = chunk_text[split_points[0]:split_points[1]]
    response = chunk_text[split_points[1]:]
    
    # Add to dataset
    dataset.append({
        "context": context,
        "instruction": instruction,
        "response": response,
    })

# Save dataset locally as a .jsonl file
with open(OUTPUT_FILE, "w", encoding="utf-8") as file:
    for item in dataset:
        json.dump(item, file)
        file.write("\n")

print(f"Dataset saved locally to {OUTPUT_FILE}")

Example of dataset entry:

{"context": "riverrun, past Eve and Adam's, from swerve of shore to bend of bay...", "instruction": "Sir Tristram, violer d'amores, fr'over the short sea...", "response": "O here here how hoth sprowled met the duskt the father of fornicationists..."}

Fine-tuned on roughly 1/10th of the text on fireworks.ai with these params:

dataset: finnegans_wake_dataset

text_completion:
  # How the fields of the JSON dataset should be formatted into the input text
  input_template: "### GIVEN THE CONTEXT: {context}  ### INSTRUCTION: {instruction}  ### RESPONSE IS: "

  # How the fields of the JSON dataset should be formatted into the output text
  output_template: "ANSWER: {response}"

# The Fireworks model name of the base model
base_model: accounts/fireworks/models/llama-v3p2-1b

# Hyperparameters for fine-tuning (should be passed as args and removed from here)
hyperparameters:
  learning_rate: 1e-5  # Learning rate for the optimizer
  epochs: 1            # Number of epochs to train
  batch_size: 4        # Batch size for training

Spent: $0.01 Time: 2 mins

Result: Seemingly not enough data to affect model output.