# Dante-Zero Fine-tuned Model

This model was fine-tuned using reinforcement learning with Group Relative Policy Optimization (GRPO) to generate Dante-style poetry in endecasillabi (11-syllable lines).

## Model Details

- Base Model: PleIAs/Pleias-350m-Preview
- Training Method: GRPO (Group Relative Policy Optimization)
- Training Data: 1,000 chunks from Dante's Divine Comedy
- Epochs: 10
- Trained By: ruggsea
- Date: 2025-03-07
- Run Name: dante-zero-20250307-Pleias-350m-Preview
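
The card does not say which framework ran the GRPO training, so the following is only a minimal sketch of how such a run could be set up, assuming trl's `GRPOTrainer`; the data file name, the reward stub, and the generation settings are placeholders, not the actual training configuration.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Hypothetical file holding the 1,000 Divine Comedy chunks used as prompts
dataset = load_dataset("text", data_files="divina_commedia_chunks.txt")["train"]
dataset = dataset.rename_column("text", "prompt")  # GRPOTrainer expects a "prompt" column

def endecasillabo_reward(completions, **kwargs):
    # Placeholder stub; see the fuller sketch in the Reward Functions section
    return [0.0 for _ in completions]

training_args = GRPOConfig(
    output_dir="dante-zero-20250307-Pleias-350m-Preview",
    num_train_epochs=10,        # matches the epoch count reported above
    num_generations=8,          # completions sampled per prompt (assumed value)
    max_completion_length=256,  # assumed value
)

trainer = GRPOTrainer(
    model="PleIAs/Pleias-350m-Preview",
    reward_funcs=[endecasillabo_reward],  # the card lists four reward functions
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```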
## Model Description
This model is specialized in generating Italian poetry in the style of Dante Alighieri's Divine Comedy. It has been trained to:
- Generate proper endecasillabi (11-syllable lines)
- Follow the structure of Dante's poetry
- Avoid repetition
- Produce original content rather than reproducing the Divine Comedy verbatim
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer (left padding is the right choice for
# decoder-only generation)
model = AutoModelForCausalLM.from_pretrained("ruggsea/dante-zero-20250307-Pleias-350m-Preview")
tokenizer = AutoTokenizer.from_pretrained("ruggsea/dante-zero-20250307-Pleias-350m-Preview", padding_side="left")

# The tokenizer has no dedicated pad token, so reuse the EOS token
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id

# Generate poetry
prompt = "Nel mezzo del cammin di nostra vita"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.2,
    pad_token_id=tokenizer.pad_token_id,
)

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
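
The sampling settings above (temperature 0.7, top-p 0.9, repetition penalty 1.2) are a reasonable starting point: lowering the temperature tends to give more conservative, metrically regular verse, while raising it increases variety. Left padding only matters when generating for a batch of prompts of unequal length, but configuring it at load time keeps batched generation correct.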
## Reward Functions

The model was trained using several reward functions (an illustrative sketch of the first one follows the list):

- Endecasillabo Checker: Rewards proper 11-syllable lines
- Plagiarism Checker: Penalizes copying from the Divine Comedy
- Verse Structure Checker: Encourages verse-like structure
- Repetition Penalty: Discourages repetitive text
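
The reward implementations are not published on this card, so the following is only an illustrative sketch of what the Endecasillabo Checker could look like. The vowel-cluster heuristic and the reward shaping are assumptions, and real Italian scansion (dieresis, hiatus, stress position) is considerably more involved; the reward signature follows the trl `GRPOTrainer` convention, which may differ from the actual training code.

```python
import re

VOWELS = "aeiouàèéìòóù"

def count_syllables(line: str) -> int:
    """Very rough Italian syllable count: every maximal vowel cluster is
    one syllable, which crudely approximates in-word diphthongs and
    synalepha (vowel fusion) across word boundaries."""
    text = re.sub(rf"[^a-z{VOWELS} ]", "", line.lower())
    # Drop spaces between a word-final and a word-initial vowel (synalepha)
    text = re.sub(rf"(?<=[{VOWELS}]) +(?=[{VOWELS}])", "", text)
    return len(re.findall(rf"[{VOWELS}]+", text))

def endecasillabo_reward(completions, **kwargs):
    """Fraction of non-empty lines in each completion that scan as
    11 syllables under the rough count above."""
    rewards = []
    for text in completions:
        lines = [l for l in text.splitlines() if l.strip()]
        good = sum(1 for l in lines if count_syllables(l) == 11)
        rewards.append(good / len(lines) if lines else 0.0)
    return rewards
```

As a sanity check, this rough count returns 11 for the opening line "Nel mezzo del cammin di nostra vita", which is indeed an endecasillabo.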