Model Card for GPT-Neo Mythology Storyteller

Fine-tuned LLM (GPT-Neo) to complete mythological stories.

Model Details

Model Description

GPT-Neo Mythology Storyteller is a fine-tuned autoregressive language model based on EleutherAI's GPT-Neo 125M. It is designed to generate mythological narratives by completing an incomplete story excerpt with detailed contextual information. The model produces an output that includes the chapter (Parv), key event, section, and a full story continuation, making it a valuable tool for creative writing, interactive storytelling, and narrative exploration.

  • Developed by: Samurai719214
  • Model type: Autoregressive Language Model (GPT-Neo)
  • Language(s) (NLP): English (focused on mythological narratives)
  • License: MIT
  • Finetuned from model: EleutherAI/gpt-neo-125M

Uses

Direct Use

This model is intended for generating complete mythological narratives. When provided with an incomplete story excerpt, it produces a full narrative that includes the chapter (Parv), key event, section, and story continuation. It is well-suited for creative writing, narrative generation in games, and educational storytelling applications.

Downstream Use

The model can be integrated into creative writing assistants, interactive storytelling platforms, and educational tools where mythological content is desired. Developers may further fine-tune or adapt it for specific stylistic or domain-specific applications.

Out-of-Scope Use

  • Factual Reporting: The model is not designed for generating historically or factually accurate content.
  • Critical Decision-Making: It should not be used for applications where errors could have serious consequences.
  • Sensitive Cultural Content: While it deals with mythological themes, the output may reflect biases inherent in the training data and should be used with cultural sensitivity.

Bias, Risks, and Limitations

The model has been fine-tuned on mythological summaries and narratives, which may carry inherent cultural biases and stereotypical representations. It is designed for creative purposes and may generate imaginative, yet sometimes inaccurate or culturally insensitive content. Users should verify the content before using it in critical or educational contexts.

Recommendations

  • User Discretion: Review the generated content for cultural and contextual accuracy.
  • Further Fine-Tuning: Consider additional fine-tuning for applications requiring stricter controls on bias or style.
  • Feedback Loop: Encourage users to report inaccuracies or biases to improve future iterations of the model.

How to Get Started with the Model

Install the Hugging Face Transformers library and load the model using the identifier Samurai719214/gptneo-mythology-storyteller. A Gradio demo is available on the model's Hugging Face Space for interactive testing.

Example code:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("Samurai719214/gptneo-mythology-storyteller")
tokenizer = AutoTokenizer.from_pretrained("Samurai719214/gptneo-mythology-storyteller")
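
A minimal generation example, assuming the model expects an incomplete story excerpt as its prompt; the prompt text and decoding settings below are illustrative, not values prescribed by this card.

# Hypothetical incomplete excerpt; replace with your own story fragment.
prompt = "The Pandavas entered the grand assembly hall, unaware of the trap laid for them."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; generation parameters are illustrative defaults.
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))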

Training Details

The model was fine-tuned on mythological datasets sourced from Kaggle, including the Mahabharata Summary dataset. The training data consists of structured narrative text with fields for Parv (chapter), Key Event, Section, and full story summaries. Because the dataset is small, data augmentation techniques were employed to expand it.

Training Data

Kaggle dataset: The Mahabharata Summary

Training Procedure

Preprocessing

Data was preprocessed by extracting and combining the contextual columns (Parv, Key Event, Section) with the narrative text. During fine-tuning, each training example was constructed so that the input consisted of an instruction followed by an incomplete story excerpt, and the target was the complete narrative (header plus full story).
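
A minimal sketch of how such training examples could be assembled; the column names, prompt wording, and excerpt split are assumptions based on the fields described above, not the exact preprocessing script.

def build_example(row, excerpt_fraction=0.5):
    # Header combines the contextual columns (Parv, Key Event, Section).
    header = f"Parv: {row['Parv']} | Key Event: {row['Key Event']} | Section: {row['Section']}"
    story = row["Story"]
    cut = int(len(story) * excerpt_fraction)
    # Input: an instruction followed by an incomplete excerpt.
    prompt = "Complete the following mythological story.\n" + story[:cut]
    # Target: the header plus the full story, as described above.
    target = header + "\n" + story
    return {"input_text": prompt, "target_text": target}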

Training Hyperparameters

  • Training regime: Fine-tuning with the Hugging Face Transformers Trainer using mixed precision (fp16); see the sketch after this list.
  • Epochs: 5
  • Learning Rate: 3e-5
  • Batch Size: 2 per device
  • Warmup Steps: 50
  • Optimizer: AdamW
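
A sketch of how these hyperparameters map onto the Hugging Face TrainingArguments/Trainer API; output_dir and the train_dataset variable are placeholders, and any setting not listed above uses library defaults rather than values stated in this card.

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./gptneo-mythology-storyteller",  # placeholder path
    num_train_epochs=5,
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    warmup_steps=50,
    fp16=True,  # mixed precision, as noted above
)

# Trainer uses AdamW by default; train_dataset is the tokenized dataset
# built as described in the Preprocessing section.
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()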

Speeds, Sizes, Times

  • Hardware: Fine-tuning was conducted on an NVIDIA T4 GPU (Google Colab).
  • Model Size: ~500 MB (125M parameters stored in FP32)

Evaluation

Testing Data, Factors & Metrics

Testing Data

Evaluation was performed on held-out mythological narrative excerpts derived from the training datasets.

Factors

Evaluation considered narrative coherence, contextual relevance, and creativity in storytelling.

Metrics

The model was evaluated using ROUGE scores (ROUGE-1, ROUGE-2, and ROUGE-L) to measure n-gram overlap between generated text and reference narratives. Qualitative assessments were also conducted.
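
A minimal sketch of how ROUGE-1, ROUGE-2, and ROUGE-L can be computed with the rouge_score package; the reference and generated strings are placeholders, and this is not necessarily the exact evaluation script used.

from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "Reference narrative from the held-out set."  # placeholder
generated = "Narrative produced by the model."            # placeholder

# score(target, prediction) returns precision, recall, and F1 for each ROUGE variant.
scores = scorer.score(reference, generated)
for name, result in scores.items():
    print(name, round(result.fmeasure, 3))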

Results

The model achieved moderate ROUGE scores on test examples (e.g., ROUGE-1 ≈ 0.19, ROUGE-L ≈ 0.09). These scores, along with qualitative feedback, indicate that while the model captures key narrative elements, further refinement may enhance output quality.

Summary

The GPT-Neo Mythology Storyteller generates mythological narratives that include essential contextual details. Users should consider the creative nature of the output and validate its accuracy for downstream applications.

Glossary

  • Parv: The chapter or major division of the narrative (e.g., Sabha Parva).
  • Key Event: A concise summary highlighting a major event within a section.
  • Section: A specific segment or event in the narrative.
  • Story: The full narrative text, including both the prompt and the generated continuation.

Model Card Authors

Samurai719214
