# Model Card for Mistral-7B for Story Generation

## Model Description
This model is a Mistral-7B model fine-tuned on stories from the WritingPrompts dataset.
- Language(s) (NLP): English
- Finetuned from model: m-elio/Mistral-Gutenberg
- Dataset used for fine-tuning: WritingPrompts
## Example of Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.trainer_utils import set_seed

set_seed(42)

model_id = "m-elio/Mistral-Gutenberg-Writing-Prompts"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

instruction_text = "Write a story for the writing prompt provided as input"
input_text = "A story about a dancer who tries to win the National championship."

# Alpaca-style prompt format
prompt = "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n" \
         f"### Instruction:\n{instruction_text}\n\n" \
         f"### Input:\n{input_text}\n\n" \
         f"### Answer:\n"

# Move the tokenized prompt to the same device as the model
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Nucleus sampling; top_k=0 disables top-k filtering
outputs = model.generate(input_ids=input_ids, top_k=0, top_p=0.92, do_sample=True, max_new_tokens=2048)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
```
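
If GPU memory is limited, the same model can optionally be loaded with 4-bit quantization. The snippet below is a minimal sketch, not part of the original example: it assumes the `bitsandbytes` package is installed and that your hardware supports bfloat16 compute.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "m-elio/Mistral-Gutenberg-Writing-Prompts"

# 4-bit NF4 quantization via bitsandbytes (assumed installed; not part of the original card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Prompt construction and generation then proceed exactly as in the example above.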