## Description

This model was trained by fine-tuning the facebook/bart-large-xsum model on the samsum dataset, using the parameters listed under Training Parameters below.
## Development

- Jupyter Notebook: Text Summarization With BART
## Usage

```python
from transformers import pipeline

model = pipeline("summarization", model="adedamolade26/bart-finetuned-samsum")

conversation = '''Jack: Cocktails later?
May: YES!!!
May: You read my mind...
Jack: Possibly a little tightly strung today?
May: Sigh... without question.
Jack: Thought so.
May: A little drink will help!
Jack: Maybe two!
'''

model(conversation)
```
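The pipeline returns a list with one dictionary per input, and the generated summary is in the `summary_text` field. A minimal sketch of reading it out (the exact wording of the summary will vary with the generation settings):

```python
result = model(conversation)

# result is a list of dicts, e.g. [{"summary_text": "..."}]
print(result[0]["summary_text"])
```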
## Training Parameters

```python
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
seed=42,
learning_rate=2e-5,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=2,
weight_decay=0.01,
save_total_limit=2,
num_train_epochs=4,
predict_with_generate=True,
fp16=True,
report_to="none"
```
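These values are the keyword arguments of `Seq2SeqTrainingArguments` from transformers. Below is a minimal sketch of how they could be wired into a `Seq2SeqTrainer`; the output directory name, tokenization lengths, and preprocessing function are illustrative assumptions, not taken from the original notebook:

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Base checkpoint named in the Description above
checkpoint = "facebook/bart-large-xsum"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Loading samsum may require the py7zr package (and trust_remote_code=True
# on newer versions of the datasets library).
raw = load_dataset("samsum")

# Assumed preprocessing: the max lengths here are illustrative, not the
# values used for the published checkpoint.
def preprocess(batch):
    model_inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-samsum",  # placeholder name
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    seed=42,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    report_to="none",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```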
## References

The model training process was adapted from Luis Fernando Torres's Kaggle notebook, Text Summarization with Large Language Models.
## Evaluation results

All metrics are self-reported on the SamSum validation split:

- ROUGE-1: 53.616
- ROUGE-2: 28.914
- ROUGE-L: 44.144
- ROUGE-L Sum: 49.300
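These scores can be checked with the `evaluate` library's ROUGE implementation. The following is a minimal sketch; the evaluated slice size and generation settings are assumptions, and the full validation split with the original decoding settings would be needed to approximate the numbers above:

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

rouge = evaluate.load("rouge")
summarizer = pipeline("summarization", model="adedamolade26/bart-finetuned-samsum")

# Small slice for illustration; loading samsum may require the py7zr package.
val = load_dataset("samsum", split="validation[:32]")

predictions = [out["summary_text"] for out in summarizer(val["dialogue"])]
scores = rouge.compute(predictions=predictions, references=val["summary"])

# evaluate returns fractions in [0, 1]; scale to match the reported values
print({name: round(value * 100, 3) for name, value in scores.items()})
```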