---
pipeline_tag: summarization
datasets:
- samsum
language:
- en
metrics:
- rouge
library_name: transformers
widget:
- text: >
    Rita: I'm so bloody tired. Falling asleep at work. :-(
    Tina: I know what you mean.
    Tina: I keep on nodding off at my keyboard hoping that the boss doesn't
    notice..
    Rita: The time just keeps on dragging on and on and on....
    Rita: I keep on looking at the clock and there's still 4 hours of this
    drudgery to go.
    Tina: Times like these I really hate my work.
    Rita: I'm really not cut out for this level of boredom.
    Tina: Neither am I.
- text: >
    Beatrice: I am in town, shopping. They have nice scarfs in the shop next
    to the church. Do you want one?
    Leo: No, thanks
    Beatrice: But you don't have a scarf.
    Leo: Because I don't need it.
    Beatrice: Last winter you had a cold all the time. A scarf could help.
    Leo: I don't like them.
    Beatrice: Actually, I don't care. You will get a scarf.
    Leo: How understanding of you!
    Beatrice: You were complaining the whole winter that you're going to die.
    I've had enough.
    Leo: Eh.
- text: |
    Jack: Cocktails later?
    May: YES!!!
    May: You read my mind...
    Jack: Possibly a little tightly strung today?
    May: Sigh... without question.
    Jack: Thought so.
    May: A little drink will help!
    Jack: Maybe two!
model-index:
- name: bart-finetuned-samsum
  results:
  - task:
      name: Text Summarization
      type: summarization
    dataset:
      name: SamSum
      type: samsum
    metrics:
    - name: Validation ROUGE-1
      type: rouge-1
      value: 53.6163
    - name: Validation ROUGE-2
      type: rouge-2
      value: 28.914
    - name: Validation ROUGE-L
      type: rougeL
      value: 44.1443
    - name: Validation ROUGE-L Sum
      type: rougeLsum
      value: 49.2995
---
## Description

This model was trained by fine-tuning the [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) model on the [samsum](https://huggingface.co/datasets/samsum) dataset, using the training parameters listed below.
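For reference, a minimal preprocessing sketch is shown below. This is an assumption about the setup rather than the author's exact code: the `max_length` values are illustrative, and `preprocess`/`tokenized` are hypothetical names.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the samsum dialogues and the base model's tokenizer
dataset = load_dataset("samsum")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")

def preprocess(batch):
    # Tokenize dialogues as inputs and reference summaries as labels
    # (max_length values are illustrative assumptions)
    model_inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)
```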
## Development

- Jupyter Notebook: Text Summarization With BART
## Usage

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline
model = pipeline("summarization", model="adedamolade26/bart-finetuned-samsum")

conversation = '''Jack: Cocktails later?
May: YES!!!
May: You read my mind...
Jack: Possibly a little tightly strung today?
May: Sigh... without question.
Jack: Thought so.
May: A little drink will help!
Jack: Maybe two!
'''

# Generate a summary of the dialogue
model(conversation)
```
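The pipeline returns a list with one dictionary per input, each holding the generated summary under the `summary_text` key, e.g. `[{'summary_text': '...'}]`.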
## Training Parameters

```python
evaluation_strategy = "epoch",
save_strategy = 'epoch',
load_best_model_at_end = True,
metric_for_best_model = 'eval_loss',
seed = 42,
learning_rate=2e-5,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=2,
weight_decay=0.01,
save_total_limit=2,
num_train_epochs=4,
predict_with_generate=True,
fp16=True,
report_to="none"
```
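These are keyword arguments for `Seq2SeqTrainingArguments`. A minimal sketch of how they would plug into a `Seq2SeqTrainer` run follows; it is an assumed reconstruction, not the author's script. The `output_dir` value is a placeholder, and `tokenized` refers to the hypothetical preprocessed samsum splits from the sketch above.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-samsum",  # placeholder output directory
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    seed=42,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    report_to="none",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],        # assumed preprocessed samsum train split
    eval_dataset=tokenized["validation"],    # assumed preprocessed samsum validation split
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```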
## References

The model training process was adapted from Luis Fernando Torres's Kaggle notebook: 📝 Text Summarization with Large Language Models