bart-large-cnn-samsum

If you want to use the model, you should try the newer fine-tuned FLAN-T5 version, philschmid/flan-t5-base-samsum, which outscores this BART version by +6 ROUGE-1, achieving 47.24.

TRY philschmid/flan-t5-base-samsum
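
If you would rather use that FLAN-T5 checkpoint, the pipeline call below works the same way; a minimal sketch, only the model id changes:

from transformers import pipeline

# Same pipeline API as for the BART checkpoint; only the model id changes
summarizer = pipeline("summarization", model="philschmid/flan-t5-base-samsum")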

This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning Container.

For more information, see the Hugging Face documentation for Amazon SageMaker and the blog post on the partnership: https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face

Hyperparameters

{
    "dataset_name": "samsum",
    "do_eval": true,
    "do_predict": true,
    "do_train": true,
    "fp16": true,
    "learning_rate": 5e-05,
    "model_name_or_path": "facebook/bart-large-cnn",
    "num_train_epochs": 3,
    "output_dir": "/opt/ml/model",
    "per_device_eval_batch_size": 4,
    "per_device_train_batch_size": 4,
    "predict_with_generate": true,
    "seed": 7
}
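
These hyperparameter names match the 🤗 Transformers run_summarization.py example script, so the job was presumably launched with something like the SageMaker Hugging Face estimator below. This is a minimal sketch: the entry point, source directory, instance type, role, and version pins are assumptions, not values recorded on this card.

from sagemaker.huggingface import HuggingFace

# Hyperparameters from the card, passed straight through to the training script
hyperparameters = {
    "dataset_name": "samsum",
    "do_train": True,
    "do_eval": True,
    "do_predict": True,
    "fp16": True,
    "learning_rate": 5e-05,
    "model_name_or_path": "facebook/bart-large-cnn",
    "num_train_epochs": 3,
    "output_dir": "/opt/ml/model",
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 4,
    "predict_with_generate": True,
    "seed": 7,
}

# Entry point, source_dir, instance type, role, and version pins are
# illustrative assumptions, not taken from the card
huggingface_estimator = HuggingFace(
    entry_point="run_summarization.py",
    source_dir="./examples/pytorch/summarization",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role="<your-sagemaker-execution-role>",
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters=hyperparameters,
)

huggingface_estimator.fit()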

Usage

from transformers import pipeline

# Load the summarization pipeline with this checkpoint
summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")

conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
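
The pipeline forwards generation keyword arguments to model.generate, so you can control summary length; the values below are illustrative:

# Optional: constrain summary length (values are illustrative)
summarizer(conversation, max_length=60, min_length=10, do_sample=False)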

Results

| key            | value   |
| -------------- | ------- |
| eval_rouge1    | 42.621  |
| eval_rouge2    | 21.9825 |
| eval_rougeL    | 33.034  |
| eval_rougeLsum | 39.6783 |
| test_rouge1    | 41.3174 |
| test_rouge2    | 20.8716 |
| test_rougeL    | 32.1337 |
| test_rougeLsum | 38.4149 |
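
A rough way to sanity-check these numbers with the evaluate library; a minimal sketch over a small slice of the samsum test set (the slice size and generation defaults are assumptions, and evaluate reports ROUGE on a 0-1 scale rather than the 0-100 scale above):

from datasets import load_dataset
from transformers import pipeline
import evaluate

summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")
rouge = evaluate.load("rouge")

# Small slice for a quick check; recent datasets versions may need trust_remote_code=True
test_set = load_dataset("samsum", split="test[:16]")
predictions = [out["summary_text"] for out in summarizer(test_set["dialogue"])]

# Multiply by 100 to compare with the table above
print(rouge.compute(predictions=predictions, references=test_set["summary"]))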

Evaluation results

All scores are self-reported on the SAMSum Corpus, a human-annotated dialogue dataset for abstractive summarization.

| Metric  | Validation | Test   |
| ------- | ---------- | ------ |
| ROUGE-1 | 42.621     | 41.317 |
| ROUGE-2 | 21.983     | 20.872 |
| ROUGE-L | 33.034     | 32.134 |

Additional self-reported scores on the samsum test set: ROUGE-1 41.328, ROUGE-2 20.875, ROUGE-L 32.135, ROUGE-LSUM 38.401.