Bert2Bert Summarization with 🤗 EncoderDecoder Framework

This model is a warm-started BERT2BERT model fine-tuned on the CNN/Dailymail summarization dataset.
The model achieves an 18.22 ROUGE-2 score on the CNN/Dailymail test set.
For more details on how the model was fine-tuned, please refer to this notebook.
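The checkpoint can be loaded with 🤗 Transformers' EncoderDecoderModel class. The snippet below is a minimal usage sketch with placeholder article text and default generation settings, not the exact setup used to produce the reported scores:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Load the warm-started BERT2BERT checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")

# A CNN/Dailymail-style news article (placeholder text).
article = "Your news article goes here."

# BERT encoders accept at most 512 tokens, so longer articles are truncated.
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")

# Generate and decode the summary (illustrative default generation settings).
output_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
```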
Dataset used to train patrickvonplaten/bert2bert_cnn_daily_mail: cnn_dailymail
Evaluation results
All results are self-reported on the cnn_dailymail test set.

| Metric | Value |
| --- | --- |
| ROUGE-1 | 41.281 |
| ROUGE-2 | 18.685 |
| ROUGE-L | 28.191 |
| ROUGE-LSUM | 38.087 |
| loss | 2.345 |
| gen_len | 73.833 |
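ROUGE scores like those above can be computed with a standard metric implementation such as the `rouge` metric from the 🤗 `evaluate` library. The snippet below is only a minimal illustration with toy strings, not the original evaluation script:

```python
import evaluate

# Load the ROUGE metric (wraps the rouge_score package).
rouge = evaluate.load("rouge")

# Toy placeholder data; the table above was computed on the full cnn_dailymail test split.
predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

scores = rouge.compute(predictions=predictions, references=references)
# Returns a dict with rouge1, rouge2, rougeL and rougeLsum as fractions in [0, 1];
# the table above reports such values scaled by 100.
print(scores)
```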