|
---
license: mit
base_model: facebook/bart-large-xsum
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: bart_samsum
  results: []
datasets:
- samsum
pipeline_tag: summarization
---
|
|
|
|
|
|
# bart_samsum |
|
|
|
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the [samsum](https://huggingface.co/datasets/samsum) dataset, a corpus of messenger-style dialogues paired with human-written summaries.

It achieves the following results on the evaluation set:

- Loss: 1.4947
- ROUGE-1: 53.3294
- ROUGE-2: 28.6009
- ROUGE-L: 44.2008
- ROUGE-Lsum: 49.2031
- BLEU: 0.0
- METEOR: 0.4887
- Average generated length: 30.1209 tokens
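The checkpoint can be used with the `transformers` summarization pipeline. A minimal sketch: the hub id of this fine-tuned checkpoint is not stated in the card, so the base model id is used below purely as a runnable stand-in; substitute the actual `bart_samsum` checkpoint path or hub id.

```python
from transformers import pipeline

# Placeholder: swap in the actual bart_samsum checkpoint path or hub id;
# the base model id is used here only so the snippet runs as-is.
checkpoint = "facebook/bart-large-xsum"
summarizer = pipeline("summarization", model=checkpoint)

# SAMSum-style messenger dialogue as input.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

result = summarizer(dialogue, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

Greedy decoding (`do_sample=False`) keeps the output deterministic, which matches how the evaluation metrics above are typically computed.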
|
|
|
### Framework versions

- Transformers 4.40.0
- PyTorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1