---
tags:
- summarization
language:
- en
license: mit
model-index:
- name: facebook/bart-large-xsum
results:
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 25.2697
verified: true
- name: ROUGE-2
type: rouge
value: 7.6638
verified: true
- name: ROUGE-L
type: rouge
value: 17.1808
verified: true
- name: ROUGE-LSUM
type: rouge
value: 21.7933
verified: true
- name: loss
type: loss
value: 3.5042972564697266
verified: true
- name: gen_len
type: gen_len
value: 27.4462
verified: true
---
### BART model fine-tuned on XSum

- docs: https://huggingface.co/transformers/model_doc/bart.html
- fine-tuning: `examples/seq2seq/` in the transformers repo (as of Aug 20, 2020)
- metrics: ROUGE > 22 on XSum
- distilled variants: search the Hub for `distilbart`
- paper: [BART: Denoising Sequence-to-Sequence Pre-training](https://arxiv.org/abs/1910.13461)
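A minimal usage sketch with the Transformers `summarization` pipeline; the input article and the generation parameters (`max_length`, `min_length`) are illustrative, not prescribed by this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a summarization pipeline.
summarizer = pipeline("summarization", model="facebook/bart-large-xsum")

# Example input text (illustrative only).
article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 years."
)

# XSum-style output tends to be a single, highly abstractive sentence.
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```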