---
license: apache-2.0
tags:
- generated_from_trainer
- stacked summaries
- xsum
datasets:
- stacked-summaries/stacked-xsum-1024
model-index:
- name: flan-t5-large-stacked-XSUM-1024-WIP-2p8-850-stacked-xsum-1024-evaluated
  results: []
language:
- en
library_name: transformers
pipeline_tag: summarization
---
# flan-t5-large-stacked-XSUM-1024
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the [stacked-summaries/stacked-xsum-1024](https://huggingface.co/datasets/stacked-summaries/stacked-xsum-1024) dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3314
- eval_rouge1: 46.5061
- eval_rouge2: 22.0588
- eval_rougeL: 37.5235
- eval_rougeLsum: 39.0234
- eval_gen_len: 46.1807
- eval_runtime: 9456.3608
- eval_samples_per_second: 1.896
- eval_steps_per_second: 0.119
Note that the evaluation set is `stacked-summaries/stacked-xsum-1024` and **not** `xsum` itself.
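
For reference, ROUGE numbers of this form can be reproduced with the Hugging Face `evaluate` library. This is a minimal sketch only; the exact generation settings behind the figures above are not specified here:

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")

# In a real evaluation, predictions would be model outputs over the
# stacked-summaries/stacked-xsum-1024 validation split and references
# the corresponding gold summaries.
predictions = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

scores = rouge.compute(predictions=predictions, references=references)
# Scale to percentages to match the numbers reported above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```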
## Model description
This model was trained on a stacked dataset to test the benefits of "task-oriented pretraining." Each training example stacks multiple summaries and separates them into independent concepts, so the model must learn to condense and distill the essential information in a text rather than simply mimicking the style of the dataset's reference summaries.
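
As a rough illustration of the stacking idea (the separator token and field names below are assumptions for the sketch, not guaranteed to match the dataset's actual format):

```python
# Toy sketch of "stacking": several source texts and their summaries are
# combined into a single training pair, so the model must produce multiple
# independent summary concepts from one input.
SEP = "[NEXT_CONCEPT]"  # assumed separator; the real token is defined by the dataset


def stack_examples(pairs):
    """Merge (document, summary) pairs into one stacked example."""
    document = " ".join(doc for doc, _ in pairs)
    summary = f" {SEP} ".join(summ for _, summ in pairs)
    return {"document": document, "summary": summary}


stacked = stack_examples([
    ("Text of the first article ...", "One-sentence summary of the first article."),
    ("Text of the second article ...", "One-sentence summary of the second article."),
])
print(stacked["summary"])
# One-sentence summary of the first article. [NEXT_CONCEPT] One-sentence summary of the second article.
```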
## Intended uses & limitations
- max input length (in tokens): 1024
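
A minimal inference sketch with `transformers` (the Hub repo id below is a placeholder; substitute the id this checkpoint is actually published under):

```python
from transformers import pipeline

# Placeholder repo id: replace with the id this checkpoint is published under.
summarizer = pipeline(
    "summarization",
    model="stacked-summaries/flan-t5-large-stacked-XSUM-1024",
)

text = "Long article text goes here ..."
# truncation=True keeps inputs within the 1024-token limit noted above.
result = summarizer(text, truncation=True, max_length=96, no_repeat_ngram_size=3)
print(result[0]["summary_text"])
```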
## Training and evaluation data
Refer to [stacked-summaries/stacked-xsum-1024](https://huggingface.co/datasets/stacked-summaries/stacked-xsum-1024).
The most recent run was trained for approximately three epochs, after which ROUGE scores stabilized.