---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: mt5-small-multi-news
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: multi_news
      type: multi_news
      config: default
      split: validation
      args: default
    metrics:
    - name: Rouge1
      type: rouge
      value: 22.03
    - name: Rouge2
      type: rouge
      value: 6.95
    - name: Rougel
      type: rouge
      value: 18.41
    - name: Rougelsum
      type: rouge
      value: 18.72
language:
- en
---
# mt5-small-multi-news
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the multi_news dataset. It achieves the following results on the evaluation set:
- Loss: 3.2170
- Rouge1: 22.03
- Rouge2: 6.95
- Rougel: 18.41
- Rougelsum: 18.72
## Intended uses & limitations
This model is intended for text summarization. With further training, it could achieve better results.
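As a minimal usage sketch (the model id below is a placeholder for wherever this checkpoint is hosted, and the generation lengths are illustrative, not part of this card):

```python
from transformers import pipeline

# Placeholder repo id; point this at the actual checkpoint location.
summarizer = pipeline("summarization", model="mt5-small-multi-news")

article = "..."  # one or more concatenated news articles
summary = summarizer(article, max_length=128, min_length=30)
print(summary[0]["summary_text"])
```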
## Training and evaluation data
Training used 10,000 samples from the multi_news train split; evaluation used 500 samples from the multi_news validation split.
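A sketch of how such a subset can be drawn with the `datasets` library (split slicing is standard; the preprocessing applied afterwards is not documented in this card):

```python
from datasets import load_dataset

# 10,000 training documents and 500 validation documents from multi_news.
train_ds = load_dataset("multi_news", split="train[:10000]")
eval_ds = load_dataset("multi_news", split="validation[:500]")

print(train_ds)  # features: 'document' (source articles) and 'summary'
```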
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
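As a hedged sketch, these settings roughly correspond to the following `Seq2SeqTrainingArguments`; the output directory name and `predict_with_generate` are assumptions, not stated in this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-multi-news",  # assumed name
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default.
    predict_with_generate=True,  # assumed: needed to compute ROUGE during eval
)
```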
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 5.2732        | 1.0   | 1250 | 3.2170          | 22.03  | 6.95   | 18.41  | 18.72     |
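ROUGE scores of this form are typically computed with the `evaluate` library; a minimal sketch (the predictions and references below are placeholders):

```python
import evaluate

rouge = evaluate.load("rouge")

# Decoded model outputs vs. reference summaries (placeholders).
predictions = ["the summary produced by the model"]
references = ["the reference summary from multi_news"]

scores = rouge.compute(predictions=predictions, references=references)
# Keys: rouge1, rouge2, rougeL, rougeLsum (F-measures in [0, 1]).
print({k: round(v * 100, 2) for k, v in scores.items()})
```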
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3