
bart-large-finetuned-billsum

This model is a fine-tuned version of facebook/bart-large-xsum on an unspecified dataset (the model name suggests BillSum). It achieves the following results on the evaluation set:

  • Loss: 1.1947
  • ROUGE-1: 35.1575
  • ROUGE-2: 27.7021
  • ROUGE-L: 32.9801
  • ROUGE-Lsum: 33.6194
  • Gen Len: 31.9873
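
As a BART sequence-to-sequence checkpoint, it can be used directly with the Transformers summarization pipeline. A minimal usage sketch, assuming the repository id luluw/bart-large-finetuned-billsum from this card; the generation settings are illustrative placeholders rather than tuned values:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint (repository id taken from this model card).
summarizer = pipeline("summarization", model="luluw/bart-large-finetuned-billsum")

bill_text = "Full text of a legislative bill to be condensed goes here."

# max_length/min_length are illustrative; the average generated length reported
# above is roughly 32 tokens.
summary = summarizer(bill_text, max_length=64, min_length=16, truncation=True)
print(summary[0]["summary_text"])
```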

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 5
  • mixed_precision_training: Native AMP
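
These settings map onto a Seq2SeqTrainingArguments configuration roughly as sketched below. This is an assumption-based reconstruction, not the card author's script: the output directory and predict_with_generate flag are guesses, while the remaining values come from the list above (the Adam betas/epsilon match the Trainer defaults).

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training configuration implied by the hyperparameters above.
# output_dir and predict_with_generate are assumptions, not taken from the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-finetuned-billsum",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    fp16=True,                    # "Native AMP" mixed-precision training
    predict_with_generate=True,   # generate summaries during evaluation for ROUGE
)
```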

Training results

| Training Loss | Epoch  | Step  | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| 1.5279        | 0.4221 | 1000  | 1.3638          | 34.2853 | 26.1627 | 31.897  | 32.6399    | 31.999  |
| 1.3237        | 0.8442 | 2000  | 1.2357          | 34.7055 | 26.7936 | 32.3811 | 33.0823    | 31.9973 |
| 1.1594        | 1.2664 | 3000  | 1.2246          | 34.6975 | 27.0964 | 32.5326 | 33.1883    | 31.982  |
| 1.1029        | 1.6885 | 4000  | 1.2092          | 34.4969 | 26.9107 | 32.3644 | 33.0481    | 31.9987 |
| 1.0461        | 2.1106 | 5000  | 1.1769          | 35.2419 | 27.6038 | 33.0339 | 33.6849    | 31.9903 |
| 0.9535        | 2.5327 | 6000  | 1.1958          | 34.7138 | 27.2185 | 32.5573 | 33.2043    | 31.9947 |
| 0.9373        | 2.9548 | 7000  | 1.1600          | 35.1741 | 27.6199 | 32.9618 | 33.6181    | 31.9783 |
| 0.8506        | 3.3770 | 8000  | 1.1940          | 34.8976 | 27.4455 | 32.7581 | 33.4013    | 31.99   |
| 0.8341        | 3.7991 | 9000  | 1.1716          | 35.1191 | 27.6856 | 32.9822 | 33.6221    | 31.9853 |
| 0.8083        | 4.2212 | 10000 | 1.1916          | 35.1839 | 27.7013 | 32.995  | 33.6131    | 31.988  |
| 0.7749        | 4.6433 | 11000 | 1.1947          | 35.1575 | 27.7021 | 32.9801 | 33.6194    | 31.9873 |
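
ROUGE scores in the style of the table above can be computed with the 🤗 Evaluate library. A minimal sketch, with the predictions and references left as placeholders to be filled with the model's generated summaries and the evaluation split's reference summaries:

```python
import evaluate  # the "rouge" metric additionally requires the rouge_score package

rouge = evaluate.load("rouge")

predictions = ["generated summary text goes here"]
references = ["reference summary text goes here"]

scores = rouge.compute(predictions=predictions, references=references)
# Scale from 0–1 to 0–100 to match how the card reports ROUGE.
print({name: round(value * 100, 4) for name, value in scores.items()})
```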

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1