---
license: mit
base_model: facebook/bart-large-xsum
tags:
  - generated_from_trainer
metrics:
  - rouge
model-index:
  - name: text_shortening_model_v37
    results: []
---

# text_shortening_model_v37

This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on an unspecified dataset; a usage sketch follows the metric list below. It achieves the following results on the evaluation set:

- Loss: 2.9472
- Rouge1: 0.4923
- Rouge2: 0.2809
- Rougel: 0.4462
- Rougelsum: 0.4468
- Bert precision: 0.8731
- Bert recall: 0.8773
- Average word count: 9.1021
- Max word count: 15
- Min word count: 5
- Average token count: 16.8198
- % shortened texts with length > 12: 8.7087
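
As a quick way to try the model, the sketch below loads it through the `summarization` pipeline. The repository id `ldos/text_shortening_model_v37` is an assumption inferred from the uploader and model name, and the generation lengths are only illustrative.

```python
# Minimal usage sketch. The repo id below is an assumption; replace it with
# the actual Hub path (or a local checkpoint directory) if it differs.
from transformers import pipeline

shortener = pipeline("summarization", model="ldos/text_shortening_model_v37")

text = (
    "This long product description rambles on about features and benefits "
    "and should be compressed into a short phrase."
)
# max/min lengths are illustrative, loosely matching the word-count stats above
result = shortener(text, max_length=20, min_length=5)
print(result[0]["summary_text"])
```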

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
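
These hyperparameters map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is a reconstruction, not the author's actual script; the `output_dir` and evaluation settings are assumptions (the per-epoch rows in the results table suggest epoch-level evaluation).

```python
# Hedged reconstruction of the training configuration; dataset loading and
# preprocessing are omitted because the card does not name the dataset.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v37",  # assumption
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",   # assumption: results are reported per epoch
    predict_with_generate=True,    # assumption: needed to score generated text
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Transformers defaults,
# so no explicit optimizer arguments are required.
```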

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.5911        | 1.0   | 73   | 1.8586          | 0.4823 | 0.2756 | 0.4416 | 0.4423    | 0.8661         | 0.8758      | 8.9399             | 21             | 4              | 16.9489             | 7.8078                             |
| 0.9246        | 2.0   | 146  | 2.2274          | 0.4039 | 0.2049 | 0.3771 | 0.3764    | 0.8526         | 0.855       | 8.0991             | 13             | 4              | 14.6006             | 0.6006                             |
| 0.7574        | 3.0   | 219  | 1.8752          | 0.4463 | 0.2263 | 0.4072 | 0.4071    | 0.8629         | 0.8654      | 8.3934             | 14             | 5              | 14.3303             | 3.003                              |
| 0.6131        | 4.0   | 292  | 1.8338          | 0.4896 | 0.2691 | 0.4451 | 0.4456    | 0.8747         | 0.8711      | 7.982              | 13             | 4              | 13.9249             | 0.3003                             |
| 0.4422        | 5.0   | 365  | 1.8257          | 0.492  | 0.2727 | 0.4499 | 0.4504    | 0.8734         | 0.875       | 8.5165             | 16             | 5              | 14.4595             | 3.003                              |
| 0.4227        | 6.0   | 438  | 2.1249          | 0.4666 | 0.2475 | 0.418  | 0.4178    | 0.8657         | 0.8697      | 9.3874             | 16             | 4              | 16.9399             | 8.4084                             |
| 0.3714        | 7.0   | 511  | 2.1010          | 0.4838 | 0.274  | 0.436  | 0.4364    | 0.869          | 0.8754      | 9.4264             | 16             | 5              | 14.9369             | 9.009                              |
| 0.2638        | 8.0   | 584  | 2.0803          | 0.489  | 0.2799 | 0.4404 | 0.4404    | 0.8701         | 0.8751      | 8.976              | 15             | 4              | 15.5736             | 8.4084                             |
| 0.2103        | 9.0   | 657  | 2.1093          | 0.4888 | 0.2722 | 0.4381 | 0.438     | 0.872          | 0.8751      | 9.1952             | 16             | 5              | 16.7447             | 9.9099                             |
| 0.1475        | 10.0  | 730  | 2.3159          | 0.4684 | 0.2597 | 0.4243 | 0.4244    | 0.8632         | 0.8721      | 9.4234             | 15             | 5              | 16.8288             | 11.7117                            |
| 0.122         | 11.0  | 803  | 2.4090          | 0.4845 | 0.2729 | 0.4421 | 0.4427    | 0.8721         | 0.8748      | 8.8018             | 16             | 5              | 16.4264             | 5.7057                             |
| 0.0915        | 12.0  | 876  | 2.6598          | 0.4838 | 0.2691 | 0.4376 | 0.437     | 0.8698         | 0.8742      | 9.1652             | 16             | 5              | 16.9009             | 10.2102                            |
| 0.073         | 13.0  | 949  | 2.5266          | 0.4973 | 0.2861 | 0.4479 | 0.4495    | 0.8743         | 0.8776      | 9.0631             | 16             | 5              | 16.5796             | 8.4084                             |
| 0.0526        | 14.0  | 1022 | 2.7673          | 0.4955 | 0.2821 | 0.4464 | 0.4463    | 0.8716         | 0.8791      | 9.4685             | 16             | 5              | 17.2012             | 10.5105                            |
| 0.042         | 15.0  | 1095 | 2.9472          | 0.4923 | 0.2809 | 0.4462 | 0.4468    | 0.8731         | 0.8773      | 9.1021             | 15             | 5              | 16.8198             | 8.7087                             |
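
The ROUGE and BERTScore columns above could be reproduced with the `evaluate` library as sketched below. This is an assumption about tooling (the card's metadata lists only `rouge`), and the prediction/reference lists are placeholders.

```python
# Hedged sketch of the metric computation; `predictions` and `references`
# are placeholders standing in for model outputs and gold shortenings.
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

predictions = ["short version of the text"]
references = ["the short gold text"]

rouge_scores = rouge.compute(predictions=predictions, references=references)
bert_scores = bertscore.compute(predictions=predictions, references=references, lang="en")

print(rouge_scores["rouge1"], rouge_scores["rougeL"])
# BERTScore returns per-example lists; averaging gives card-style numbers
print(sum(bert_scores["precision"]) / len(bert_scores["precision"]))
```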

### Framework versions

- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3