---
license: apache-2.0
base_model: t5-base
tags:
  - generated_from_trainer
model-index:
  - name: text_shortening_model_v80
    results: []
---

# text_shortening_model_v80

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

- Loss: 1.1772
- Bert precision: 0.8996
- Bert recall: 0.9009
- Bert f1-score: 0.8998
- Average word count: 6.8393
- Max word count: 16
- Min word count: 3
- Average token count: 11.092
- % shortened texts with length > 12: 0.9816
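
The checkpoint can be loaded like any other T5 sequence-to-sequence model. A minimal inference sketch, assuming the model is published on the Hub as `ldos/text_shortening_model_v80` (the hub id, and the absence of a task prefix, are assumptions not stated in this card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hub id is an assumption; point this at wherever the checkpoint actually lives.
model_id = "ldos/text_shortening_model_v80"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# If training used a T5-style task prefix (e.g. "summarize: "), prepend it here.
text = "A long product headline that should be compressed into a shorter version."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Outputs above average about 11 tokens, so a small generation budget suffices.
output_ids = model.generate(**inputs, max_new_tokens=20, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```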

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction as `Seq2SeqTrainingArguments` follows the list):

- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
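
These settings map directly onto `Seq2SeqTrainingArguments` from `transformers`. A hedged reconstruction; the `output_dir` and the per-epoch evaluation strategy are assumptions (though the results table below does report one evaluation per epoch):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v80",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    # Adam betas/epsilon below are the transformers AdamW defaults,
    # matching the optimizer settings listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    evaluation_strategy="epoch",  # assumed from the per-epoch results table
)
```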

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bert precision | Bert recall | Bert f1-score | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.3549 | 1.0 | 30 | 1.0184 | 0.8861 | 0.887 | 0.886 | 7.016 | 18 | 2 | 11.2061 | 2.6994 |
| 0.9772 | 2.0 | 60 | 0.9395 | 0.889 | 0.8903 | 0.8892 | 6.9436 | 16 | 2 | 11.1276 | 1.8405 |
| 0.8398 | 3.0 | 90 | 0.9211 | 0.8904 | 0.8916 | 0.8906 | 6.9534 | 16 | 2 | 11.119 | 2.3313 |
| 0.7412 | 4.0 | 120 | 0.9235 | 0.8926 | 0.8945 | 0.8931 | 6.9239 | 16 | 2 | 11.1926 | 1.5951 |
| 0.6652 | 5.0 | 150 | 0.9173 | 0.8936 | 0.8968 | 0.8947 | 7.0442 | 16 | 3 | 11.4135 | 1.5951 |
| 0.5992 | 6.0 | 180 | 0.9270 | 0.8962 | 0.8982 | 0.8968 | 6.9485 | 16 | 3 | 11.2209 | 1.8405 |
| 0.5381 | 7.0 | 210 | 0.9565 | 0.8948 | 0.8962 | 0.8951 | 6.8209 | 16 | 2 | 11.1043 | 1.3497 |
| 0.4899 | 8.0 | 240 | 0.9812 | 0.8956 | 0.8984 | 0.8966 | 7.0098 | 16 | 2 | 11.2282 | 1.9632 |
| 0.4528 | 9.0 | 270 | 0.9842 | 0.8954 | 0.8979 | 0.8962 | 6.9791 | 16 | 3 | 11.2773 | 1.7178 |
| 0.4233 | 10.0 | 300 | 1.0057 | 0.897 | 0.8977 | 0.8969 | 6.8294 | 16 | 2 | 11.0589 | 1.5951 |
| 0.3971 | 11.0 | 330 | 1.0276 | 0.8967 | 0.8976 | 0.8967 | 6.8761 | 16 | 2 | 11.1411 | 1.1043 |
| 0.3713 | 12.0 | 360 | 1.0316 | 0.8962 | 0.8958 | 0.8955 | 6.7583 | 16 | 2 | 10.9816 | 1.1043 |
| 0.3428 | 13.0 | 390 | 1.0775 | 0.898 | 0.8982 | 0.8977 | 6.838 | 16 | 2 | 11.092 | 1.1043 |
| 0.3256 | 14.0 | 420 | 1.0831 | 0.8987 | 0.8993 | 0.8985 | 6.8552 | 16 | 2 | 11.1141 | 1.227 |
| 0.3116 | 15.0 | 450 | 1.0982 | 0.8979 | 0.899 | 0.898 | 6.8638 | 16 | 2 | 11.119 | 1.1043 |
| 0.2958 | 16.0 | 480 | 1.1273 | 0.8965 | 0.8991 | 0.8974 | 6.9546 | 16 | 3 | 11.238 | 1.5951 |
| 0.2838 | 17.0 | 510 | 1.1205 | 0.8984 | 0.9003 | 0.8989 | 6.9583 | 16 | 3 | 11.227 | 1.4724 |
| 0.2683 | 18.0 | 540 | 1.1435 | 0.8978 | 0.8991 | 0.898 | 6.8847 | 16 | 2 | 11.1178 | 1.227 |
| 0.2594 | 19.0 | 570 | 1.1495 | 0.899 | 0.8986 | 0.8983 | 6.7939 | 16 | 2 | 11.0307 | 0.8589 |
| 0.2522 | 20.0 | 600 | 1.1621 | 0.8993 | 0.8992 | 0.8988 | 6.7767 | 16 | 3 | 11.0294 | 0.7362 |
| 0.2457 | 21.0 | 630 | 1.1693 | 0.8991 | 0.9017 | 0.9 | 6.9006 | 16 | 3 | 11.2 | 0.9816 |
| 0.2442 | 22.0 | 660 | 1.1728 | 0.8986 | 0.9008 | 0.8992 | 6.8773 | 16 | 3 | 11.1644 | 0.9816 |
| 0.235 | 23.0 | 690 | 1.1740 | 0.8986 | 0.9002 | 0.899 | 6.8564 | 16 | 3 | 11.1178 | 0.9816 |
| 0.2319 | 24.0 | 720 | 1.1751 | 0.8995 | 0.9008 | 0.8997 | 6.8417 | 16 | 3 | 11.0908 | 0.9816 |
| 0.2315 | 25.0 | 750 | 1.1772 | 0.8996 | 0.9009 | 0.8998 | 6.8393 | 16 | 3 | 11.092 | 0.9816 |
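
The word- and token-count columns are not built-in `Trainer` metrics. A plausible sketch of how such a metric function could be written with the `evaluate` library (the implementation is an assumption; only the metric definitions are taken from the table):

```python
import numpy as np
import evaluate

bertscore = evaluate.load("bertscore")  # requires the bert-score package

def shortening_metrics(preds, refs, tokenizer):
    """Compute the card's metrics for decoded predictions vs. references."""
    bs = bertscore.compute(predictions=preds, references=refs, lang="en")
    word_counts = [len(p.split()) for p in preds]
    token_counts = [len(tokenizer(p).input_ids) for p in preds]
    return {
        "bert_precision": float(np.mean(bs["precision"])),
        "bert_recall": float(np.mean(bs["recall"])),
        "bert_f1": float(np.mean(bs["f1"])),
        "avg_word_count": float(np.mean(word_counts)),
        "max_word_count": max(word_counts),
        "min_word_count": min(word_counts),
        "avg_token_count": float(np.mean(token_counts)),
        # Percentage of shortened texts still longer than 12 words.
        "pct_len_gt_12": 100.0 * float(np.mean([c > 12 for c in word_counts])),
    }
```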

### Framework versions

- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3