
text_shortening_model_v38

This model is a fine-tuned version of facebook/bart-large-xsum on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 32.2806
  • Rouge1: 0.0
  • Rouge2: 0.0
  • Rougel: 0.0
  • Rougelsum: 0.0
  • Bert precision: 0.6712
  • Bert recall: 0.6737
  • Average word count: 1.0
  • Max word count: 1
  • Min word count: 1
  • Average token count: 62.0
  • % shortened texts with length > 12: 0.0

Model description

More information needed

Intended uses & limitations

More information needed
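
Although the intended use is not documented, the checkpoint inherits the standard seq2seq interface of its facebook/bart-large-xsum base, so it can be loaded with the Hugging Face transformers summarization pipeline. A minimal inference sketch (the input sentence and the generation limits are purely illustrative, not values taken from this card):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; it shares the seq2seq interface of its
# facebook/bart-large-xsum base, so the summarization pipeline applies.
shortener = pipeline("summarization", model="ldos/text_shortening_model_v38")

# Illustrative input only; no example inputs ship with the card.
text = "The quick brown fox jumped over the lazy dog while the sun was setting behind the hills."

# max_length / min_length are illustrative generation limits, not settings from the training script.
result = shortener(text, max_length=20, min_length=2)
print(result[0]["summary_text"])
```

Given the evaluation figures above (average output length of one word and ROUGE scores of 0.0), outputs from this checkpoint are likely degenerate, so this sketch is mainly useful for verifying that the model loads and runs.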

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent Seq2SeqTrainingArguments configuration is sketched after the list):

  • learning_rate: 0.0003
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
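
The training script itself is not included with the card. The following is a minimal sketch of a Seq2SeqTrainingArguments configuration that mirrors the listed settings; the output directory and the per-epoch evaluation strategy are assumptions, and the Adam betas/epsilon are the transformers AdamW defaults, which match the values reported above:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v38",  # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption: the results table logs one validation row per epoch
)
```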

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 2.9479 | 1.0 | 145 | 6.0655 | 0.1154 | 0.0035 | 0.0998 | 0.0997 | 0.6949 | 0.7234 | 7.8649 | 46 | 2 | 47.0901 | 8.4084 |
| 3.2977 | 2.0 | 290 | 7.9855 | 0.0026 | 0.0 | 0.0026 | 0.0026 | 0.6628 | 0.6805 | 3.0 | 3 | 3 | 62.0 | 0.0 |
| 2.7673 | 3.0 | 435 | 18.0330 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6716 | 0.677 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.7007 | 4.0 | 580 | 16.7534 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6617 | 0.6651 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.6519 | 5.0 | 725 | 19.3665 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6636 | 0.6599 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.6334 | 6.0 | 870 | 19.0112 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6583 | 0.6639 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.5888 | 7.0 | 1015 | 20.8393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6602 | 0.6737 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.5665 | 8.0 | 1160 | 20.7588 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6503 | 0.6688 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.546 | 9.0 | 1305 | 23.6869 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6646 | 0.6703 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.5334 | 10.0 | 1450 | 26.1563 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6693 | 0.6685 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.5194 | 11.0 | 1595 | 26.2698 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6682 | 0.6743 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.5152 | 12.0 | 1740 | 30.3763 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6582 | 0.6645 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.5005 | 13.0 | 1885 | 26.7690 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6693 | 0.6597 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.4942 | 14.0 | 2030 | 26.8399 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6655 | 0.6674 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.4766 | 15.0 | 2175 | 26.8788 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6689 | 0.671 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.4712 | 16.0 | 2320 | 29.2279 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6693 | 0.6669 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.46 | 17.0 | 2465 | 31.1020 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6675 | 0.6655 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.4493 | 18.0 | 2610 | 31.4642 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6655 | 0.6737 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.4419 | 19.0 | 2755 | 31.2733 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6593 | 0.6629 | 1.0 | 1 | 1 | 62.0 | 0.0 |
| 2.4323 | 20.0 | 2900 | 32.2806 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6712 | 0.6737 | 1.0 | 1 | 1 | 62.0 | 0.0 |
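
The evaluation script is not included with the card. Assuming the ROUGE and BERTScore columns were produced with the Hugging Face evaluate library (and that the word-count columns are simple whitespace token counts), a sketch of how such metrics can be computed from decoded predictions and references (the predictions and references below are placeholders):

```python
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

# Placeholder outputs/targets purely for illustration.
predictions = ["short text"]
references = ["a short version of the original text"]

rouge_scores = rouge.compute(predictions=predictions, references=references)
bert_scores = bertscore.compute(predictions=predictions, references=references, lang="en")

# Word-count style metrics, assuming whitespace tokenisation.
avg_word_count = sum(len(p.split()) for p in predictions) / len(predictions)
pct_over_12 = 100 * sum(len(p.split()) > 12 for p in predictions) / len(predictions)

print(rouge_scores["rouge1"], rouge_scores["rouge2"], rouge_scores["rougeL"], rouge_scores["rougeLsum"])
print(sum(bert_scores["precision"]) / len(bert_scores["precision"]),
      sum(bert_scores["recall"]) / len(bert_scores["recall"]))
print(avg_word_count, pct_over_12)
```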

Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3