Tran committed
Commit e4831ea · verified · 1 Parent(s): efb35a4

End of training
README.md CHANGED
@@ -4,8 +4,6 @@ tags:
 - generated_from_trainer
 datasets:
 - pubmed-summarization
-metrics:
-- rouge
 model-index:
 - name: lsg-bart-base-16384-pubmed-finetuned-pubmed-16394
   results: []
@@ -14,16 +12,21 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/thanhkt27507-vsu/huggingface/runs/uh54ybef)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/thanhkt27507-vsu/huggingface/runs/056l8muj)
 # lsg-bart-base-16384-pubmed-finetuned-pubmed-16394
 
 This model is a fine-tuned version of [ccdv/lsg-bart-base-16384-pubmed](https://huggingface.co/ccdv/lsg-bart-base-16384-pubmed) on the pubmed-summarization dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3006
-- Rouge1: 0.406
-- Rouge2: 0.1651
-- Rougel: 0.2662
-- Rougelsum: 0.3547
+- eval_loss: 5.6482
+- eval_rouge1: 0.451
+- eval_rouge2: 0.2128
+- eval_rougeL: 0.2772
+- eval_rougeLsum: 0.4174
+- eval_runtime: 484.657
+- eval_samples_per_second: 0.413
+- eval_steps_per_second: 0.206
+- epoch: 1.6
+- step: 100
 
 ## Model description
 
@@ -42,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 8e-05
+- learning_rate: 1e-05
 - train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
@@ -51,25 +54,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 6
-
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
-|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
-| 12.4753 | 0.48 | 30 | 9.7449 | 0.3279 | 0.1346 | 0.2161 | 0.2938 |
-| 7.5448 | 0.96 | 60 | 4.3875 | 0.3249 | 0.1325 | 0.215 | 0.2898 |
-| 3.8253 | 1.44 | 90 | 2.4496 | 0.3388 | 0.1393 | 0.2243 | 0.301 |
-| 2.2909 | 1.92 | 120 | 1.3377 | 0.3446 | 0.1424 | 0.2263 | 0.3069 |
-| 1.1711 | 2.4 | 150 | 0.5844 | 0.3476 | 0.1447 | 0.2284 | 0.3093 |
-| 0.4808 | 2.88 | 180 | 0.3227 | 0.3677 | 0.1532 | 0.2395 | 0.3284 |
-| 0.2757 | 3.36 | 210 | 0.2896 | 0.3705 | 0.1465 | 0.2385 | 0.3282 |
-| 0.2491 | 3.84 | 240 | 0.2863 | 0.3975 | 0.1666 | 0.2617 | 0.3517 |
-| 0.2346 | 4.32 | 270 | 0.2911 | 0.3962 | 0.1663 | 0.262 | 0.3517 |
-| 0.2207 | 4.8 | 300 | 0.2919 | 0.3918 | 0.1614 | 0.259 | 0.3466 |
-| 0.2098 | 5.28 | 330 | 0.2989 | 0.3955 | 0.1611 | 0.2568 | 0.3495 |
-| 0.1985 | 5.76 | 360 | 0.3006 | 0.406 | 0.1651 | 0.2662 | 0.3547 |
-
+- num_epochs: 9
 
 ### Framework versions
 
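The README hunks record a linear scheduler (`lr_scheduler_type: linear`) with 500 warmup steps and a 1e-05 peak learning rate. A minimal sketch of that schedule shape — `total_steps` is a placeholder here, since the diff does not record the dataset size from which transformers would derive it:

```python
def linear_schedule_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=2000):
    """Linear warmup from 0 to base_lr, then linear decay back to 0.

    base_lr and warmup_steps come from the README hunk; total_steps is
    a placeholder assumption, not a value recorded in this commit.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = max(0.0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)

# The peak learning rate is reached exactly at the end of warmup.
peak = linear_schedule_lr(500)
```

The warmup ramp explains why the logged learning rate climbs over the first 500 optimizer steps before decaying.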
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2a5895e123cb0da321d4f5c854701ff55ba0b1696d0965f985f23c6ba2dc2302
+oid sha256:7992feaeda807fa29cc86c1e3a712b0a1d88c03fe1cb066befffc849fecc9500
 size 653857508
runs/Jul19_10-30-21_3be8174ee72b/events.out.tfevents.1721385021.3be8174ee72b.31.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b9f176e6be2c97e402311ef9bb47d192ee0ec61fd79a0f73412a56ea942ccfe
+size 8341
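The binary files in this commit are stored as git-lfs pointers with the three-line version/oid/size format shown in the hunks. A small sketch of reading such a pointer (the helper name is mine, not part of any library):

```python
def parse_lfs_pointer(text):
    """Split a git-lfs pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new model.safetensors pointer from this commit.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:7992feaeda807fa29cc86c1e3a712b0a1d88c03fe1cb066befffc849fecc9500\n"
    "size 653857508\n"
)
info = parse_lfs_pointer(pointer)
```

Note that only the pointer changed in this commit: the weight file's sha256 oid differs, while its size (653857508 bytes) is identical before and after.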
tokenizer_config.json CHANGED
@@ -48,7 +48,7 @@
   "eos_token": "</s>",
   "errors": "replace",
   "mask_token": "<mask>",
-  "max_length": 512,
+  "max_length": 4096,
   "model_max_length": 16384,
   "pad_token": "<pad>",
   "sep_token": "</s>",
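The tokenizer change raises the default generation `max_length` from 512 to 4096 tokens while leaving `model_max_length` at the LSG model's 16384-token input window. The same edit could be made programmatically; a sketch over just the fields visible in the hunk (the real tokenizer_config.json contains more keys):

```python
import json

# Only the fields shown in the diff hunk; the full file has more keys.
config = {
    "eos_token": "</s>",
    "errors": "replace",
    "mask_token": "<mask>",
    "max_length": 512,
    "model_max_length": 16384,
    "pad_token": "<pad>",
    "sep_token": "</s>",
}

config["max_length"] = 4096  # the change this commit makes
updated = json.dumps(config, indent=2)
```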
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a09d24bdf34a09b0d0ed8efe040563dbacb630a2557e7f6ba3730bbbf9b21bb8
+oid sha256:99c3adba772b0b22dc1b0279283daa3e869a9ae4e216aa76ad3171065715e55c
 size 4923
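As a quick sanity check, the evaluation throughput numbers added to the README are internally consistent with the eval_batch_size of 2 in the hyperparameters: runtime times samples/sec gives the evaluation-set size, and dividing that by runtime times steps/sec recovers the batch size.

```python
# Values copied from the updated README.
eval_runtime = 484.657          # seconds
samples_per_second = 0.413
steps_per_second = 0.206

n_samples = eval_runtime * samples_per_second  # evaluation examples, ~200
n_steps = eval_runtime * steps_per_second      # evaluation steps, ~100
batch_size = n_samples / n_steps               # ~2, matching eval_batch_size
```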