silmi224 committed
Commit a8e4b56
1 Parent(s): 997e5b8

Training complete

README.md ADDED
@@ -0,0 +1,97 @@
---
base_model: silmi224/finetune-led-35000
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: exp2-led-risalah_data_v7-fix
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/silmiaulia/huggingface/runs/2a3srq9p)

# exp2-led-risalah_data_v7-fix

This model is a fine-tuned version of [silmi224/finetune-led-35000](https://huggingface.co/silmi224/finetune-led-35000) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 1.6801
- Rouge1: 20.0364
- Rouge2: 9.57
- Rougel: 13.9743
- Rougelsum: 14.0563

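The snippet below is a minimal inference sketch for this checkpoint. It assumes the model is published under `silmi224/exp2-led-risalah_data_v7-fix` and follows the usual LED convention of placing global attention on the first token; the generation settings mirror the `generation_config.json` shipped with this commit.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id assumed from the model name above; point to a local path if you trained locally.
model_id = "silmi224/exp2-led-risalah_data_v7-fix"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "Long meeting-minutes text to be summarized goes here."
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=4096)

# LED expects global attention on at least the first token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=128,
    min_length=40,
    num_beams=2,
    no_repeat_ngram_size=3,
    length_penalty=2.0,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
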
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `Seq2SeqTrainingArguments` equivalent is sketched after the list):

- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 30
- mixed_precision_training: Native AMP

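A rough `Seq2SeqTrainingArguments` equivalent of the settings above (a reconstruction for illustration only; the actual training script, dataset preparation, and `compute_metrics` are not part of this card, and options marked as assumptions are guesses):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="exp2-led-risalah_data_v7-fix",  # illustrative output directory
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,   # gives the effective train batch size of 8
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_steps=300,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
    predict_with_generate=True,      # assumption: required to compute ROUGE during evaluation
    evaluation_strategy="epoch",     # assumption: the results table shows per-epoch evaluation
)
```
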
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.8706 | 1.0 | 10 | 3.3282 | 9.2634 | 1.825 | 6.2857 | 6.6749 |
| 3.5173 | 2.0 | 20 | 2.8713 | 9.381 | 1.5365 | 6.5965 | 6.6722 |
| 3.0587 | 3.0 | 30 | 2.5101 | 12.3761 | 3.5034 | 8.6155 | 8.7913 |
| 2.7254 | 4.0 | 40 | 2.2919 | 14.8916 | 4.9071 | 10.0 | 9.9487 |
| 2.504 | 5.0 | 50 | 2.1490 | 14.5316 | 4.9407 | 9.6973 | 9.5973 |
| 2.3306 | 6.0 | 60 | 2.0516 | 15.6234 | 5.419 | 10.6929 | 10.671 |
| 2.1991 | 7.0 | 70 | 1.9705 | 16.9222 | 6.1531 | 10.3785 | 10.4171 |
| 2.0922 | 8.0 | 80 | 1.9114 | 15.9531 | 6.007 | 10.2455 | 10.2734 |
| 2.0108 | 9.0 | 90 | 1.8601 | 16.3146 | 6.2786 | 10.632 | 10.6027 |
| 1.9243 | 10.0 | 100 | 1.8352 | 18.1771 | 6.6919 | 11.1811 | 11.2366 |
| 1.8675 | 11.0 | 110 | 1.7865 | 17.2554 | 7.4135 | 10.5322 | 10.5689 |
| 1.8066 | 12.0 | 120 | 1.7520 | 15.8483 | 7.1825 | 10.7059 | 10.7344 |
| 1.7476 | 13.0 | 130 | 1.7341 | 16.0049 | 6.6876 | 10.9744 | 10.9918 |
| 1.6911 | 14.0 | 140 | 1.7126 | 17.6921 | 8.9076 | 12.8474 | 12.8966 |
| 1.6388 | 15.0 | 150 | 1.6960 | 19.7192 | 9.1168 | 13.3649 | 13.3949 |
| 1.5902 | 16.0 | 160 | 1.6783 | 20.7583 | 9.7459 | 14.1533 | 14.1794 |
| 1.5433 | 17.0 | 170 | 1.6476 | 19.4203 | 9.4624 | 13.3403 | 13.401 |
| 1.4992 | 18.0 | 180 | 1.6450 | 18.74 | 8.8791 | 13.3925 | 13.3709 |
| 1.4614 | 19.0 | 190 | 1.6335 | 19.476 | 9.0282 | 13.5223 | 13.4966 |
| 1.4216 | 20.0 | 200 | 1.6246 | 17.6435 | 7.9777 | 13.1255 | 13.1599 |
| 1.3842 | 21.0 | 210 | 1.6102 | 18.6282 | 8.511 | 12.8825 | 12.7954 |
| 1.3479 | 22.0 | 220 | 1.6200 | 18.066 | 8.4414 | 12.467 | 12.4232 |
| 1.3087 | 23.0 | 230 | 1.6350 | 17.8312 | 8.6603 | 12.522 | 12.511 |
| 1.2752 | 24.0 | 240 | 1.6186 | 18.5374 | 9.7206 | 13.0955 | 13.0266 |
| 1.2434 | 25.0 | 250 | 1.6219 | 18.232 | 7.9904 | 12.7029 | 12.6916 |
| 1.2046 | 26.0 | 260 | 1.6393 | 17.4585 | 7.2075 | 12.5202 | 12.4766 |
| 1.1716 | 27.0 | 270 | 1.6139 | 19.6477 | 9.9919 | 14.3408 | 14.346 |
| 1.1388 | 28.0 | 280 | 1.6416 | 19.7279 | 8.8207 | 13.6708 | 13.7072 |
| 1.1083 | 29.0 | 290 | 1.6485 | 19.1252 | 9.2133 | 13.6003 | 13.6412 |
| 1.0745 | 30.0 | 300 | 1.6801 | 20.0364 | 9.57 | 13.9743 | 14.0563 |

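The Rouge1/Rouge2/Rougel/Rougelsum columns are ROUGE scores on a 0-100 scale (typically the F-measure). A minimal sketch of computing such scores with the `evaluate` library is shown below; the exact `compute_metrics` function used for this run is not included in the card.

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["summary produced by the model"],
    references=["reference summary written by a human"],
)
# `evaluate` returns fractions in [0, 1]; the table above reports them multiplied by 100.
print({name: round(value * 100, 4) for name, value in scores.items()})
```
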
### Framework versions

- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,14 @@
{
  "bos_token_id": 0,
  "decoder_start_token_id": 2,
  "early_stopping": true,
  "eos_token_id": 2,
  "length_penalty": 2.0,
  "max_length": 128,
  "min_length": 40,
  "no_repeat_ngram_size": 3,
  "num_beams": 2,
  "pad_token_id": 1,
  "transformers_version": "4.42.3",
  "use_cache": false
}
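These defaults are picked up automatically when calling `model.generate` on the checkpoint. As an illustrative sketch, the same behavior can also be expressed explicitly with a `GenerationConfig` built from the values above:

```python
from transformers import GenerationConfig

# Mirror of generation_config.json above, for explicit use with model.generate(...).
gen_config = GenerationConfig(
    bos_token_id=0,
    decoder_start_token_id=2,
    early_stopping=True,
    eos_token_id=2,
    length_penalty=2.0,
    max_length=128,
    min_length=40,
    no_repeat_ngram_size=3,
    num_beams=2,
    pad_token_id=1,
    use_cache=False,
)
# summary_ids = model.generate(**inputs, generation_config=gen_config)
```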
runs/Jul26_12-31-17_32b92d46a283/events.out.tfevents.1721997086.32b92d46a283.34.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:775b0c08d05dcb44779dd14bd5b40ec6a72d30709feb3ac2fc4ede38f280fb70
- size 25622
+ oid sha256:e90ed58db4234fc6f9531e639df54d0ff078d117955dd51dc9ed83da71e65f99
+ size 26450
runs/Jul26_12-31-17_32b92d46a283/events.out.tfevents.1722012175.32b92d46a283.34.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c55de99330a8b0716feba9eeb7776d0c2d2e18155c43294d1d239f03ce616dfc
size 562