gweltou committed
Commit
9169d68
1 Parent(s): a6dd643

Model save

README.md CHANGED
@@ -1,17 +1,8 @@
 ---
-language:
-- br
 license: apache-2.0
+base_model: distilbert/distilgpt2
 tags:
 - generated_from_trainer
-base_model: distilbert/distilgpt2
-datasets:
-- gweltou/text-br
-- gweltou/wikipedia-br-20240325
-widget:
-- text: E-kichen Plougerne
-- text: Emañ Katell
-- text: Yann a oa o
 model-index:
 - name: tiny-gpt2-br
   results: []
@@ -24,12 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on the None dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 3.2672
-- eval_runtime: 134.3513
-- eval_samples_per_second: 259.023
-- eval_steps_per_second: 16.189
-- epoch: 3.42
-- step: 134000
+- Loss: 3.2128
 
 ## Model description
 
@@ -48,18 +34,72 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0008
-- train_batch_size: 8
-- eval_batch_size: 16
+- learning_rate: 0.0007
+- train_batch_size: 32
+- eval_batch_size: 64
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 4
+- num_epochs: 5
+
+### Training results
+
+| Training Loss | Epoch | Step  | Validation Loss |
+|:-------------:|:-----:|:-----:|:---------------:|
+| 5.8959        | 0.1   | 1000  | 4.8993          |
+| 4.6543        | 0.2   | 2000  | 4.4073          |
+| 4.329         | 0.31  | 3000  | 4.1635          |
+| 4.1446        | 0.41  | 4000  | 4.0202          |
+| 4.0133        | 0.51  | 5000  | 3.9119          |
+| 3.9236        | 0.61  | 6000  | 3.8271          |
+| 3.8622        | 0.72  | 7000  | 3.7583          |
+| 3.7928        | 0.82  | 8000  | 3.7028          |
+| 3.7379        | 0.92  | 9000  | 3.6607          |
+| 3.672         | 1.02  | 10000 | 3.6198          |
+| 3.5527        | 1.12  | 11000 | 3.5873          |
+| 3.5428        | 1.23  | 12000 | 3.5617          |
+| 3.514         | 1.33  | 13000 | 3.5328          |
+| 3.4959        | 1.43  | 14000 | 3.4995          |
+| 3.4762        | 1.53  | 15000 | 3.4816          |
+| 3.4621        | 1.63  | 16000 | 3.4536          |
+| 3.4392        | 1.74  | 17000 | 3.4368          |
+| 3.4149        | 1.84  | 18000 | 3.4150          |
+| 3.4006        | 1.94  | 19000 | 3.3950          |
+| 3.3313        | 2.04  | 20000 | 3.3951          |
+| 3.228         | 2.15  | 21000 | 3.3820          |
+| 3.223         | 2.25  | 22000 | 3.3694          |
+| 3.2234        | 2.35  | 23000 | 3.3470          |
+| 3.215         | 2.45  | 24000 | 3.3350          |
+| 3.2037        | 2.55  | 25000 | 3.3257          |
+| 3.2265        | 2.66  | 26000 | 3.3122          |
+| 3.2012        | 2.76  | 27000 | 3.2943          |
+| 3.1827        | 2.86  | 28000 | 3.2816          |
+| 3.1801        | 2.96  | 29000 | 3.2706          |
+| 3.0519        | 3.06  | 30000 | 3.2998          |
+| 3.0003        | 3.17  | 31000 | 3.2847          |
+| 3.0091        | 3.27  | 32000 | 3.2764          |
+| 3.0007        | 3.37  | 33000 | 3.2682          |
+| 3.0013        | 3.47  | 34000 | 3.2586          |
+| 2.9951        | 3.58  | 35000 | 3.2452          |
+| 2.9943        | 3.68  | 36000 | 3.2452          |
+| 2.9941        | 3.78  | 37000 | 3.2311          |
+| 2.9839        | 3.88  | 38000 | 3.2174          |
+| 2.9861        | 3.98  | 39000 | 3.2149          |
+| 2.8311        | 4.09  | 40000 | 3.2509          |
+| 2.8113        | 4.19  | 41000 | 3.2432          |
+| 2.8074        | 4.29  | 42000 | 3.2450          |
+| 2.8123        | 4.39  | 43000 | 3.2359          |
+| 2.8086        | 4.5   | 44000 | 3.2245          |
+| 2.8028        | 4.6   | 45000 | 3.2261          |
+| 2.8046        | 4.7   | 46000 | 3.2204          |
+| 2.7978        | 4.8   | 47000 | 3.2148          |
+| 2.7982        | 4.9   | 48000 | 3.2128          |
+
 
 ### Framework versions
 
 - Transformers 4.39.1
 - Pytorch 2.0.1+cu117
 - Datasets 2.18.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
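
The updated hyperparameter list above maps directly onto Hugging Face `TrainingArguments`. The sketch below is a hypothetical reconstruction from the card alone, not the author's actual training script: `output_dir` is invented, and the Adam betas and epsilon shown in the card are the Trainer defaults, so they are left implicit.

```python
# Hypothetical reconstruction of the run configuration from the card above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tiny-gpt2-br",       # assumed; the card does not state it
    learning_rate=7e-4,              # card: learning_rate: 0.0007
    per_device_train_batch_size=32,  # card: train_batch_size: 32
    per_device_eval_batch_size=64,   # card: eval_batch_size: 64
    seed=42,                         # card: seed: 42
    lr_scheduler_type="linear",      # card: lr_scheduler_type: linear
    warmup_steps=500,                # card: lr_scheduler_warmup_steps: 500
    num_train_epochs=5,              # card: num_epochs: 5
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
    # matching the optimizer line in the card.
)
```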
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6f37c29aeffff63ea0792ddbec7dcdd416b201bb0561685914fdb7c6b0558726
+oid sha256:ea1511efec445f03a6a88c256746eefe4524d9d0fdc36c47d888a7982680f04d
 size 327657928
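
The `model.safetensors` entry is a Git LFS pointer, so only its sha256 oid changed in this commit. A downloaded weight file can be hashed and compared against the new oid as a sanity check; a minimal sketch, assuming `model.safetensors` sits in the current directory:

```python
# Verify a downloaded model.safetensors against the LFS oid shown above.
import hashlib

EXPECTED_OID = "ea1511efec445f03a6a88c256746eefe4524d9d0fdc36c47d888a7982680f04d"

digest = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert digest.hexdigest() == EXPECTED_OID, "checksum mismatch"
print("model.safetensors matches the committed LFS oid")
```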
runs/Jun02_14-44-33_gweltaz-NUC10i7FNK/events.out.tfevents.1717332286.gweltaz-NUC10i7FNK.2554.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2e3493a2ccb89923b602f28ff285c3a68eb707b628e6f3003e4afe06cce6e4ed
-size 27878
+oid sha256:bbff401d55c7a4c893db0e9933c6d653558698c2aa71e21ffef57e5b53c75bb1
+size 28729
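
The updated TensorBoard event file holds the raw curves behind the results table in the README. Below is a sketch of reading them back with TensorBoard's `EventAccumulator`; the tag name `eval/loss` is the usual Hugging Face Trainer convention, but it is an assumption here and worth confirming via `acc.Tags()`.

```python
# Read the eval-loss curve out of the committed TensorBoard run directory.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

run_dir = "runs/Jun02_14-44-33_gweltaz-NUC10i7FNK"
acc = EventAccumulator(run_dir)
acc.Reload()  # parses the events.out.tfevents.* files under run_dir

# "eval/loss" is assumed; list available tags with acc.Tags() if it differs.
for event in acc.Scalars("eval/loss"):
    print(event.step, event.value)
```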