Rodrigo1771 committed commit 25877ed (verified) · 1 parent: 09b6e57

Model save
README.md CHANGED
@@ -3,10 +3,9 @@ library_name: transformers
 license: apache-2.0
 base_model: michiyasunaga/BioLinkBERT-base
 tags:
-- token-classification
 - generated_from_trainer
 datasets:
-- Rodrigo1771/drugtemist-en-75-ner
+- drugtemist-en-8-ner
 metrics:
 - precision
 - recall
@@ -19,24 +18,24 @@ model-index:
       name: Token Classification
       type: token-classification
     dataset:
-      name: Rodrigo1771/drugtemist-en-75-ner
-      type: Rodrigo1771/drugtemist-en-75-ner
+      name: drugtemist-en-8-ner
+      type: drugtemist-en-8-ner
       config: DrugTEMIST English NER
       split: validation
       args: DrugTEMIST English NER
     metrics:
     - name: Precision
      type: precision
-      value: 0.9342105263157895
+      value: 0.9172033118675254
     - name: Recall
       type: recall
-      value: 0.9263746505125815
+      value: 0.9291705498602051
     - name: F1
       type: f1
-      value: 0.930276087973795
+      value: 0.9231481481481483
     - name: Accuracy
       type: accuracy
-      value: 0.9987162671280663
+      value: 0.9985418469009014
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -44,13 +43,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # output
 
-This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the Rodrigo1771/drugtemist-en-75-ner dataset.
+This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the drugtemist-en-8-ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0065
-- Precision: 0.9342
-- Recall: 0.9264
-- F1: 0.9303
-- Accuracy: 0.9987
+- Loss: 0.0089
+- Precision: 0.9172
+- Recall: 0.9292
+- F1: 0.9231
+- Accuracy: 0.9985
 
 ## Model description
 
@@ -83,16 +82,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| 0.0189        | 1.0   | 504  | 0.0052          | 0.8712    | 0.9394 | 0.9040 | 0.9984   |
-| 0.0047        | 2.0   | 1008 | 0.0048          | 0.9253    | 0.9236 | 0.9244 | 0.9987   |
-| 0.0027        | 3.0   | 1512 | 0.0059          | 0.9252    | 0.9226 | 0.9239 | 0.9986   |
-| 0.0015        | 4.0   | 2016 | 0.0065          | 0.9342    | 0.9264 | 0.9303 | 0.9987   |
-| 0.0011        | 5.0   | 2520 | 0.0073          | 0.9073    | 0.9394 | 0.9231 | 0.9986   |
-| 0.0005        | 6.0   | 3024 | 0.0090          | 0.9191    | 0.9217 | 0.9204 | 0.9984   |
-| 0.0007        | 7.0   | 3528 | 0.0084          | 0.9074    | 0.9310 | 0.9190 | 0.9986   |
-| 0.0004        | 8.0   | 4032 | 0.0085          | 0.9093    | 0.9338 | 0.9214 | 0.9986   |
-| 0.0003        | 9.0   | 4536 | 0.0080          | 0.9186    | 0.9357 | 0.9271 | 0.9987   |
-| 0.0002        | 10.0  | 5040 | 0.0083          | 0.9210    | 0.9348 | 0.9278 | 0.9987   |
+| No log        | 1.0   | 493  | 0.0050          | 0.9288    | 0.9245 | 0.9267 | 0.9987   |
+| 0.018         | 2.0   | 986  | 0.0057          | 0.9104    | 0.9189 | 0.9147 | 0.9984   |
+| 0.0044        | 3.0   | 1479 | 0.0079          | 0.9362    | 0.9161 | 0.9260 | 0.9985   |
+| 0.0023        | 4.0   | 1972 | 0.0057          | 0.9318    | 0.9301 | 0.9310 | 0.9987   |
+| 0.0014        | 5.0   | 2465 | 0.0070          | 0.9201    | 0.9226 | 0.9214 | 0.9986   |
+| 0.0008        | 6.0   | 2958 | 0.0082          | 0.9118    | 0.9254 | 0.9186 | 0.9985   |
+| 0.0006        | 7.0   | 3451 | 0.0074          | 0.9172    | 0.9394 | 0.9282 | 0.9986   |
+| 0.0003        | 8.0   | 3944 | 0.0085          | 0.9219    | 0.9245 | 0.9232 | 0.9985   |
+| 0.0003        | 9.0   | 4437 | 0.0086          | 0.9149    | 0.9320 | 0.9234 | 0.9985   |
+| 0.0002        | 10.0  | 4930 | 0.0089          | 0.9172    | 0.9292 | 0.9231 | 0.9985   |
 
 
 ### Framework versions
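The updated card's headline metrics are internally consistent: the reported F1 is the harmonic mean of the reported precision and recall. A quick sanity check using the exact values from the diff above:

```python
# Verify that the card's F1 is the harmonic mean of its precision and recall.
# Values are copied verbatim from the updated model-index metadata.
precision = 0.9172033118675254
recall = 0.9291705498602051

f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ≈ 0.92315, matching the card's reported F1 of 0.9231481481481483
```

This is the same relationship seqeval-style entity-level evaluation uses, so any hand-edit of one metric without the others would show up as a mismatch here.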
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cb1ef42300d9e3fe8df4b979fcc05f0ee5e712f35d3020950e015e2408591c1e
+oid sha256:f0e1906e22e1e74ba71629d5fba7177183dc796679b0ec27976f490ac5ac9c44
 size 430601004
tb/events.out.tfevents.1725528054.6cb9bed92fd1.10874.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:de23eae6be5b88fd6826f8a03ba329abd3f0071dceaf3bd1465d0456b13359e4
-size 11268
+oid sha256:287148fc9a2a755f0eb05d0bf65de2fca23a1e7fc7b944e80a7cb18035edf940
+size 12094
train.log CHANGED
@@ -1267,3 +1267,16 @@ Training completed. Do not forget to share your model on huggingface.co/models =
 [INFO|trainer.py:2632] 2024-09-05 09:41:16,700 >> Loading best model from /content/dissertation/scripts/ner/output/checkpoint-1972 (score: 0.9309701492537313).
 
 
 [INFO|trainer.py:4283] 2024-09-05 09:41:16,869 >> Waiting for the current checkpoint push to be finished, this might take a couple of minutes.
+[INFO|trainer.py:3503] 2024-09-05 09:41:23,447 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-05 09:41:23,449 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-05 09:41:24,716 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-05 09:41:24,717 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-05 09:41:24,717 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+[INFO|trainer.py:3503] 2024-09-05 09:41:24,730 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-05 09:41:24,732 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-05 09:41:26,453 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-05 09:41:26,454 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-05 09:41:26,454 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+{'eval_loss': 0.008874327875673771, 'eval_precision': 0.9172033118675254, 'eval_recall': 0.9291705498602051, 'eval_f1': 0.9231481481481483, 'eval_accuracy': 0.9985418469009014, 'eval_runtime': 13.6739, 'eval_samples_per_second': 507.975, 'eval_steps_per_second': 63.552, 'epoch': 10.0}
+{'train_runtime': 1222.2462, 'train_samples_per_second': 258.017, 'train_steps_per_second': 4.034, 'train_loss': 0.0028862289850114567, 'epoch': 10.0}
+
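The final eval line in the log reports both samples/second and steps/second, so the effective evaluation batch size (not stated anywhere in the diff, so this is an inference) can be backed out from their ratio:

```python
# Throughput figures copied verbatim from the train.log eval summary.
eval_samples_per_second = 507.975
eval_steps_per_second = 63.552

# samples-per-step = effective evaluation batch size
batch_size = eval_samples_per_second / eval_steps_per_second
print(round(batch_size))  # → 8
```

The same ratio on the training figures (258.017 / 4.034) suggests a much larger effective training batch, consistent with gradient accumulation or a bigger per-device train batch; the hyperparameter section of the card would confirm which.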