Rodrigo1771 committed (verified)
Commit f4151db · 1 Parent(s): 08dcc3d

Model save

README.md CHANGED
@@ -1,10 +1,10 @@
 ---
-base_model: IVN-RIN/bioBIT
+license: apache-2.0
+base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
 tags:
-- token-classification
 - generated_from_trainer
 datasets:
-- Rodrigo1771/drugtemist-it-ner
+- symptemist-ner
 metrics:
 - precision
 - recall
@@ -17,24 +17,24 @@ model-index:
       name: Token Classification
       type: token-classification
     dataset:
-      name: Rodrigo1771/drugtemist-it-ner
-      type: Rodrigo1771/drugtemist-it-ner
-      config: DrugTEMIST Italian NER
+      name: symptemist-ner
+      type: symptemist-ner
+      config: SympTEMIST NER
       split: validation
-      args: DrugTEMIST Italian NER
+      args: SympTEMIST NER
     metrics:
     - name: Precision
      type: precision
-      value: 0.9328214971209213
+      value: 0.6594676042189854
     - name: Recall
       type: recall
-      value: 0.9409486931268151
+      value: 0.7186644772851669
     - name: F1
       type: f1
-      value: 0.936867469879518
+      value: 0.6877946568884233
     - name: Accuracy
       type: accuracy
-      value: 0.9988184887042326
+      value: 0.9487631941993647
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -42,13 +42,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # output
 
-This model is a fine-tuned version of [IVN-RIN/bioBIT](https://huggingface.co/IVN-RIN/bioBIT) on the Rodrigo1771/drugtemist-it-ner dataset.
+This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the symptemist-ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0067
-- Precision: 0.9328
-- Recall: 0.9409
-- F1: 0.9369
-- Accuracy: 0.9988
+- Loss: 0.2767
+- Precision: 0.6595
+- Recall: 0.7187
+- F1: 0.6878
+- Accuracy: 0.9488
 
 ## Model description
 
@@ -81,16 +81,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log | 1.0 | 425 | 0.0056 | 0.8672 | 0.9226 | 0.8940 | 0.9981 |
-| 0.0104 | 2.0 | 850 | 0.0042 | 0.9151 | 0.9284 | 0.9217 | 0.9986 |
-| 0.0034 | 3.0 | 1275 | 0.0043 | 0.9182 | 0.9129 | 0.9155 | 0.9985 |
-| 0.0022 | 4.0 | 1700 | 0.0044 | 0.9365 | 0.9138 | 0.9250 | 0.9986 |
-| 0.0012 | 5.0 | 2125 | 0.0061 | 0.9107 | 0.9284 | 0.9195 | 0.9985 |
-| 0.0009 | 6.0 | 2550 | 0.0060 | 0.9104 | 0.9342 | 0.9221 | 0.9987 |
-| 0.0009 | 7.0 | 2975 | 0.0065 | 0.9230 | 0.9400 | 0.9314 | 0.9987 |
-| 0.0005 | 8.0 | 3400 | 0.0059 | 0.9258 | 0.9303 | 0.9281 | 0.9987 |
-| 0.0004 | 9.0 | 3825 | 0.0066 | 0.9255 | 0.9380 | 0.9317 | 0.9987 |
-| 0.0001 | 10.0 | 4250 | 0.0067 | 0.9328 | 0.9409 | 0.9369 | 0.9988 |
+| No log | 1.0 | 150 | 0.1504 | 0.5091 | 0.6409 | 0.5675 | 0.9456 |
+| No log | 2.0 | 300 | 0.1547 | 0.5881 | 0.6995 | 0.639 | 0.9462 |
+| No log | 3.0 | 450 | 0.1618 | 0.6237 | 0.6984 | 0.6589 | 0.9476 |
+| 0.126 | 4.0 | 600 | 0.1920 | 0.6154 | 0.7181 | 0.6628 | 0.9451 |
+| 0.126 | 5.0 | 750 | 0.2102 | 0.6561 | 0.7028 | 0.6786 | 0.9488 |
+| 0.126 | 6.0 | 900 | 0.2414 | 0.6443 | 0.7088 | 0.6750 | 0.9467 |
+| 0.0251 | 7.0 | 1050 | 0.2500 | 0.6588 | 0.7061 | 0.6816 | 0.9492 |
+| 0.0251 | 8.0 | 1200 | 0.2642 | 0.6440 | 0.7307 | 0.6846 | 0.9474 |
+| 0.0251 | 9.0 | 1350 | 0.2747 | 0.6675 | 0.7187 | 0.6921 | 0.9483 |
+| 0.0091 | 10.0 | 1500 | 0.2767 | 0.6595 | 0.7187 | 0.6878 | 0.9488 |
 
 
 ### Framework versions
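
The updated card describes a token-classification (NER) model fine-tuned from PlanTL-GOB-ES/bsc-bio-ehr-es on the SympTEMIST NER data. A minimal usage sketch with the transformers pipeline is below; the repository id is a placeholder (the card only refers to the local `output` directory) and the example sentence is made up.

```python
# Minimal sketch: run the fine-tuned NER checkpoint with the transformers pipeline.
# "Rodrigo1771/output" is a placeholder repo id; substitute the repository this commit lives in.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Rodrigo1771/output",     # placeholder model id
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entity spans
)

text = "El paciente refiere cefalea intensa y náuseas desde hace dos días."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```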
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6017fad51f5dc5737798ff6d9fd9fbfbc1d8d5e8c58fda4e121890b2bd69b001
+oid sha256:3c0cc86344025d3d9ad816c10866b7af4bb1e9ae5a0fd5c4e9e2a0bfde337257
 size 496244100
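
The pointer file above records only the SHA-256 (`oid`) and size of the new weights. A quick sketch for checking a locally downloaded `model.safetensors` against that oid, assuming the real file (not the LFS pointer) is in the working directory:

```python
# Sketch: verify a downloaded model.safetensors against the git-lfs pointer's sha256 oid.
import hashlib

EXPECTED_OID = "3c0cc86344025d3d9ad816c10866b7af4bb1e9ae5a0fd5c4e9e2a0bfde337257"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("model.safetensors")
print("match" if actual == EXPECTED_OID else f"mismatch: {actual}")
```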
tb/events.out.tfevents.1725056887.6b97e535edda.51600.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6b64cb2bd4ea4ebc30acb43ec3597dee9e97889efa2097df4b63784bd9dd34ba
-size 9984
+oid sha256:828d0da32d0107678e9d0aa2a710b0216207647fa04be0e03222be8b137b81af
+size 10810
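
The TensorBoard event file grew (9984 → 10810 bytes) because the final evaluation and train summaries were appended. A sketch for inspecting its logged scalars, assuming the `tensorboard` package is installed and the real event file (not the LFS pointer) has been fetched; the `eval/f1` tag name is an assumption about what the Trainer logged:

```python
# Sketch: list and read the scalars recorded in the updated TensorBoard event file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("tb/events.out.tfevents.1725056887.6b97e535edda.51600.0")
acc.Reload()

print(acc.Tags()["scalars"])          # available scalar tags, e.g. eval/loss, eval/f1, train/loss
for event in acc.Scalars("eval/f1"):  # assumes an "eval/f1" tag was logged
    print(event.step, event.value)
```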
train.log CHANGED
@@ -844,3 +844,16 @@ Training completed. Do not forget to share your model on huggingface.co/models =)
 [INFO|trainer.py:2621] 2024-08-30 22:35:40,063 >> Loading best model from /content/dissertation/scripts/ner/output/checkpoint-1350 (score: 0.6921454928835002).
 
 
 [INFO|trainer.py:4239] 2024-08-30 22:35:40,264 >> Waiting for the current checkpoint push to be finished, this might take a couple of minutes.
+[INFO|trainer.py:3478] 2024-08-30 22:35:51,472 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-08-30 22:35:51,473 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2690] 2024-08-30 22:35:52,914 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2574] 2024-08-30 22:35:52,915 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2583] 2024-08-30 22:35:52,916 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+[INFO|trainer.py:3478] 2024-08-30 22:35:52,965 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-08-30 22:35:52,967 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2690] 2024-08-30 22:35:55,047 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2574] 2024-08-30 22:35:55,048 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2583] 2024-08-30 22:35:55,048 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+{'eval_loss': 0.27674129605293274, 'eval_precision': 0.6594676042189854, 'eval_recall': 0.7186644772851669, 'eval_f1': 0.6877946568884233, 'eval_accuracy': 0.9487631941993647, 'eval_runtime': 6.0833, 'eval_samples_per_second': 414.082, 'eval_steps_per_second': 51.781, 'epoch': 10.0}
+{'train_runtime': 453.0745, 'train_samples_per_second': 211.819, 'train_steps_per_second': 3.311, 'train_loss': 0.05337127685546875, 'epoch': 10.0}
+
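
The final eval dict in the log is internally consistent: `eval_f1` is the harmonic mean of `eval_precision` and `eval_recall`, as expected for entity-level F1. A plain-Python check:

```python
# Check: eval_f1 should equal 2PR/(P+R) for the logged precision and recall.
precision = 0.6594676042189854
recall = 0.7186644772851669

f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ≈ 0.68779..., matching eval_f1 = 0.6877946568884233 in the final log line
```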