Rodrigo1771 committed · Commit dc41cb8 · verified · 1 Parent(s): 10f1a7c

Model save
README.md CHANGED
@@ -3,10 +3,9 @@ library_name: transformers
 license: apache-2.0
 base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
 tags:
-- token-classification
 - generated_from_trainer
 datasets:
-- Rodrigo1771/symptemist-fasttext-8-ner
+- symptemist-fasttext-85-ner
 metrics:
 - precision
 - recall
@@ -19,24 +18,24 @@ model-index:
   name: Token Classification
   type: token-classification
   dataset:
-  name: Rodrigo1771/symptemist-fasttext-8-ner
-  type: Rodrigo1771/symptemist-fasttext-8-ner
+  name: symptemist-fasttext-85-ner
+  type: symptemist-fasttext-85-ner
   config: SympTEMIST NER
   split: validation
   args: SympTEMIST NER
   metrics:
   - name: Precision
     type: precision
-    value: 0.6764102564102564
+    value: 0.6548403446528129
   - name: Recall
     type: recall
-    value: 0.7219485495347564
+    value: 0.7071702244116037
   - name: F1
     type: f1
-    value: 0.6984379136881121
+    value: 0.6799999999999999
   - name: Accuracy
     type: accuracy
-    value: 0.9500465205813469
+    value: 0.9481215310083737
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -44,13 +43,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # output
 
-This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the Rodrigo1771/symptemist-fasttext-8-ner dataset.
+This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the symptemist-fasttext-85-ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3073
-- Precision: 0.6764
-- Recall: 0.7219
-- F1: 0.6984
-- Accuracy: 0.9500
+- Loss: 0.2930
+- Precision: 0.6548
+- Recall: 0.7072
+- F1: 0.6800
+- Accuracy: 0.9481
 
 ## Model description
 
@@ -81,18 +80,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
-|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log        | 0.9975 | 203  | 0.1501          | 0.5960    | 0.6338 | 0.6143 | 0.9468   |
-| No log        | 2.0    | 407  | 0.1761          | 0.6529    | 0.6940 | 0.6729 | 0.9492   |
-| 0.1312        | 2.9975 | 610  | 0.1995          | 0.6322    | 0.7170 | 0.6720 | 0.9470   |
-| 0.1312        | 4.0    | 814  | 0.2182          | 0.6446    | 0.7137 | 0.6774 | 0.9483   |
-| 0.0248        | 4.9975 | 1017 | 0.2461          | 0.6251    | 0.7219 | 0.6701 | 0.9449   |
-| 0.0248        | 6.0    | 1221 | 0.2695          | 0.6410    | 0.7302 | 0.6827 | 0.9469   |
-| 0.0248        | 6.9975 | 1424 | 0.2829          | 0.6529    | 0.7340 | 0.6911 | 0.9470   |
-| 0.0081        | 8.0    | 1628 | 0.2982          | 0.6711    | 0.7181 | 0.6938 | 0.9494   |
-| 0.0081        | 8.9975 | 1831 | 0.3073          | 0.6764    | 0.7219 | 0.6984 | 0.9500   |
-| 0.0038        | 9.9754 | 2030 | 0.3079          | 0.6713    | 0.7165 | 0.6931 | 0.9500   |
+| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+| No log        | 1.0   | 171  | 0.1502          | 0.5421    | 0.6765 | 0.6019 | 0.9458   |
+| No log        | 2.0   | 342  | 0.1539          | 0.5958    | 0.6793 | 0.6348 | 0.9468   |
+| 0.1273        | 3.0   | 513  | 0.1838          | 0.6326    | 0.7077 | 0.6680 | 0.9468   |
+| 0.1273        | 4.0   | 684  | 0.2018          | 0.6322    | 0.7121 | 0.6698 | 0.9466   |
+| 0.1273        | 5.0   | 855  | 0.2153          | 0.6441    | 0.7192 | 0.6796 | 0.9465   |
+| 0.0234        | 6.0   | 1026 | 0.2498          | 0.6461    | 0.7006 | 0.6723 | 0.9470   |
+| 0.0234        | 7.0   | 1197 | 0.2653          | 0.6362    | 0.7209 | 0.6759 | 0.9462   |
+| 0.0234        | 8.0   | 1368 | 0.2808          | 0.6529    | 0.7115 | 0.6810 | 0.9473   |
+| 0.0082        | 9.0   | 1539 | 0.2917          | 0.6458    | 0.7115 | 0.6771 | 0.9467   |
+| 0.0082        | 10.0  | 1710 | 0.2930          | 0.6548    | 0.7072 | 0.6800 | 0.9481   |
 
 
 ### Framework versions
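As a sanity check on the updated card (not part of the card itself), the reported F1 should be the harmonic mean of the reported precision and recall. A minimal sketch, using the full-precision values from the card's metadata:

```python
# Values taken from the updated model-index metadata above.
precision = 0.6548403446528129
recall = 0.7071702244116037

# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 4))  # 0.68, matching the card's value of 0.6799999999999999
```

The rounded figures in the results list (0.6548 / 0.7072 / 0.6800) are consistent with this.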
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:876f723be6ca469f211c557a65fa6be11b0b60eed47f4bdae5973bb6330c146c
+oid sha256:aa435b5d5128cb42a840438c5750299de3b5d75b06e70f57f6d654e4cc806af7
 size 496244100
tb/events.out.tfevents.1725884095.0a1c9bec2a53.15221.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:89d448c7000393aa4739bd3e8685df3e41ddb26d68381c46149f020f68e8f6eb
-size 10054
+oid sha256:8a5eb73def06ef281d6c5abe9cbb6a47c633f2d7191b334dbe5cbead1c284e80
+size 10880
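The two binary files above are tracked with Git LFS, so the diff shows pointer files rather than the blobs themselves: three text lines giving the spec version, the SHA-256 of the real content, and its size in bytes. A minimal sketch of how such a pointer is built (the payload here is illustrative, not the actual weights):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS v1 pointer file for the given content."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

# Illustrative payload; real pointers are generated by git-lfs on commit.
print(lfs_pointer(b"example content"))
```

This is why the diff for `model.safetensors` shows only the `oid` line changing: the new weights have the same byte size but different content.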
train.log CHANGED
@@ -877,3 +877,16 @@ Training completed. Do not forget to share your model on huggingface.co/models =
 [INFO|trainer.py:2632] 2024-09-09 12:29:57,247 >> Loading best model from /content/dissertation/scripts/ner/output/checkpoint-1368 (score: 0.680984808800419).
 
 
 [INFO|trainer.py:4283] 2024-09-09 12:29:57,436 >> Waiting for the current checkpoint push to be finished, this might take a couple of minutes.
+[INFO|trainer.py:3503] 2024-09-09 12:30:42,660 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-09 12:30:42,661 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-09 12:30:44,034 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-09 12:30:44,035 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-09 12:30:44,035 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+[INFO|trainer.py:3503] 2024-09-09 12:30:44,082 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-09 12:30:44,083 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-09 12:30:47,113 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-09 12:30:47,114 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-09 12:30:47,114 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+{'eval_loss': 0.2930145561695099, 'eval_precision': 0.6548403446528129, 'eval_recall': 0.7071702244116037, 'eval_f1': 0.6799999999999999, 'eval_accuracy': 0.9481215310083737, 'eval_runtime': 5.9346, 'eval_samples_per_second': 424.463, 'eval_steps_per_second': 53.079, 'epoch': 10.0}
+{'train_runtime': 901.7971, 'train_samples_per_second': 121.269, 'train_steps_per_second': 1.896, 'train_loss': 0.047100042948248794, 'epoch': 10.0}
+
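The final metrics lines appended to train.log are Python dict reprs (single-quoted keys), not JSON, so `json.loads` would fail on them. `ast.literal_eval` parses such lines safely without executing code; a sketch, using the eval line from the log above:

```python
import ast

# Final eval-metrics line as emitted by the Trainer in train.log.
line = (
    "{'eval_loss': 0.2930145561695099, 'eval_precision': 0.6548403446528129, "
    "'eval_recall': 0.7071702244116037, 'eval_f1': 0.6799999999999999, "
    "'eval_accuracy': 0.9481215310083737, 'eval_runtime': 5.9346, "
    "'eval_samples_per_second': 424.463, 'eval_steps_per_second': 53.079, 'epoch': 10.0}"
)

# literal_eval only accepts Python literals, so it cannot run arbitrary code.
metrics = ast.literal_eval(line)
print(round(metrics["eval_f1"], 4))  # 0.68
```

The same approach works for the `train_runtime` line on the log's last line.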