Model save
Browse files
- README.md +27 -28
- model.safetensors +1 -1
- tb/events.out.tfevents.1725577909.2a66098fac87.9264.0 +2 -2
- train.log +13 -0
README.md
CHANGED
@@ -3,10 +3,9 @@ library_name: transformers
 license: apache-2.0
 base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
 tags:
-- token-classification
 - generated_from_trainer
 datasets:
-
+- drugtemist-85-ner
 metrics:
 - precision
 - recall
@@ -19,24 +18,24 @@ model-index:
       name: Token Classification
       type: token-classification
     dataset:
-      name:
-      type:
-      config:
+      name: drugtemist-85-ner
+      type: drugtemist-85-ner
+      config: DrugTEMIST NER
       split: validation
-      args:
+      args: DrugTEMIST NER
     metrics:
     - name: Precision
       type: precision
-      value: 0.
+      value: 0.9347826086956522
     - name: Recall
       type: recall
-      value: 0.
+      value: 0.9485294117647058
     - name: F1
       type: f1
-      value: 0.
+      value: 0.9416058394160585
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.9989083718950389
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -44,13 +43,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # output
 
-This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the
+This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the drugtemist-85-ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Precision: 0.
-- Recall: 0.
-- F1: 0.
-- Accuracy: 0.
+- Loss: 0.0058
+- Precision: 0.9348
+- Recall: 0.9485
+- F1: 0.9416
+- Accuracy: 0.9989
 
 ## Model description
 
@@ -81,18 +80,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch
-
-| No log | 0
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+| No log | 1.0 | 466 | 0.0031 | 0.9292 | 0.9412 | 0.9352 | 0.9989 |
+| 0.0199 | 2.0 | 932 | 0.0031 | 0.9212 | 0.9568 | 0.9387 | 0.9989 |
+| 0.0026 | 3.0 | 1398 | 0.0040 | 0.9365 | 0.9357 | 0.9361 | 0.9989 |
+| 0.0011 | 4.0 | 1864 | 0.0052 | 0.9400 | 0.9219 | 0.9309 | 0.9987 |
+| 0.001 | 5.0 | 2330 | 0.0048 | 0.9461 | 0.9522 | 0.9492 | 0.9989 |
+| 0.0005 | 6.0 | 2796 | 0.0046 | 0.9376 | 0.9522 | 0.9448 | 0.9989 |
+| 0.0004 | 7.0 | 3262 | 0.0050 | 0.9328 | 0.9568 | 0.9446 | 0.9990 |
+| 0.0002 | 8.0 | 3728 | 0.0055 | 0.9423 | 0.9449 | 0.9436 | 0.9989 |
+| 0.0001 | 9.0 | 4194 | 0.0057 | 0.9399 | 0.9485 | 0.9442 | 0.9989 |
+| 0.0001 | 10.0 | 4660 | 0.0058 | 0.9348 | 0.9485 | 0.9416 | 0.9989 |
 
 
 ### Framework versions
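For quick reference, here is a minimal usage sketch of the checkpoint this card describes, using the Transformers token-classification pipeline. The repo id `your-username/output` is a placeholder (the commit does not state the model's Hub id), and the Spanish sentence is an invented example; the entity labels returned depend on the DrugTEMIST NER label set used for fine-tuning.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub id of this fine-tuned model.
ner = pipeline(
    "token-classification",
    model="your-username/output",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Invented Spanish clinical sentence; DrugTEMIST NER targets drug mentions.
print(ner("Se administró paracetamol 1 g cada 8 horas por vía oral."))
```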
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:8ffb52a560270ca01c848053ce5081195b99ce3022deb83a8d0d15a9c351568f
 size 496244100
tb/events.out.tfevents.1725577909.2a66098fac87.9264.0
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:78b123fe2690b4ef8daee19df1596f88d3d9902ee0a7e2e67195ebf39fc38893
+size 12146
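Both model.safetensors and the TensorBoard event file are stored as Git LFS pointers: the `oid sha256:` line is the SHA-256 digest of the real payload and `size` is its byte count. Below is a minimal sketch for checking a locally downloaded file against its pointer, assuming the full file has already been fetched (for example with `git lfs pull`).

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks and return its hex SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Should print the oid from the pointer:
# 8ffb52a560270ca01c848053ce5081195b99ce3022deb83a8d0d15a9c351568f
print(sha256_of("model.safetensors"))
```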
train.log
CHANGED
@@ -1437,3 +1437,16 @@ Training completed. Do not forget to share your model on huggingface.co/models =
 [INFO|trainer.py:2632] 2024-09-05 23:34:18,337 >> Loading best model from /content/dissertation/scripts/ner/output/checkpoint-2330 (score: 0.9491525423728814).
 
 
 [INFO|trainer.py:4283] 2024-09-05 23:34:18,533 >> Waiting for the current checkpoint push to be finished, this might take a couple of minutes.
+[INFO|trainer.py:3503] 2024-09-05 23:34:40,390 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-05 23:34:40,392 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-05 23:34:41,886 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-05 23:34:41,887 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-05 23:34:41,888 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+[INFO|trainer.py:3503] 2024-09-05 23:34:41,936 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-05 23:34:41,937 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-05 23:34:43,193 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-05 23:34:43,194 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-05 23:34:43,194 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+{'eval_loss': 0.005793666001409292, 'eval_precision': 0.9347826086956522, 'eval_recall': 0.9485294117647058, 'eval_f1': 0.9416058394160585, 'eval_accuracy': 0.9989083718950389, 'eval_runtime': 14.4595, 'eval_samples_per_second': 470.971, 'eval_steps_per_second': 58.923, 'epoch': 10.0}
+{'train_runtime': 1349.0548, 'train_samples_per_second': 220.873, 'train_steps_per_second': 3.454, 'train_loss': 0.002772659832779558, 'epoch': 10.0}
+
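The "Loading best model from checkpoint-2330" line corresponds to epoch 5, which has the highest validation F1 (0.9492) in the training results table. As a small sanity check on the final eval line, the reported `eval_f1` is simply the harmonic mean of `eval_precision` and `eval_recall`; a short verification with the values copied from the log:

```python
# Values copied from the final eval log line above.
precision = 0.9347826086956522
recall = 0.9485294117647058

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ~0.94160583..., matching the logged eval_f1 up to float rounding
```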