---
language:
- es
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- jpherrerap/competencia2
model-index:
- name: ner-roberta-es-clinical-trials-ner
  results: []
---

# ner-roberta-es-clinical-trials-ner

This model is a fine-tuned version of [lcampillos/roberta-es-clinical-trials-ner](https://huggingface.co/lcampillos/roberta-es-clinical-trials-ner) on the jpherrerap/competencia2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2661
- Body Part Precision: 0.7124
- Body Part Recall: 0.8173
- Body Part F1: 0.7612
- Body Part Number: 197
- Disease Precision: 0.7712
- Disease Recall: 0.7697
- Disease F1: 0.7704
- Disease Number: 521
- Family Member Precision: 0.8462
- Family Member Recall: 0.8462
- Family Member F1: 0.8462
- Family Member Number: 13
- Medication Precision: 0.8378
- Medication Recall: 0.8378
- Medication F1: 0.8378
- Medication Number: 37
- Procedure Precision: 0.6510
- Procedure Recall: 0.7239
- Procedure F1: 0.6855
- Procedure Number: 134
- Overall Precision: 0.7418
- Overall Recall: 0.7772
- Overall F1: 0.7591
- Overall Accuracy: 0.9238

## Model description

A RoBERTa-based token-classification model for named entity recognition (NER) in Spanish clinical-trial text, fine-tuned from [lcampillos/roberta-es-clinical-trials-ner](https://huggingface.co/lcampillos/roberta-es-clinical-trials-ner). It labels five entity types: Body Part, Disease, Family Member, Medication, and Procedure.

## Intended uses & limitations

Intended for NER on Spanish clinical text similar to the jpherrerap/competencia2 data used for fine-tuning. The reported metrics come from a single evaluation split, and performance on other domains or languages has not been assessed here. A minimal inference sketch is provided at the end of this card.

## Training and evaluation data

The model was fine-tuned and evaluated on the jpherrerap/competencia2 dataset; no further details about the data are provided here.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3329        | 1.0   | 502  | 0.2561          | 0.6830              | 0.7766           | 0.7268       | 197              | 0.7718            | 0.7658         | 0.7688     | 521            | 0.9231                  | 0.9231               | 0.9231           | 13                   | 0.75                 | 0.8108            | 0.7792        | 37                | 0.6218              | 0.7239           | 0.6690       | 134              | 0.7274            | 0.7661         | 0.7462     | 0.9219           |
| 0.1699        | 2.0   | 1004 | 0.2661          | 0.7124              | 0.8173           | 0.7612       | 197              | 0.7712            | 0.7697         | 0.7704     | 521            | 0.8462                  | 0.8462               | 0.8462           | 13                   | 0.8378               | 0.8378            | 0.8378        | 37                | 0.6510              | 0.7239           | 0.6855       | 134              | 0.7418            | 0.7772         | 0.7591     | 0.9238           |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
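
### Training configuration sketch

A minimal sketch of how the hyperparameters listed above map onto a `transformers` `TrainingArguments`/`Trainer` setup. The output directory is a placeholder, and the dataset preprocessing and label alignment for jpherrerap/competencia2 are not shown; this is not the exact training script used for this model.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

base_model = "lcampillos/roberta-es-clinical-trials-ner"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForTokenClassification.from_pretrained(base_model)

# Hyperparameters as reported in the card; the Adam betas/epsilon listed above
# are also the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="ner-roberta-es-clinical-trials-ner",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=13,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)

trainer = Trainer(
    model=model,
    args=args,
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    # train_dataset / eval_dataset: tokenized, label-aligned splits of
    # jpherrerap/competencia2 (preprocessing not shown here)
)
# trainer.train()
```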
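
## How to use

A minimal inference sketch using the `transformers` token-classification pipeline. The repository id below is an assumption; replace it with the actual Hub path of this model.

```python
from transformers import pipeline

# Assumed Hub id for this fine-tuned model; adjust to the actual repository path.
model_id = "jpherrerap/ner-roberta-es-clinical-trials-ner"

ner = pipeline(
    "token-classification",
    model=model_id,
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

text = "El paciente recibió paracetamol tras la cirugía de rodilla."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```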