# NER-finetuned-BETO
This is a BERT model fine-tuned for Named Entity Recognition (NER).
## Model Description
This is a BERT model fine-tuned for the Named Entity Recognition (NER) task on the CoNLL-2002 dataset.

First, the dataset is pre-processed so it can be fed to the model, using the 🤗 Transformers library and the BERT tokenizer. Fine-tuning then starts from bert-base-cased using the 🤗 AutoModelForTokenClassification class.

Finally, the model is trained and the metrics necessary to evaluate its performance (precision, recall, F1, and accuracy) are computed.
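The pre-processing step above has one subtlety worth sketching: NER labels come one per word, but a BERT tokenizer splits words into sub-word pieces, so labels must be re-aligned to the tokens. Below is a minimal, self-contained sketch of that alignment, assuming the common convention (not stated in this card) of masking special tokens and sub-word continuations with `-100` so the loss ignores them:

```python
def align_labels_with_tokens(word_ids, word_labels):
    """Map word-level NER labels onto tokenizer output.

    word_ids: for each token, the index of the word it came from,
              or None for special tokens ([CLS], [SEP], padding),
              as returned by a 🤗 fast tokenizer's .word_ids().
    word_labels: one label id per original word.
    """
    aligned = []
    previous_word = None
    for word_id in word_ids:
        if word_id is None:
            aligned.append(-100)                   # special token: ignored by the loss
        elif word_id != previous_word:
            aligned.append(word_labels[word_id])   # first piece of a word: keep its label
        else:
            aligned.append(-100)                   # sub-word continuation: ignored
        previous_word = word_id
    return aligned

# Illustrative example: a two-word sentence whose first word is split
# into two sub-word pieces, i.e. [CLS] piece1 piece2 word2 [SEP]
print(align_labels_with_tokens([None, 0, 0, 1, None], [1, 2]))
# -> [-100, 1, -100, 2, -100]
```

The `-100` sentinel works because PyTorch's cross-entropy loss skips targets equal to its default `ignore_index`.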
A summary of the executed tests can be found at: https://docs.google.com/spreadsheets/d/1lI7skNIvRurwq3LA5ps7JFK5TxToEx4s7Kaah3ezyQc/edit?usp=sharing

The model can be found at: https://huggingface.co/paulrojasg/NER-finetuned-BETO

GitHub repository: https://github.com/paulrojasg/nlp_4th_workshop
## Training

### Training Details
- Epochs: 10
- Learning Rate: 2e-05
- Weight Decay: 0.01
- Batch Size (Train): 16
- Batch Size (Eval): 8
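The hyperparameters above map directly onto 🤗 `TrainingArguments`. A minimal sketch follows; the output directory name and the per-epoch evaluation strategy are illustrative assumptions, not taken from this card:

```python
from transformers import TrainingArguments

# Hyperparameters copied from the list above; output_dir and
# evaluation_strategy are assumptions for illustration.
training_args = TrainingArguments(
    output_dir="ner-finetuned-beto",   # assumed name
    num_train_epochs=10,
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    evaluation_strategy="epoch",       # evaluate once per epoch
)
```

This object is then passed to a `Trainer` together with the model, tokenizer, datasets, and a metrics function.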
### Training Metrics
Epoch | Training Loss | Validation Loss | Precision | Recall | F1 Score | Accuracy |
---|---|---|---|---|---|---|
1 | 0.0065 | 0.2077 | 0.8436 | 0.8564 | 0.8499 | 0.9712 |
2 | 0.0062 | 0.2345 | 0.8318 | 0.8513 | 0.8415 | 0.9683 |
3 | 0.0069 | 0.2156 | 0.8464 | 0.8470 | 0.8467 | 0.9674 |
4 | 0.0064 | 0.2189 | 0.8356 | 0.8490 | 0.8423 | 0.9686 |
5 | 0.0055 | 0.2383 | 0.8373 | 0.8488 | 0.8430 | 0.9687 |
6 | 0.0050 | 0.2315 | 0.8334 | 0.8543 | 0.8438 | 0.9694 |
7 | 0.0037 | 0.2343 | 0.8428 | 0.8573 | 0.8500 | 0.9703 |
8 | 0.0031 | 0.2493 | 0.8400 | 0.8555 | 0.8477 | 0.9694 |
9 | 0.0024 | 0.2421 | 0.8478 | 0.8617 | 0.8547 | 0.9704 |
10 | 0.0023 | 0.2497 | 0.8432 | 0.8598 | 0.8514 | 0.9703 |
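The precision, recall, and F1 columns above are typically computed at the entity level (e.g. with the `seqeval` library). As a minimal sketch of the underlying formulas, here they are computed from true-positive, false-positive, and false-negative counts; the example counts are made up, not taken from the table:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from entity-level counts:
    tp = correctly predicted entities, fp = spurious predictions,
    fn = gold entities that were missed."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts only:
p, r, f1 = precision_recall_f1(tp=3, fp=1, fn=1)
print(p, r, f1)  # -> 0.75 0.75 0.75
```

F1 is the harmonic mean of precision and recall, so it rewards models that balance the two rather than maximizing one at the other's expense.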
## Authors
Made by:
- Paul Rodrigo Rojas Guerrero
- Jose Luis Hincapie Bucheli
- Sebastián Idrobo Avirama