---
base_model: NlpHUST/ner-vietnamese-electra-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_ner-token_classification_v1.0.7-7
results: []
---
# my_awesome_ner-token_classification_v1.0.7-7
This model is a fine-tuned version of [NlpHUST/ner-vietnamese-electra-base](https://huggingface.co/NlpHUST/ner-vietnamese-electra-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3324
- Overall Precision: 0.6251
- Overall Recall: 0.6930
- Overall F1: 0.6573
- Overall Accuracy: 0.8992

Per-entity metrics on the evaluation set:

| Entity | Precision | Recall | F1 | Support |
|:--------------|-------:|-------:|-------:|-----:|
| Age | 0.8855 | 0.8657 | 0.8755 | 134 |
| Datetime | 0.6676 | 0.7427 | 0.7031 | 987 |
| Disease | 0.6915 | 0.7443 | 0.7169 | 262 |
| Event | 0.3288 | 0.3429 | 0.3357 | 280 |
| Gender | 0.7529 | 0.7356 | 0.7442 | 87 |
| Law | 0.5590 | 0.7059 | 0.6239 | 255 |
| Location | 0.6794 | 0.7309 | 0.7042 | 1795 |
| Organization | 0.6267 | 0.7125 | 0.6669 | 1513 |
| Person | 0.6789 | 0.7317 | 0.7043 | 1390 |
| Quantity | 0.5223 | 0.6007 | 0.5588 | 566 |
| Role | 0.4602 | 0.5393 | 0.4966 | 547 |
| Transportation | 0.4965 | 0.6087 | 0.5469 | 115 |
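The overall F1 is the harmonic mean of the overall precision and recall; a quick sanity check:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Reproduce the overall F1 from the reported precision and recall.
print(round(f1_score(0.6251, 0.6930), 4))  # 0.6573
```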
## Model description
More information needed
## Intended uses & limitations
More information needed
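Pending fuller documentation, the checkpoint can be loaded with the `transformers` token-classification pipeline. A minimal sketch; the repo id below is a placeholder (this card does not state the published Hub path), and the helper simply flattens the pipeline output:

```python
def extract_entities(ner_results):
    """Collapse token-classification pipeline output
    (aggregation_strategy="simple") into (text, label) pairs."""
    return [(r["word"], r["entity_group"]) for r in ner_results]

def tag_text(text, model_id="<your-namespace>/my_awesome_ner-token_classification_v1.0.7-7"):
    # Lazy import keeps extract_entities usable without transformers installed.
    from transformers import pipeline
    ner = pipeline("token-classification", model=model_id,
                   aggregation_strategy="simple")
    return extract_entities(ner(text))
```

For example, `tag_text("Ông Nguyễn Văn A sống ở Hà Nội.")` should return pairs such as `("Hà Nội", "LOCATION")`, drawn from the entity types listed in the results above.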
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
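These settings correspond to a `TrainingArguments` configuration along the following lines (a sketch, assuming the standard `Trainer` API; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the optimizer defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_ner-token_classification_v1.0.7-7",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=4,
)
```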
### Training results
Per-entity cells give precision / recall / F1 (support in the header).

| Training Loss | Epoch | Step | Validation Loss | Age (134) | Datetime (987) | Disease (262) | Event (280) | Gender (87) | Law (255) | Location (1795) | Organization (1513) | Person (1390) | Quantity (566) | Role (547) | Transportation (115) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.3138 | 1.9991 | 2313 | 0.3302 | 0.8722 / 0.8657 / 0.8689 | 0.6597 / 0.7325 / 0.6942 | 0.6422 / 0.7672 / 0.6991 | 0.3430 / 0.2964 / 0.3180 | 0.8400 / 0.7241 / 0.7778 | 0.5373 / 0.7059 / 0.6102 | 0.6927 / 0.7008 / 0.6968 | 0.6133 / 0.6960 / 0.6520 | 0.7043 / 0.7266 / 0.7153 | 0.5160 / 0.5989 / 0.5544 | 0.4633 / 0.5082 / 0.4847 | 0.4921 / 0.5391 / 0.5145 | 0.6280 | 0.6766 | 0.6514 | 0.9015 |
| 0.2556 | 3.9983 | 4626 | 0.3324 | 0.8855 / 0.8657 / 0.8755 | 0.6676 / 0.7427 / 0.7031 | 0.6915 / 0.7443 / 0.7169 | 0.3288 / 0.3429 / 0.3357 | 0.7529 / 0.7356 / 0.7442 | 0.5590 / 0.7059 / 0.6239 | 0.6794 / 0.7309 / 0.7042 | 0.6267 / 0.7125 / 0.6669 | 0.6789 / 0.7317 / 0.7043 | 0.5223 / 0.6007 / 0.5588 | 0.4602 / 0.5393 / 0.4966 | 0.4965 / 0.6087 / 0.5469 | 0.6251 | 0.6930 | 0.6573 | 0.8992 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1