---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert_base_tcm_0.6
  results: []
---

# bert_base_tcm_0.6

This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0295
  • Criterio Julgamento Precision: 0.8488
  • Criterio Julgamento Recall: 0.8902
  • Criterio Julgamento F1: 0.8690
  • Criterio Julgamento Number: 82
  • Data Sessao Precision: 0.7903
  • Data Sessao Recall: 0.8909
  • Data Sessao F1: 0.8376
  • Data Sessao Number: 55
  • Modalidade Licitacao Precision: 0.9571
  • Modalidade Licitacao Recall: 0.9781
  • Modalidade Licitacao F1: 0.9674
  • Modalidade Licitacao Number: 319
  • Numero Exercicio Precision: 0.9181
  • Numero Exercicio Recall: 0.9812
  • Numero Exercicio F1: 0.9486
  • Numero Exercicio Number: 160
  • Objeto Licitacao Precision: 0.6393
  • Objeto Licitacao Recall: 0.6724
  • Objeto Licitacao F1: 0.6555
  • Objeto Licitacao Number: 58
  • Valor Objeto Precision: 0.9211
  • Valor Objeto Recall: 0.9211
  • Valor Objeto F1: 0.9211
  • Valor Objeto Number: 38
  • Overall Precision: 0.8938
  • Overall Recall: 0.9340
  • Overall F1: 0.9135
  • Overall Accuracy: 0.9962
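The per-entity metrics above (Criterio Julgamento, Data Sessao, Modalidade Licitacao, Numero Exercicio, Objeto Licitacao, Valor Objeto) indicate a token-classification (NER) head over Brazilian procurement documents. A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub; the repo id and example sentence below are illustrative, not taken from this card:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Hypothetical Hub id -- replace with the actual path of this checkpoint.
model_id = "ricardo-filho/bert_base_tcm_0.6"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Merge sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

text = "Pregão Presencial nº 12/2020, critério de julgamento menor preço, sessão pública em 15/05/2020."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```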

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10.0
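As a sketch, these settings map onto a `transformers` `TrainingArguments` configuration roughly as follows; `output_dir` and any argument not listed above are placeholders, not values from the original run:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_base_tcm_0.6",   # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```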

### Training results

| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.0252 | 1.0 | 1963 | 0.0202 | 0.8022 | 0.8902 | 0.8439 | 82 | 0.7391 | 0.9273 | 0.8226 | 55 | 0.9233 | 0.9812 | 0.9514 | 319 | 0.8966 | 0.975 | 0.9341 | 160 | 0.4730 | 0.6034 | 0.5303 | 58 | 0.7083 | 0.8947 | 0.7907 | 38 | 0.8327 | 0.9298 | 0.8786 | 0.9948 |
| 0.0191 | 2.0 | 3926 | 0.0226 | 0.8554 | 0.8659 | 0.8606 | 82 | 0.5641 | 0.4 | 0.4681 | 55 | 0.9572 | 0.9812 | 0.9690 | 319 | 0.9273 | 0.9563 | 0.9415 | 160 | 0.3770 | 0.3966 | 0.3866 | 58 | 0.8571 | 0.7895 | 0.8219 | 38 | 0.8620 | 0.8596 | 0.8608 | 0.9951 |
| 0.0137 | 3.0 | 5889 | 0.0193 | 0.8875 | 0.8659 | 0.8765 | 82 | 0.7571 | 0.9636 | 0.848 | 55 | 0.9394 | 0.9718 | 0.9553 | 319 | 0.9172 | 0.9688 | 0.9422 | 160 | 0.4659 | 0.7069 | 0.5616 | 58 | 0.8333 | 0.9211 | 0.875 | 38 | 0.8537 | 0.9340 | 0.8920 | 0.9951 |
| 0.0082 | 4.0 | 7852 | 0.0210 | 0.8780 | 0.8780 | 0.8780 | 82 | 0.7966 | 0.8545 | 0.8246 | 55 | 0.9512 | 0.9781 | 0.9645 | 319 | 0.9023 | 0.9812 | 0.9401 | 160 | 0.5385 | 0.6034 | 0.5691 | 58 | 0.9 | 0.9474 | 0.9231 | 38 | 0.8810 | 0.9256 | 0.9027 | 0.9963 |
| 0.0048 | 5.0 | 9815 | 0.0222 | 0.8261 | 0.9268 | 0.8736 | 82 | 0.7969 | 0.9273 | 0.8571 | 55 | 0.9512 | 0.9781 | 0.9645 | 319 | 0.9231 | 0.975 | 0.9483 | 160 | 0.6515 | 0.7414 | 0.6935 | 58 | 0.875 | 0.9211 | 0.8974 | 38 | 0.8867 | 0.9452 | 0.9150 | 0.9964 |
| 0.0044 | 6.0 | 11778 | 0.0262 | 0.8276 | 0.8780 | 0.8521 | 82 | 0.7681 | 0.9636 | 0.8548 | 55 | 0.9541 | 0.9781 | 0.9659 | 319 | 0.9235 | 0.9812 | 0.9515 | 160 | 0.5263 | 0.6897 | 0.5970 | 58 | 0.9211 | 0.9211 | 0.9211 | 38 | 0.8722 | 0.9396 | 0.9047 | 0.9959 |
| 0.0042 | 7.0 | 13741 | 0.0246 | 0.8523 | 0.9146 | 0.8824 | 82 | 0.7656 | 0.8909 | 0.8235 | 55 | 0.9509 | 0.9718 | 0.9612 | 319 | 0.9118 | 0.9688 | 0.9394 | 160 | 0.5938 | 0.6552 | 0.6230 | 58 | 0.8974 | 0.9211 | 0.9091 | 38 | 0.8815 | 0.9298 | 0.9050 | 0.9960 |
| 0.0013 | 8.0 | 15704 | 0.0294 | 0.8295 | 0.8902 | 0.8588 | 82 | 0.7391 | 0.9273 | 0.8226 | 55 | 0.9543 | 0.9812 | 0.9675 | 319 | 0.9070 | 0.975 | 0.9398 | 160 | 0.6094 | 0.6724 | 0.6393 | 58 | 0.875 | 0.9211 | 0.8974 | 38 | 0.8765 | 0.9368 | 0.9056 | 0.9961 |
| 0.0019 | 9.0 | 17667 | 0.0303 | 0.8690 | 0.8902 | 0.8795 | 82 | 0.8305 | 0.8909 | 0.8596 | 55 | 0.9538 | 0.9718 | 0.9627 | 319 | 0.9290 | 0.9812 | 0.9544 | 160 | 0.6441 | 0.6552 | 0.6496 | 58 | 0.9211 | 0.9211 | 0.9211 | 38 | 0.9019 | 0.9298 | 0.9156 | 0.9961 |
| 0.0007 | 10.0 | 19630 | 0.0295 | 0.8488 | 0.8902 | 0.8690 | 82 | 0.7903 | 0.8909 | 0.8376 | 55 | 0.9571 | 0.9781 | 0.9674 | 319 | 0.9181 | 0.9812 | 0.9486 | 160 | 0.6393 | 0.6724 | 0.6555 | 58 | 0.9211 | 0.9211 | 0.9211 | 38 | 0.8938 | 0.9340 | 0.9135 | 0.9962 |
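The per-entity Precision/Recall/F1 columns and the Number column (the support, i.e. how many gold entities of that type appear in the evaluation set) are the kind of breakdown produced by seqeval over IOB-tagged predictions. A toy sketch of how such a report is computed; the tag names and sequences below are invented for illustration, not taken from this dataset:

```python
from seqeval.metrics import classification_report

# Invented gold and predicted IOB sequences; a real evaluation runs over the
# model's predictions for the whole validation split.
y_true = [["O", "B-MODALIDADE_LICITACAO", "I-MODALIDADE_LICITACAO", "O", "B-NUMERO_EXERCICIO"]]
y_pred = [["O", "B-MODALIDADE_LICITACAO", "I-MODALIDADE_LICITACAO", "O", "O"]]

# Prints per-entity precision/recall/F1 and support ("Number" in the table above).
print(classification_report(y_true, y_pred))
```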

### Framework versions

  • Transformers 4.20.0.dev0
  • Pytorch 1.11.0+cu113
  • Datasets 2.2.2
  • Tokenizers 0.12.1
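A quick way to check that a local environment matches these versions (assumes the four packages are already installed):

```python
import datasets
import tokenizers
import torch
import transformers

# Expected here: Transformers 4.20.0.dev0, PyTorch 1.11.0+cu113, Datasets 2.2.2, Tokenizers 0.12.1
print("transformers", transformers.__version__)
print("torch", torch.__version__)
print("datasets", datasets.__version__)
print("tokenizers", tokenizers.__version__)
```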