---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
  - generated_from_trainer
model-index:
  - name: categories-estimation
    results: []
---

# categories-estimation

This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed
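No usage snippet is provided in this card. As a sketch, the model can presumably be loaded with the `transformers` pipeline API; the repo id below is assumed from this card's name, and a text-classification head is assumed because the card reports F1/Accuracy:

```python
from transformers import pipeline

# Assumed Hub repo id; adjust if the model lives under a different path.
model_id = "diegoale1248/categories-estimation"

# Task head is not stated in the card; sequence classification is assumed.
classifier = pipeline("text-classification", model=model_id)

# The base model is Portuguese BERT, so inputs should be Portuguese text.
print(classifier("Texto de exemplo em português."))
```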

## Training Metrics

| Step | Training Loss | Validation Loss | F1       | Accuracy |
|-----:|--------------:|----------------:|---------:|---------:|
| 100  | 0.257000      | 0.432904        | 0.853367 | 0.905234 |
| 200  | 0.241600      | 0.431226        | 0.848656 | 0.903030 |
| 300  | 0.242200      | 0.407710        | 0.865890 | 0.908356 |
| 400  | 0.201600      | 0.375613        | 0.881634 | 0.918825 |
| 500  | 0.181400      | 0.378719        | 0.879368 | 0.916988 |
| 600  | 0.168800      | 0.361804        | 0.885401 | 0.920478 |

## Evaluation Metrics

```python
{'eval_loss': 0.3545467257499695, 'eval_F1': 0.8847876543649995, 'eval_Accuracy': 0.9213957759412305, 'eval_runtime': 14.8305, 'eval_samples_per_second': 367.149, 'eval_steps_per_second': 45.919, 'epoch': 1.0}
```
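The card does not say how F1 is averaged across categories. Purely as an illustration of the two reported metrics, accuracy and (binary) F1 can be computed from predictions like this; the toy labels below are invented and not from this model:

```python
# Toy labels, invented for illustration only.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

def accuracy(y_true, y_pred):
    # Fraction of exact matches between predictions and references.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_f1(y_true, y_pred, positive=1):
    # Harmonic mean of precision and recall for the positive class.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(accuracy(y_true, y_pred))   # 5 of 6 correct -> 0.8333...
print(binary_f1(y_true, y_pred))  # precision 1.0, recall 0.75 -> 0.8571...
```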

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
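With the `linear` scheduler the learning rate decays linearly to zero over training (after any warmup; none is listed here, so zero warmup is assumed). A minimal sketch of that schedule, where `total_steps` covers the whole run:

```python
LEARNING_RATE = 5e-05  # from the hyperparameters above

def linear_lr(step, total_steps, base_lr=LEARNING_RATE, warmup_steps=0):
    """Linear warmup (if any) followed by linear decay to zero,
    mirroring the `linear` lr_scheduler_type's shape."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# Halfway through a 600-step run the rate is half the base value.
print(linear_lr(300, 600))  # 2.5e-05
```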

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0