---
license: mit
tags:
  - generated_from_trainer
metrics:
  - f1
model-index:
  - name: >-
      fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05
    results: []
---

# fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unspecified dataset (the model name suggests the Indonesian subset of TyDi QA). It achieves the following results on the evaluation set:

- Loss: 1.3123
- Exact Match: 55.7319
- F1: 68.7642
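
The Exact Match and F1 figures above are the standard SQuAD-style answer-overlap metrics. A minimal sketch of how they are typically computed (hypothetical helpers for illustration, not the evaluation script used for this card):

```python
import re
from collections import Counter

def normalize(text):
    # Lowercase, strip punctuation, collapse whitespace (SQuAD-style normalization).
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return " ".join(text.split())

def exact_match(prediction, reference):
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    # Token-level F1 between predicted and reference answer spans.
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

The corpus-level scores reported above are these per-example values averaged over the evaluation set (scaled to percentages).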

## Model description

More information needed

## Intended uses & limitations

More information needed
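
No usage notes are provided. A minimal inference sketch using the `transformers` question-answering pipeline; the repo id below is assumed from the uploader's namespace plus the model name and is not confirmed by this card:

```python
from transformers import pipeline

# Assumed repo id (uploader namespace + model name); verify on the Hub before use.
model_id = (
    "muhammadravi251001/"
    "fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-"
    "with-ITTL-without-freeze-LR-1e-05"
)

qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

result = qa(
    question="Apa ibu kota Indonesia?",          # "What is the capital of Indonesia?"
    context="Jakarta adalah ibu kota Indonesia.",
)
print(result["answer"], result["score"])
```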

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
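
The `total_train_batch_size` follows directly from the per-device batch size and gradient accumulation. A quick sanity check (plain arithmetic, not the training script itself):

```python
# Effective (total) train batch size = per-device batch size * accumulation steps.
train_batch_size = 8
gradient_accumulation_steps = 16

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching the value listed above
```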

### Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.3583        | 0.5   | 19   | 4.0413          | 7.7601      | 19.0346 |
| 6.3583        | 1.0   | 38   | 2.8990          | 13.4039     | 26.3293 |
| 4.0604        | 1.5   | 57   | 2.5086          | 21.3404     | 33.1133 |
| 4.0604        | 2.0   | 76   | 2.3326          | 24.6914     | 37.3153 |
| 4.0604        | 2.5   | 95   | 2.2257          | 27.3369     | 38.8270 |
| 2.4855        | 3.0   | 114  | 2.1022          | 31.9224     | 43.8833 |
| 2.4855        | 3.5   | 133  | 2.0109          | 33.1570     | 45.0028 |
| 2.1528        | 4.0   | 152  | 1.8668          | 35.6261     | 48.8805 |
| 2.1528        | 4.5   | 171  | 1.7791          | 38.9771     | 52.1137 |
| 2.1528        | 5.0   | 190  | 1.7014          | 42.1517     | 55.5969 |
| 1.8067        | 5.5   | 209  | 1.5955          | 44.4444     | 58.4357 |
| 1.8067        | 6.0   | 228  | 1.5192          | 48.3245     | 61.1827 |
| 1.8067        | 6.5   | 247  | 1.4530          | 50.7937     | 63.7891 |
| 1.5663        | 7.0   | 266  | 1.4101          | 52.3810     | 65.5927 |
| 1.5663        | 7.5   | 285  | 1.3799          | 53.0864     | 65.9441 |
| 1.3777        | 8.0   | 304  | 1.3490          | 54.1446     | 67.0120 |
| 1.3777        | 8.5   | 323  | 1.3281          | 55.0265     | 68.0820 |
| 1.3777        | 9.0   | 342  | 1.3247          | 55.0265     | 68.0493 |
| 1.3092        | 9.5   | 361  | 1.3147          | 55.3792     | 68.6187 |
| 1.3092        | 10.0  | 380  | 1.3123          | 55.7319     | 68.7642 |
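
The step counts in the table are consistent with 38 optimizer steps per epoch, with an evaluation every half epoch (19 steps). A rough back-of-the-envelope sketch; the implied dataset size is an estimate, not something stated in this card:

```python
total_steps = 380            # final step in the table
num_epochs = 10              # from the hyperparameters
total_train_batch_size = 128

steps_per_epoch = total_steps // num_epochs
print(steps_per_epoch)  # 38

# Implies roughly 38 * 128 = 4864 training examples per epoch
# (an upper-bound estimate; the last accumulated batch may be smaller).
approx_examples = steps_per_epoch * total_train_batch_size
print(approx_examples)  # 4864
```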

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2