---
license: mit
tags:
  - generated_from_trainer
metrics:
  - f1
model-index:
  - name: >-
      fine-tuned-DatasetQAS-Squad-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05
    results: []
---

# fine-tuned-DatasetQAS-Squad-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the Squad-ID dataset (per the model name; the auto-generated card did not record a dataset identifier). It achieves the following results on the evaluation set:

- Loss: 1.5296
- Exact Match: 48.5657
- F1: 64.5763
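A minimal usage sketch for extractive question answering with this checkpoint. The hub repository id is an assumption inferred from this card's title and may differ:

```python
from transformers import pipeline

# Assumed hub id, taken from this card's title; adjust if the actual repo differs.
model_id = (
    "muhammadravi251001/"
    "fine-tuned-DatasetQAS-Squad-ID-with-indobert-base-uncased-"
    "with-ITTL-without-freeze-LR-1e-05"
)

# Build a question-answering pipeline from the fine-tuned checkpoint.
qa = pipeline("question-answering", model=model_id)

# Indonesian example question and context (illustrative only).
result = qa(
    question="Siapa presiden pertama Indonesia?",
    context="Soekarno adalah presiden pertama Republik Indonesia.",
)
print(result["answer"])
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys, where `start`/`end` are character offsets into the context.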

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
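Note that `total_train_batch_size` is derived rather than set directly; a quick sketch of how it follows from the per-device batch size and gradient accumulation:

```python
# Effective (total) train batch size = per-device batch size
# multiplied by the number of gradient accumulation steps.
train_batch_size = 4
gradient_accumulation_steps = 32

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching the value listed above
```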

### Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 2.0185        | 0.5   | 463  | 1.8556          | 39.4128     | 54.9486 |
| 1.8481        | 1.0   | 926  | 1.6794          | 43.1985     | 58.7955 |
| 1.6413        | 1.5   | 1389 | 1.6081          | 45.2091     | 61.6638 |
| 1.611         | 2.0   | 1852 | 1.5670          | 45.9241     | 62.7351 |
| 1.4929        | 2.5   | 2315 | 1.5571          | 46.5298     | 63.3878 |
| 1.5119        | 3.0   | 2778 | 1.5222          | 47.2785     | 64.2211 |
| 1.3955        | 3.5   | 3241 | 1.5285          | 47.6235     | 64.2546 |
| 1.3643        | 4.0   | 3704 | 1.5133          | 47.9179     | 64.1430 |
| 1.3277        | 4.5   | 4167 | 1.5223          | 47.8927     | 64.3061 |
| 1.3058        | 5.0   | 4630 | 1.5126          | 48.6498     | 64.5757 |
| 1.2847        | 5.5   | 5093 | 1.5154          | 48.4479     | 64.5972 |
| 1.1984        | 6.0   | 5556 | 1.5289          | 48.4815     | 64.4181 |
| 1.1817        | 6.5   | 6019 | 1.5277          | 48.4395     | 64.7923 |
| 1.2203        | 7.0   | 6482 | 1.5134          | 48.5404     | 64.5935 |
| 1.1492        | 7.5   | 6945 | 1.5412          | 48.6330     | 64.6696 |
| 1.1567        | 8.0   | 7408 | 1.5296          | 48.5657     | 64.5763 |

### Framework versions

- Transformers 4.27.4
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2