---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_deit_tiny_rms_001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7733333333333333
---

# smids_3x_deit_tiny_rms_001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 2.2714
- Accuracy: 0.7733

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
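
The linear schedule with 10% warmup can be sketched as a simple function of the optimizer step. This is a minimal stand-alone illustration of how the learning rate evolves, not the trainer's actual code; the total of 11,250 steps is taken from the results table (50 epochs × 225 steps per epoch).

```python
def linear_schedule_with_warmup(step, base_lr=0.001, total_steps=11250, warmup_ratio=0.1):
    """Learning rate at a given optimizer step: linear warmup from 0 to
    base_lr over the first warmup_ratio of training, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 1125 steps for this run
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_schedule_with_warmup(1125))   # peak lr at end of warmup: 0.001
print(linear_schedule_with_warmup(11250))  # decayed to 0.0 at end of training
```

The peak learning rate (0.001) is reached at step 1,125 and decays linearly to zero over the remaining 90% of training.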

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8968        | 1.0   | 225   | 0.7639          | 0.6083   |
| 0.8626        | 2.0   | 450   | 0.7483          | 0.6217   |
| 0.7883        | 3.0   | 675   | 0.8572          | 0.525    |
| 0.7849        | 4.0   | 900   | 1.5683          | 0.415    |
| 0.8414        | 5.0   | 1125  | 0.7843          | 0.605    |
| 0.7458        | 6.0   | 1350  | 0.6775          | 0.6783   |
| 0.7372        | 7.0   | 1575  | 0.7834          | 0.63     |
| 0.7187        | 8.0   | 1800  | 0.6584          | 0.6983   |
| 0.6863        | 9.0   | 2025  | 0.6503          | 0.6817   |
| 0.6631        | 10.0  | 2250  | 0.6660          | 0.6967   |
| 0.6649        | 11.0  | 2475  | 0.6379          | 0.725    |
| 0.6942        | 12.0  | 2700  | 0.6307          | 0.7233   |
| 0.6966        | 13.0  | 2925  | 0.6415          | 0.735    |
| 0.5939        | 14.0  | 3150  | 0.6031          | 0.7383   |
| 0.6282        | 15.0  | 3375  | 0.6088          | 0.7217   |
| 0.602         | 16.0  | 3600  | 0.6243          | 0.7367   |
| 0.5204        | 17.0  | 3825  | 0.6588          | 0.7317   |
| 0.5969        | 18.0  | 4050  | 0.5604          | 0.7533   |
| 0.5806        | 19.0  | 4275  | 0.5450          | 0.7517   |
| 0.5391        | 20.0  | 4500  | 0.5377          | 0.7517   |
| 0.5038        | 21.0  | 4725  | 0.5743          | 0.7633   |
| 0.5713        | 22.0  | 4950  | 0.5280          | 0.7683   |
| 0.5554        | 23.0  | 5175  | 0.6486          | 0.74     |
| 0.5176        | 24.0  | 5400  | 0.5324          | 0.775    |
| 0.495         | 25.0  | 5625  | 0.5436          | 0.7867   |
| 0.5361        | 26.0  | 5850  | 0.5289          | 0.7717   |
| 0.4994        | 27.0  | 6075  | 0.5964          | 0.765    |
| 0.4408        | 28.0  | 6300  | 0.5977          | 0.755    |
| 0.4143        | 29.0  | 6525  | 0.5955          | 0.77     |
| 0.4019        | 30.0  | 6750  | 0.5477          | 0.79     |
| 0.3202        | 31.0  | 6975  | 0.6202          | 0.7733   |
| 0.2909        | 32.0  | 7200  | 0.6279          | 0.7667   |
| 0.3341        | 33.0  | 7425  | 0.6209          | 0.7933   |
| 0.3201        | 34.0  | 7650  | 0.6151          | 0.7883   |
| 0.3077        | 35.0  | 7875  | 0.6910          | 0.79     |
| 0.2136        | 36.0  | 8100  | 0.8074          | 0.7567   |
| 0.1784        | 37.0  | 8325  | 0.7942          | 0.785    |
| 0.2068        | 38.0  | 8550  | 0.8891          | 0.78     |
| 0.1962        | 39.0  | 8775  | 0.8894          | 0.775    |
| 0.1531        | 40.0  | 9000  | 1.1161          | 0.7783   |
| 0.0708        | 41.0  | 9225  | 1.1305          | 0.7867   |
| 0.0729        | 42.0  | 9450  | 1.2984          | 0.755    |
| 0.0781        | 43.0  | 9675  | 1.3627          | 0.7867   |
| 0.0522        | 44.0  | 9900  | 1.6222          | 0.7683   |
| 0.0281        | 45.0  | 10125 | 1.8871          | 0.7683   |
| 0.0088        | 46.0  | 10350 | 2.0271          | 0.7767   |
| 0.0017        | 47.0  | 10575 | 2.0484          | 0.7717   |
| 0.002         | 48.0  | 10800 | 2.2088          | 0.7767   |
| 0.0003        | 49.0  | 11025 | 2.2142          | 0.7717   |
| 0.0001        | 50.0  | 11250 | 2.2714          | 0.7733   |
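
Note that the reported final checkpoint is not the run's best: validation loss bottoms out at 0.5280 (epoch 22) and accuracy peaks at 0.7933 (epoch 33), while validation loss climbs steadily afterwards as training loss approaches zero — a typical overfitting pattern. A small sketch of how to pick the better checkpoints from the log, using a few representative rows transcribed from the table above:

```python
# (epoch, validation_loss, accuracy) triples transcribed from the results table
# (a subset of rows, for brevity)
results = [
    (20, 0.5377, 0.7517),
    (22, 0.5280, 0.7683),
    (33, 0.6209, 0.7933),
    (50, 2.2714, 0.7733),
]

# Best epoch among these rows by each criterion
best_loss_epoch = min(results, key=lambda r: r[1])[0]
best_acc_epoch = max(results, key=lambda r: r[2])[0]
print(best_loss_epoch)  # 22 — lowest validation loss in this subset
print(best_acc_epoch)   # 33 — highest accuracy in this subset
```

Re-exporting one of those earlier checkpoints (e.g. via `load_best_model_at_end` in a future run) would likely give a stronger model than the epoch-50 weights.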

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2