---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_tiny_sgd_001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.835
---

# smids_1x_deit_tiny_sgd_001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.4122
- Accuracy: 0.835
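
For quick experimentation, the checkpoint can be loaded with the `transformers` image-classification pipeline. This is a minimal usage sketch, not part of the original training setup: the repo id `hkivancoral/smids_1x_deit_tiny_sgd_001_fold4` is inferred from this card's name, and `slide_patch.png` is a placeholder image path.

```python
from transformers import pipeline
from PIL import Image

# Load the fine-tuned checkpoint from the Hub.
# Assumption: the repo id matches this card's namespace and model name.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_1x_deit_tiny_sgd_001_fold4",
)

# "slide_patch.png" is a placeholder; use any image of the same kind as
# the imagefolder dataset the model was fine-tuned on.
image = Image.open("slide_patch.png")
predictions = classifier(image)  # list of {"label": ..., "score": ...} dicts
print(predictions)
```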

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
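
For reference, a minimal sketch of how the settings above map onto `transformers.TrainingArguments` (as used with the `Trainer` API) is shown below. The output directory and evaluation strategy are assumptions for illustration, not values taken from this card.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above.
# "deit_tiny_smids_fold4" is a placeholder output directory.
training_args = TrainingArguments(
    output_dir="deit_tiny_smids_fold4",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # Assumption: per-epoch evaluation, matching the validation results table below.
    evaluation_strategy="epoch",
    # The Adam betas/epsilon listed above correspond to the Trainer's default optimizer settings.
)
```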

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0732        | 1.0   | 75   | 1.0625          | 0.4117   |
| 0.95          | 2.0   | 150  | 0.9470          | 0.52     |
| 0.8388        | 3.0   | 225  | 0.8541          | 0.6133   |
| 0.8107        | 4.0   | 300  | 0.7814          | 0.66     |
| 0.7296        | 5.0   | 375  | 0.7116          | 0.6933   |
| 0.6565        | 6.0   | 450  | 0.6560          | 0.7233   |
| 0.6075        | 7.0   | 525  | 0.6119          | 0.7367   |
| 0.5566        | 8.0   | 600  | 0.5801          | 0.76     |
| 0.5592        | 9.0   | 675  | 0.5568          | 0.7717   |
| 0.4945        | 10.0  | 750  | 0.5396          | 0.785    |
| 0.484         | 11.0  | 825  | 0.5228          | 0.79     |
| 0.4564        | 12.0  | 900  | 0.5098          | 0.7917   |
| 0.4689        | 13.0  | 975  | 0.5015          | 0.7917   |
| 0.4232        | 14.0  | 1050 | 0.4882          | 0.7967   |
| 0.4151        | 15.0  | 1125 | 0.4851          | 0.795    |
| 0.3646        | 16.0  | 1200 | 0.4743          | 0.8017   |
| 0.3676        | 17.0  | 1275 | 0.4658          | 0.8083   |
| 0.3612        | 18.0  | 1350 | 0.4603          | 0.8017   |
| 0.4051        | 19.0  | 1425 | 0.4555          | 0.81     |
| 0.3477        | 20.0  | 1500 | 0.4507          | 0.81     |
| 0.375         | 21.0  | 1575 | 0.4488          | 0.8017   |
| 0.3102        | 22.0  | 1650 | 0.4425          | 0.8083   |
| 0.3203        | 23.0  | 1725 | 0.4393          | 0.8117   |
| 0.3847        | 24.0  | 1800 | 0.4374          | 0.8133   |
| 0.3175        | 25.0  | 1875 | 0.4337          | 0.8133   |
| 0.3275        | 26.0  | 1950 | 0.4305          | 0.8183   |
| 0.2952        | 27.0  | 2025 | 0.4280          | 0.8167   |
| 0.3226        | 28.0  | 2100 | 0.4272          | 0.82     |
| 0.2919        | 29.0  | 2175 | 0.4254          | 0.82     |
| 0.3056        | 30.0  | 2250 | 0.4233          | 0.8233   |
| 0.2391        | 31.0  | 2325 | 0.4233          | 0.8233   |
| 0.3148        | 32.0  | 2400 | 0.4205          | 0.8267   |
| 0.2897        | 33.0  | 2475 | 0.4204          | 0.8267   |
| 0.2561        | 34.0  | 2550 | 0.4195          | 0.8267   |
| 0.2841        | 35.0  | 2625 | 0.4186          | 0.8283   |
| 0.2572        | 36.0  | 2700 | 0.4171          | 0.8267   |
| 0.2531        | 37.0  | 2775 | 0.4160          | 0.8267   |
| 0.2737        | 38.0  | 2850 | 0.4152          | 0.8333   |
| 0.276         | 39.0  | 2925 | 0.4146          | 0.8317   |
| 0.3158        | 40.0  | 3000 | 0.4142          | 0.8317   |
| 0.2611        | 41.0  | 3075 | 0.4144          | 0.8367   |
| 0.2512        | 42.0  | 3150 | 0.4134          | 0.835    |
| 0.2782        | 43.0  | 3225 | 0.4133          | 0.835    |
| 0.2613        | 44.0  | 3300 | 0.4133          | 0.8367   |
| 0.2656        | 45.0  | 3375 | 0.4131          | 0.835    |
| 0.2575        | 46.0  | 3450 | 0.4126          | 0.835    |
| 0.2475        | 47.0  | 3525 | 0.4125          | 0.8367   |
| 0.2893        | 48.0  | 3600 | 0.4124          | 0.835    |
| 0.2785        | 49.0  | 3675 | 0.4123          | 0.835    |
| 0.2483        | 50.0  | 3750 | 0.4122          | 0.835    |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0