swinv2-tiny-patch4-window8-256-dmae-humeda-DAV34

This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3276
  • Accuracy: 0.6333
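
As a usage reference, here is a minimal inference sketch with the Transformers image-classification API, assuming the checkpoint is the published RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV34 repository; the input file name is hypothetical, and the processor handles resizing to 256×256:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV34"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```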

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch reproducing them with TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 40
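
These settings map directly onto transformers.TrainingArguments. A minimal sketch reproducing them (output_dir is an assumption; dataset loading and Trainer wiring omitted):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-dmae-humeda-DAV34",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective train batch size
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=40,
)
```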

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.8889  | 6    | 1.4254          | 0.45     |
| 5.9772        | 1.8889  | 12   | 1.3083          | 0.45     |
| 5.9772        | 2.8889  | 18   | 1.3032          | 0.4667   |
| 5.1141        | 3.8889  | 24   | 1.1521          | 0.5667   |
| 5.1141        | 4.8889  | 30   | 1.2171          | 0.4667   |
| 3.8959        | 5.8889  | 36   | 1.1281          | 0.4833   |
| 3.8959        | 6.8889  | 42   | 0.9969          | 0.6167   |
| 2.9663        | 7.8889  | 48   | 0.9814          | 0.65     |
| 2.9663        | 8.8889  | 54   | 0.8797          | 0.6333   |
| 2.416         | 9.8889  | 60   | 0.8956          | 0.6667   |
| 2.416         | 10.8889 | 66   | 0.8364          | 0.7      |
| 1.9336        | 11.8889 | 72   | 0.9800          | 0.65     |
| 1.9336        | 12.8889 | 78   | 0.8707          | 0.6833   |
| 1.5631        | 13.8889 | 84   | 0.9331          | 0.6333   |
| 1.5631        | 14.8889 | 90   | 1.0113          | 0.6333   |
| 1.1295        | 15.8889 | 96   | 1.0988          | 0.6167   |
| 1.1295        | 16.8889 | 102  | 1.0197          | 0.6833   |
| 1.1454        | 17.8889 | 108  | 1.2170          | 0.6167   |
| 1.1454        | 18.8889 | 114  | 1.1182          | 0.6667   |
| 0.8852        | 19.8889 | 120  | 1.1026          | 0.6833   |
| 0.8852        | 20.8889 | 126  | 1.0868          | 0.6333   |
| 0.7881        | 21.8889 | 132  | 1.1674          | 0.6333   |
| 0.7881        | 22.8889 | 138  | 1.1763          | 0.6667   |
| 0.7913        | 23.8889 | 144  | 1.3433          | 0.6333   |
| 0.7913        | 24.8889 | 150  | 1.1228          | 0.7      |
| 0.667         | 25.8889 | 156  | 1.2828          | 0.6667   |
| 0.667         | 26.8889 | 162  | 1.2373          | 0.6833   |
| 0.6299        | 27.8889 | 168  | 1.2951          | 0.6667   |
| 0.6299        | 28.8889 | 174  | 1.3410          | 0.6333   |
| 0.5409        | 29.8889 | 180  | 1.1852          | 0.7      |
| 0.5409        | 30.8889 | 186  | 1.4286          | 0.6167   |
| 0.6085        | 31.8889 | 192  | 1.2376          | 0.65     |
| 0.6085        | 32.8889 | 198  | 1.2249          | 0.6667   |
| 0.562         | 33.8889 | 204  | 1.3640          | 0.6333   |
| 0.562         | 34.8889 | 210  | 1.4234          | 0.6333   |
| 0.4543        | 35.8889 | 216  | 1.3489          | 0.6333   |
| 0.4543        | 36.8889 | 222  | 1.3273          | 0.6333   |
| 0.4708        | 37.8889 | 228  | 1.3215          | 0.6333   |
| 0.4708        | 38.8889 | 234  | 1.3277          | 0.6333   |
| 0.5217        | 39.8889 | 240  | 1.3276          | 0.6333   |

Framework versions

  • Transformers 4.47.1
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0