---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8727272727272727
---

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4382
- Accuracy: 0.8727

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
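For reference, the hyperparameters above correspond to a standard `transformers` `Trainer` setup along the lines of the sketch below. This is not the exact script used for this run: the image directory, the 90/10 validation split, and the `output_dir` are assumptions added for illustration, while the learning rate, batch sizes, gradient accumulation, scheduler, warmup ratio, seed, and epoch count follow the values listed above (the Adam betas and epsilon are the `Trainer` defaults).

```python
# Reproduction sketch only: the data directory, validation split, and output_dir
# below are placeholders/assumptions, not values recorded in this card.
import numpy as np
import torch
import evaluate
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

checkpoint = "microsoft/swin-tiny-patch4-window7-224"
dataset = load_dataset("imagefolder", data_dir="path/to/images")    # placeholder path
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)  # hypothetical split
labels = splits["train"].features["label"].names

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head
)

def preprocess(batch):
    # Resize/normalize PIL images to the 224x224 pixel_values the model expects
    batch["pixel_values"] = processor(
        [img.convert("RGB") for img in batch["image"]], return_tensors="pt"
    )["pixel_values"]
    del batch["image"]
    return batch

splits = splits.with_transform(preprocess)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["label"] for ex in examples]),
    }

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    preds = np.argmax(eval_pred.predictions, axis=1)
    return accuracy.compute(predictions=preds, references=eval_pred.label_ids)

# Mirrors the hyperparameters listed above; Adam betas/epsilon are Trainer defaults.
args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",
    learning_rate=5e-5,
    per_device_train_batch_size=32,   # x 4 accumulation steps = total batch size 128
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    eval_strategy="epoch",
    remove_unused_columns=False,      # keep the raw "image" column for the transform
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    data_collator=collate_fn,
    compute_metrics=compute_metrics,
)
trainer.train()
```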
### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.9032  | 7    | 2.3727          | 0.2      |
| 2.3966        | 1.9355  | 15   | 2.2910          | 0.3182   |
| 2.3131        | 2.9677  | 23   | 2.1218          | 0.4091   |
| 2.072         | 4.0     | 31   | 1.8349          | 0.4545   |
| 2.072         | 4.9032  | 38   | 1.4635          | 0.5364   |
| 1.5528        | 5.9355  | 46   | 1.1036          | 0.6636   |
| 1.0472        | 6.9677  | 54   | 0.9273          | 0.7273   |
| 0.7989        | 8.0     | 62   | 0.8008          | 0.7909   |
| 0.7989        | 8.9032  | 69   | 0.7359          | 0.7818   |
| 0.604         | 9.9355  | 77   | 0.7283          | 0.7909   |
| 0.5228        | 10.9677 | 85   | 0.5897          | 0.8364   |
| 0.4734        | 12.0    | 93   | 0.6503          | 0.8182   |
| 0.3987        | 12.9032 | 100  | 0.5785          | 0.8273   |
| 0.3987        | 13.9355 | 108  | 0.6091          | 0.8182   |
| 0.3742        | 14.9677 | 116  | 0.5278          | 0.8455   |
| 0.3588        | 16.0    | 124  | 0.5279          | 0.8545   |
| 0.3536        | 16.9032 | 131  | 0.5189          | 0.8364   |
| 0.3536        | 17.9355 | 139  | 0.5036          | 0.8545   |
| 0.331         | 18.9677 | 147  | 0.5327          | 0.8364   |
| 0.2836        | 20.0    | 155  | 0.4717          | 0.8636   |
| 0.2785        | 20.9032 | 162  | 0.4598          | 0.8545   |
| 0.2439        | 21.9355 | 170  | 0.4783          | 0.8545   |
| 0.2439        | 22.9677 | 178  | 0.4948          | 0.8545   |
| 0.2779        | 24.0    | 186  | 0.4884          | 0.8455   |
| 0.2167        | 24.9032 | 193  | 0.5084          | 0.8545   |
| 0.2164        | 25.9355 | 201  | 0.4715          | 0.8545   |
| 0.2164        | 26.9677 | 209  | 0.5503          | 0.8273   |
| 0.2342        | 28.0    | 217  | 0.4980          | 0.8273   |
| 0.216         | 28.9032 | 224  | 0.4241          | 0.8545   |
| 0.1986        | 29.9355 | 232  | 0.4466          | 0.8545   |
| 0.1919        | 30.9677 | 240  | 0.4558          | 0.8636   |
| 0.1919        | 32.0    | 248  | 0.4390          | 0.8636   |
| 0.1958        | 32.9032 | 255  | 0.4379          | 0.8545   |
| 0.1693        | 33.9355 | 263  | 0.4424          | 0.8455   |
| 0.2158        | 34.9677 | 271  | 0.4524          | 0.8364   |
| 0.2158        | 36.0    | 279  | 0.4388          | 0.8545   |
| 0.1578        | 36.9032 | 286  | 0.4327          | 0.8545   |
| 0.1866        | 37.9355 | 294  | 0.4528          | 0.8455   |
| 0.1664        | 38.9677 | 302  | 0.4533          | 0.8455   |
| 0.1757        | 40.0    | 310  | 0.4492          | 0.8545   |
| 0.1757        | 40.9032 | 317  | 0.4418          | 0.8636   |
| 0.1542        | 41.9355 | 325  | 0.4412          | 0.8636   |
| 0.144         | 42.9677 | 333  | 0.4438          | 0.8545   |
| 0.1647        | 44.0    | 341  | 0.4411          | 0.8636   |
| 0.1647        | 44.9032 | 348  | 0.4383          | 0.8636   |
| 0.1418        | 45.1613 | 350  | 0.4382          | 0.8727   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
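## How to use

A minimal inference sketch is shown below. The Hub repository id and the example image path are placeholders (the card does not record where the fine-tuned weights are published); the predicted labels come from whatever class folders were used in the training imagefolder.

```python
# Minimal inference sketch. The repository id below is a placeholder; substitute the
# actual Hub id (or a local path) where this fine-tuned checkpoint is stored.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-username/swin-tiny-patch4-window7-224-finetuned-eurosat",  # placeholder
)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
for prediction in classifier(image):
    print(f'{prediction["label"]}: {prediction["score"]:.3f}')
```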