# deta-swin-large
This model is a fine-tuned version of [jozhang97/deta-swin-large](https://huggingface.co/jozhang97/deta-swin-large) on an unspecified object-detection dataset with the classes Ball, Goalkeeper, Player, and Referee. It achieves the following results on the evaluation set:
- Loss: 7.7323
- mAP: 0.448
- mAP@50: 0.6146
- mAP@75: 0.4994
- mAP (small): 0.4453
- mAP (medium): 0.5526
- mAP (large): -1.0
- mAR@1: 0.286
- mAR@10: 0.4045
- mAR@100: 0.5073
- mAR (small): 0.5039
- mAR (medium): 0.571
- mAR (large): -1.0
- mAP (Ball): 0.2858
- mAR@100 (Ball): 0.4068
- mAP (Goalkeeper): 0.7008
- mAR@100 (Goalkeeper): 0.7649
- mAP (Player): 0.8053
- mAR@100 (Player): 0.8576
- mAP (Referee): 0.0
- mAR@100 (Referee): 0.0

Following the COCO convention, a value of -1.0 means the evaluation set contains no ground-truth boxes in that size bucket.
## Model description
More information needed
## Intended uses & limitations
More information needed
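The card leaves usage undocumented. As a minimal sketch, the checkpoint can be loaded through the standard Transformers object-detection API; the image path and the 0.5 score threshold below are placeholders, not values from the original card:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Load the fine-tuned checkpoint and its image processor.
processor = AutoImageProcessor.from_pretrained("theButcher22/deta-swin-large")
model = AutoModelForObjectDetection.from_pretrained("theButcher22/deta-swin-large")
model.eval()

# "frame.jpg" is a placeholder for any input image.
image = Image.open("frame.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to (score, label, box) detections in absolute pixels.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]

for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```

Note from the evaluation results above: the Referee class scores 0.0 mAP/mAR, so downstream use should not rely on referee detections.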
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation, `adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
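The listed values map directly onto Transformers' `TrainingArguments`. As an illustrative reconstruction only (the `output_dir` is hypothetical, and any argument not listed above may have differed in the original run):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deta-swin-large-finetuned",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW; betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```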
### Training results
Training Loss | Epoch | Step | Validation Loss | mAP | mAP@50 | mAP@75 | mAP (small) | mAP (medium) | mAP (large) | mAR@1 | mAR@10 | mAR@100 | mAR (small) | mAR (medium) | mAR (large) | mAP (Ball) | mAR@100 (Ball) | mAP (Goalkeeper) | mAR@100 (Goalkeeper) | mAP (Player) | mAR@100 (Player) | mAP (Referee) | mAR@100 (Referee) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
296.278 | 1.0 | 149 | 62.3890 | 0.0999 | 0.1535 | 0.1216 | 0.0976 | 0.1516 | -1.0 | 0.0078 | 0.0616 | 0.1808 | 0.1808 | 0.2414 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3997 | 0.7232 | 0.0 | 0.0 |
74.5246 | 2.0 | 298 | 25.7949 | 0.1643 | 0.236 | 0.1843 | 0.1638 | 0.2281 | -1.0 | 0.0162 | 0.1175 | 0.3643 | 0.3753 | 0.2564 | -1.0 | 0.0 | 0.0 | 0.0189 | 0.6969 | 0.6384 | 0.7605 | 0.0 | 0.0 |
23.5211 | 3.0 | 447 | 16.6520 | 0.1984 | 0.25 | 0.2246 | 0.1971 | 0.2759 | -1.0 | 0.0111 | 0.2288 | 0.4151 | 0.4131 | 0.5609 | -1.0 | 0.0 | 0.0 | 0.0371 | 0.8114 | 0.7566 | 0.849 | 0.0 | 0.0 |
18.7573 | 4.0 | 596 | 13.6105 | 0.2271 | 0.3101 | 0.2518 | 0.2264 | 0.2742 | -1.0 | 0.0608 | 0.2627 | 0.4917 | 0.4901 | 0.5426 | -1.0 | 0.1064 | 0.3159 | 0.0287 | 0.8139 | 0.7732 | 0.8368 | 0.0 | 0.0 |
14.4593 | 5.0 | 745 | 11.9880 | 0.2282 | 0.309 | 0.2486 | 0.2272 | 0.2805 | -1.0 | 0.0774 | 0.2571 | 0.4835 | 0.4812 | 0.531 | -1.0 | 0.1022 | 0.3023 | 0.028 | 0.7914 | 0.7824 | 0.8405 | 0.0 | 0.0 |
12.878 | 6.0 | 894 | 10.9109 | 0.3867 | 0.5529 | 0.418 | 0.3955 | 0.3944 | -1.0 | 0.2517 | 0.3723 | 0.4836 | 0.4837 | 0.526 | -1.0 | 0.1759 | 0.3349 | 0.6007 | 0.7629 | 0.7704 | 0.8367 | 0.0 | 0.0 |
11.4964 | 7.0 | 1043 | 10.0801 | 0.4098 | 0.5788 | 0.4432 | 0.4076 | 0.5423 | -1.0 | 0.2558 | 0.3796 | 0.4805 | 0.4774 | 0.5626 | -1.0 | 0.2016 | 0.3227 | 0.6587 | 0.7622 | 0.7792 | 0.8372 | 0.0 | 0.0 |
10.938 | 8.0 | 1192 | 9.6962 | 0.4168 | 0.5909 | 0.4511 | 0.4136 | 0.5736 | -1.0 | 0.2622 | 0.384 | 0.4829 | 0.479 | 0.5953 | -1.0 | 0.2148 | 0.3045 | 0.6771 | 0.7944 | 0.7753 | 0.8327 | 0.0 | 0.0 |
10.2185 | 9.0 | 1341 | 9.2201 | 0.4414 | 0.6237 | 0.4939 | 0.4461 | 0.4126 | -1.0 | 0.2975 | 0.4088 | 0.5078 | 0.5043 | 0.5659 | -1.0 | 0.3159 | 0.4023 | 0.6628 | 0.7886 | 0.7866 | 0.8403 | 0.0 | 0.0 |
10.1417 | 10.0 | 1490 | 8.8414 | 0.4258 | 0.5998 | 0.4616 | 0.432 | 0.4459 | -1.0 | 0.2829 | 0.4079 | 0.5097 | 0.5112 | 0.5104 | -1.0 | 0.2659 | 0.4256 | 0.6382 | 0.76 | 0.799 | 0.8533 | 0.0 | 0.0 |
9.6417 | 11.0 | 1639 | 8.9382 | 0.4158 | 0.5798 | 0.4758 | 0.4134 | 0.5169 | -1.0 | 0.2758 | 0.401 | 0.5016 | 0.499 | 0.5324 | -1.0 | 0.2131 | 0.3791 | 0.6644 | 0.7857 | 0.7857 | 0.8414 | 0.0 | 0.0 |
9.3618 | 12.0 | 1788 | 8.5653 | 0.4349 | 0.6074 | 0.5026 | 0.4337 | 0.5436 | -1.0 | 0.2857 | 0.4042 | 0.5052 | 0.5022 | 0.5646 | -1.0 | 0.2359 | 0.375 | 0.7111 | 0.8 | 0.7927 | 0.8458 | 0.0 | 0.0 |
9.1601 | 13.0 | 1937 | 8.5162 | 0.4346 | 0.6159 | 0.4669 | 0.4332 | 0.5417 | -1.0 | 0.2803 | 0.3997 | 0.4998 | 0.497 | 0.5623 | -1.0 | 0.2628 | 0.3795 | 0.6862 | 0.7778 | 0.7892 | 0.842 | 0.0 | 0.0 |
8.9938 | 14.0 | 2086 | 8.2598 | 0.4403 | 0.6165 | 0.4962 | 0.4379 | 0.5494 | -1.0 | 0.2803 | 0.4002 | 0.5017 | 0.4986 | 0.5674 | -1.0 | 0.2661 | 0.3773 | 0.7019 | 0.7806 | 0.7931 | 0.8489 | 0.0 | 0.0 |
8.7258 | 15.0 | 2235 | 8.2129 | 0.4507 | 0.6142 | 0.5192 | 0.4505 | 0.5363 | -1.0 | 0.2931 | 0.4137 | 0.5149 | 0.5123 | 0.5611 | -1.0 | 0.3076 | 0.4114 | 0.7154 | 0.8028 | 0.7796 | 0.8453 | 0.0 | 0.0 |
8.1655 | 16.0 | 2384 | 8.1459 | 0.4436 | 0.6071 | 0.4985 | 0.4455 | 0.4346 | -1.0 | 0.2891 | 0.4125 | 0.5136 | 0.5134 | 0.5115 | -1.0 | 0.3046 | 0.4091 | 0.6794 | 0.7946 | 0.7905 | 0.8507 | 0.0 | 0.0 |
8.1808 | 17.0 | 2533 | 8.1069 | 0.4426 | 0.6231 | 0.4971 | 0.4436 | 0.5167 | -1.0 | 0.2868 | 0.4078 | 0.5074 | 0.5057 | 0.5667 | -1.0 | 0.2985 | 0.4023 | 0.6942 | 0.7886 | 0.7776 | 0.8388 | 0.0 | 0.0 |
8.45 | 18.0 | 2682 | 7.9322 | 0.4443 | 0.6157 | 0.4739 | 0.4427 | 0.5757 | -1.0 | 0.2839 | 0.4082 | 0.5104 | 0.5066 | 0.5963 | -1.0 | 0.2715 | 0.4023 | 0.704 | 0.7833 | 0.8018 | 0.856 | 0.0 | 0.0 |
8.0344 | 19.0 | 2831 | 7.9556 | 0.4464 | 0.6133 | 0.4963 | 0.4456 | 0.5388 | -1.0 | 0.2866 | 0.4113 | 0.5129 | 0.5111 | 0.5584 | -1.0 | 0.2678 | 0.4093 | 0.722 | 0.7914 | 0.7956 | 0.851 | 0.0 | 0.0 |
7.9178 | 20.0 | 2980 | 7.7323 | 0.448 | 0.6146 | 0.4994 | 0.4453 | 0.5526 | -1.0 | 0.286 | 0.4045 | 0.5073 | 0.5039 | 0.571 | -1.0 | 0.2858 | 0.4068 | 0.7008 | 0.7649 | 0.8053 | 0.8576 | 0.0 | 0.0 |
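The card does not state how these metrics were computed, but the column names follow the standard COCO detection protocol, where -1.0 marks a size bucket with no ground-truth boxes. A minimal sketch of computing the same family of numbers with torchmetrics' `MeanAveragePrecision` (the boxes below are toy values, purely illustrative):

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

# Toy prediction/target pair in (xmin, ymin, xmax, ymax) format; real use
# would feed post-processed model detections and the dataset annotations.
preds = [{
    "boxes": torch.tensor([[10.0, 20.0, 50.0, 80.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([2]),
}]
target = [{
    "boxes": torch.tensor([[12.0, 22.0, 48.0, 78.0]]),
    "labels": torch.tensor([2]),
}]

metric = MeanAveragePrecision(box_format="xyxy", class_metrics=True)
metric.update(preds, target)
# Returns map, map_50, map_75, map_small/medium/large, mar_1/10/100,
# plus per-class values when class_metrics=True; absent buckets report -1.
print(metric.compute())
```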
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.1
- Tokenizers 0.21.0