codebases and pretraining on the FMoW dataset for 800 epochs. The results differ from their reported results, but are evaluated consistently with ours. *Reports the results from the SatMAE paper [13].
Semantic segmentation transfer. We use the SpaceNet v1 building segmentation dataset [53] to evaluate the semantic segmentation performance of contrastive and MAE-based pretraining methods. Prior methods relied on the PSANet [68] segmentation architecture, whereas Scale-MAE uses the UperNet [58] segmentation architecture, which is more common for ViT backbones. For a fair comparison, we also evaluate the current state-of-the-art SatMAE and ConvMAE methods with UperNet. Results are detailed in Table 4.
Method          Backbone   Seg. Model  mIoU
Sup. (Scratch)  ResNet50   PSANet      75.6
GASSL [3]       ResNet50   PSANet      78.5
Sup. (Scratch)  ViT-Large  PSANet      74.7
SatMAE [13]     ViT-Large  PSANet      78.1
Sup. (Scratch)  ViT-Large  UperNet     71.6
Vanilla MAE     ViT-Large  UperNet     77.9
SatMAE          ViT-Large  UperNet     78.0
ConvMAE         ViT-Large  UperNet     77.6
Scale-MAE       ViT-Large  UperNet     78.9
Table 4. Semantic segmentation results on SpaceNet v1. Scale-MAE outperforms other methods across backbones and segmentation architectures. Sup. (Scratch) indicates a supervised model trained from scratch (a randomly initialized network).
With the same pretraining settings, Scale-MAE outperforms SatMAE by 0.9 mIoU, ConvMAE by 1.3 mIoU, and a vanilla MAE by 1.0 mIoU. Scale-MAE also outperforms all other prior work, including GASSL [3], which SatMAE did not surpass on the mean Intersection over Union (mIoU) metric for semantic segmentation. Notably, the performance gap widens as the resolution of the input imagery becomes coarser, highlighting the absolute scale invariance introduced by our method.
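For reference, the mIoU metric reported throughout averages per-class intersection-over-union over all classes. A minimal NumPy sketch of the standard computation (the function name is ours):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union from flat integer label arrays."""
    # Build a num_classes x num_classes confusion matrix over all pixels,
    # ignoring any labels outside the valid class range.
    valid = (target >= 0) & (target < num_classes)
    idx = num_classes * target[valid].astype(np.int64) + pred[valid].astype(np.int64)
    conf = np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

    inter = np.diag(conf)                      # true positives per class
    union = conf.sum(0) + conf.sum(1) - inter  # pred area + gt area - overlap
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)
    return float(np.nanmean(iou))              # average over classes present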
In Figure 6, we compare SpaceNet v1 evaluations across downscaled images (at 50%, 75%, and 100% of the original image size) for Scale-MAE, SatMAE, and ConvMAE. As in the classification results, Scale-MAE maintains higher semantic segmentation performance than both methods, even on images with a coarser GSD. In fact, the performance gap grows at coarser GSDs: compared to the next-best-performing method, Scale-MAE is 0.9 mIoU higher at the native GSD, 1.2 mIoU higher at 75% relative GSD, and 1.7 mIoU higher at 50%.
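A coarser relative GSD can be simulated by resizing the input before inference. A minimal sketch, assuming bilinear interpolation is an acceptable stand-in for imagery reacquired at a coarser GSD (the resampling method is not specified here, and the function name is ours):

```python
import torch
import torch.nn.functional as F

def simulate_relative_gsd(images: torch.Tensor, rel_scale: float) -> torch.Tensor:
    """Downsample a batch of images to mimic a coarser GSD.

    images: float tensor of shape (B, C, H, W)
    rel_scale: 1.0 keeps the native size; 0.5 gives the "50%" setting
    """
    if rel_scale == 1.0:
        return images
    return F.interpolate(images, scale_factor=rel_scale,
                         mode="bilinear", align_corners=False)

# Evaluate the same segmentation model at each relative GSD:
# for scale in (1.0, 0.75, 0.5):
#     coarse = simulate_relative_gsd(batch, scale)
#     ...
```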
In Table 5, we further evaluate Scale-MAE, SatMAE, and ConvMAE across the SpaceNet v1, SpaceNet v2 [53], INRIA Aerial Image [44], and GID-15 [59] remote sensing datasets at native resolution. Scale-MAE outperforms both comparable methods across all benchmarks.
4.2. Ablations |
We ablate the key components of the Scale-MAE pretraining framework. For these experiments, we use a lightweight pretraining setting: we pretrain for 300 epochs on FMoW (rather than 800) with a ViT-Base encoder (rather than ViT-Large), and evaluate with kNN classification on RESISC-45 at 100% and 50% of its native GSD. We ablate the following key components. In Table 6, we find that the GSD positional encoder benefits both Scale-MAE and the vanilla MAE across resolutions. In Table 8, we see that the number of transformer layers can be reduced from 8, as in a vanilla MAE, to 3, which yields a performance improvement. Finally, Table 7 shows that the standard masking rate of 75% still appears optimal for Scale-MAE.
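To make the GSD positional encoder concrete, here is a minimal 1D sketch in the spirit of the GSDPE: the patch position is scaled by the ratio of the image's GSD to a reference GSD before entering the standard sine/cosine formula (a full implementation would use the 2D analogue for image patches; the function name and ref_gsd parameter are illustrative):

```python
import numpy as np

def gsd_sincos_pos_encoding(num_pos, dim, gsd, ref_gsd):
    """1D sinusoidal positional encoding with GSD-scaled positions.

    Scaling the patch index by gsd / ref_gsd ties the encoding to
    absolute ground distance rather than pixel index, so the same
    geographic extent maps to similar encodings at any resolution.
    """
    pos = np.arange(num_pos, dtype=np.float64)[:, None] * (gsd / ref_gsd)
    i = np.arange(dim // 2, dtype=np.float64)[None, :]
    angles = pos / (10000.0 ** (2 * i / dim))
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (num_pos, dim)
```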
[Figure 6 plot: mIoU vs. relative GSD (50%, 75%, 100%) for Scale-MAE, SatMAE, and ConvMAE.]
Figure 6. SpaceNet v1 evaluation across downscaled images for Scale-MAE, SatMAE, and ConvMAE. Scale-MAE maintains higher semantic segmentation performance, even with images of coarser GSD.
         SN1    SN2                           INR.   G15
         RI     SH     VE     PA     KH
Conv.    77.6   78.7   82.2   78.3   74.8     82.2   37.4
Sat.     78.0   81.9   86.6   80.3   76.1     83.0   44.3
Scale    78.9   82.2   87.4   81.1   77.1     84.2   46.2
Table 5. mIoU on semantic segmentation tasks. SN1/2: SpaceNet v1/2 (RI: Rio; SH: Shanghai, VE: Vegas, PA: Paris, KH: Khartoum); INR.: INRIA Aerial Image; G15: GID-15. Conv., Sat., and Scale are ConvMAE, SatMAE, and Scale-MAE.
Method        GSDPE   KNN 50%   KNN 100%
Vanilla MAE           72.8      77.8
Vanilla MAE   ✓       75.4      78.5
MAE + LP              75.3      79.6
Scale-MAE     ✓       78.1      80.7
Table 6. Ablation results indicating the importance of the GSDPE, as determined by kNN classification on RESISC-45 at a relative GSD of 50% and 100% of its native GSD. Using the GSDPE leads to better performance for both Scale-MAE and the vanilla MAE. MAE + LP denotes the vanilla MAE with the addition of our progressive Laplacian decoder.
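The kNN numbers in Tables 6 and 7 come from classifying frozen encoder features of RESISC-45 images by their nearest training neighbors. A minimal scikit-learn sketch, assuming features have already been extracted; the value of k and the cosine metric are our assumptions, as the exact protocol is not specified in this section:

```python
from sklearn.neighbors import KNeighborsClassifier

def knn_accuracy(train_feats, train_labels, test_feats, test_labels, k=20):
    """kNN classification accuracy on frozen encoder features.

    Features are compared by cosine similarity, a common choice for
    kNN probes of self-supervised representations.
    """
    clf = KNeighborsClassifier(n_neighbors=k, metric="cosine")
    clf.fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)
```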
In Table 9, we ablate the necessity of the low- and high-resolution reconstructions. Specifically, we test reconstructing the low-resolution image only, the high-resolution image only, and both.
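For intuition about these two targets: a one-level Laplacian decomposition separates an image into a downsampled low-frequency component and a full-resolution high-frequency residual, which is the flavor of low/high-resolution pair reconstructed here. A minimal sketch, assuming bilinear resampling; the scale factor and function name are ours, not Scale-MAE's exact configuration:

```python
import torch.nn.functional as F

def frequency_targets(img, low_scale=0.25):
    """One-level Laplacian split into low- and high-frequency targets.

    img: float tensor of shape (B, C, H, W).
    Returns a downsampled low-frequency image and the full-resolution
    high-frequency residual left after removing it.
    """
    h, w = img.shape[-2:]
    low = F.interpolate(img, scale_factor=low_scale,
                        mode="bilinear", align_corners=False)
    low_up = F.interpolate(low, size=(h, w),
                           mode="bilinear", align_corners=False)
    high = img - low_up          # high-frequency residual
    return low, high
```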
Mask Rate   KNN 50%   KNN 100%
70%         77.3      79.3
75%         78.1      80.7
80%         78.1      79.9
Table 7. Ablation results indicating that a 75% mask rate is optimal, as determined by kNN classification on RESISC-45 at a relative GSD of 50% and 100% of its native GSD.