with every other embedded instance in the training set. The
instance is classified correctly if the majority of its k nearest
neighbors share the class of the validation instance, and
incorrectly otherwise.
The reasoning behind the kNN classifier evaluation is
that a strong pretrained network will output semantically
grouped representations for unseen data of the same class.
This style of representation-quality evaluation appears in
other notable works [7, 9, 57]. In addition to using evaluation
datasets at different GSDs, we further test the multiscale
representations by creating multiple test sets for each dataset.
Since we cannot synthesize data at a finer GSD than the
provided ground truth, we only downsample the full-resolution
validation set to coarser GSDs at fixed percentages:
$X_{\text{val}}^{G\%}$, $G \in \{12.5, 25, 50, 100\}$.
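To make the protocol concrete, below is a minimal sketch of the majority-vote kNN evaluation and the fixed-percentage downsampling used to build the coarser-GSD test sets (with k = 20, the value used later in this section). The function names, the cosine-similarity neighbor metric over L2-normalized features, and bilinear downsampling are our illustrative assumptions, not details specified in the text.

import torch
import torch.nn.functional as F

def knn_accuracy(train_feats, train_labels, val_feats, val_labels, k=20):
    """Majority-vote kNN classification in the frozen encoder's embedding space.

    Assumption: neighbors are ranked by cosine similarity over L2-normalized
    features; the distance metric is not specified in the text.
    """
    train_feats = F.normalize(train_feats, dim=1)
    val_feats = F.normalize(val_feats, dim=1)
    sims = val_feats @ train_feats.T               # (N_val, N_train) similarities
    knn_idx = sims.topk(k, dim=1).indices          # k nearest training embeddings
    knn_labels = train_labels[knn_idx]             # (N_val, k) neighbor labels
    preds = knn_labels.mode(dim=1).values          # majority vote per instance
    return (preds == val_labels).float().mean().item()

def multiscale_val_sets(images, scales=(0.125, 0.25, 0.5, 1.0)):
    """Downsample full-resolution validation images (N, C, H, W) to coarser GSDs.

    Each scale G yields X_val^{G%}; e.g. G = 0.125 turns a 0.3m GSD into 2.4m.
    """
    return {
        g: images if g >= 1.0
        else F.interpolate(images, scale_factor=g, mode="bilinear",
                           align_corners=False)
        for g in scales
    }

Each downsampled validation set is then embedded by the frozen pretrained encoder and scored with knn_accuracy against the embedded training set.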
Our analysis primarily uses eight land-use classification
datasets: RESISC-45 [11], the UC Merced Land Use Dataset
[65], AiRound and CV-BrCT [43], MLRSNet [48], EuroSAT
[29], Optimal-31 [55], and WHU-RS19 [14]; SpaceNet v1 and
v2 [53] and Functional Map of the World [12] are used in
additional experiments. The datasets span a wide range of
GSDs: e.g., MLRSNet consists of data captured from aerial
platforms at 0.1m GSD, while RESISC-45 has imagery from
medium-resolution satellites at >30m GSD. In some cases,
the datasets present imagery at mixed, unspecified GSDs; in
these cases we assume an approximately constant GSD (see
the supplementary material for details). Furthermore, we
provide an expanded set of experiments with linear probing
and finetuning in the supplementary material.
                 Average Accuracy (%)
Dataset       Scale-MAE   SatMAE   ConvMAE
AiRound          63.2      57.8     59.7
CV-BrCT          69.7      66.2     68.4
EuroSAT          86.7      84.4     88.8
MLRSNet          81.7      75.0     79.5
OPTIMAL-31       65.5      55.7     61.7
RESISC           70.0      61.0     67.0
UC Merced        75.0      69.8     70.0
WHU-RS19         79.5      78.5     77.0
Table 1. Average accuracy across all GSDs (as in Figure 5).
Scale-MAE outperforms SatMAE on every dataset we experimented
with; with ViT-Large backbones, the average improvement across
all datasets is 5.6% over SatMAE and 2.4% over ConvMAE.
We run kNN classification with k = 20. Figure 5 shows
that Scale-MAE outperforms SatMAE and ConvMAE across
GSD scales in the different evaluation datasets and across
relative GSD scales within individual datasets. For example,
UC Merced has a native GSD of 0.3m, but evaluating at scales
[12.5%, 100%] provides an artificial GSD range of [0.3m,
2.4m], since downsampling to 12.5% of the original resolution
coarsens the effective GSD by a factor of 8. On this example,
Scale-MAE shows the largest performance gap at the 2.4m
GSD, with similar performance at 0.3m.
Across all other evaluation datasets and a wider range of
GSDs, Scale-MAE outperforms SatMAE and ConvMAE, and
its margin over both methods grows as the evaluation GSD
deviates further from the original GSD, indicating that
Scale-MAE learns representations that are more robust to
changes of scale in remote sensing imagery. We outperform
SatMAE by an average of 5.6% and ConvMAE by an average
of 2.4% across all resolutions and datasets (see Table 1).
UC Merced at 100% of the true GSD is the only evaluation
where SatMAE outperforms Scale-MAE. The supplementary
material contains an extensive table of kNN classification
results with varying k.
Linear probing and finetuning. We perform linear classification
on the RESISC-45 and FMoW-RGB datasets. For FMoW-RGB, we
fine-tune for 50 epochs using the same hyperparameter settings
as SatMAE [13]: a base learning rate of $5 \times 10^{-3}$ and
a weight decay of $5 \times 10^{-3}$. We do not use temporal
data for classification. For RESISC-45, we fine-tune for 100
epochs with a base learning rate of $4 \times 10^{-3}$, a
weight decay of $5 \times 10^{-3}$, and a global batch size of
256 across 2 GPUs. The learning rate on the backbone is
multiplied by a factor of 0.1. We use RandomResizedCrop for
augmentation. We train on 224Γ—224 images and evaluate on
256Γ—256 images because we found that evaluating at a higher
resolution improves the performance of all models. We report
both the performance of end-to-end fine-tuning and linear
probing with a frozen backbone. The linear probing setup is
the same as finetuning except that the learning rate is 0.1.
The results are shown in Table 2 and Table 3.
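As a concrete reading of the RESISC-45 recipe above, the sketch below wires up the stated augmentation, crop sizes, and the 0.1Γ— backbone learning-rate multiplier. The optimizer choice (AdamW) and the horizontal flip are illustrative assumptions; only the learning rates, weight decay, image sizes, and the 0.1Γ— multiplier come from the text.

import torch
from torchvision import transforms

# Training augmentation: RandomResizedCrop to 224x224, as stated above.
# The horizontal flip is an assumption, not stated in the text.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Evaluation at 256x256, which was found to improve all models.
eval_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

def build_optimizer(backbone, head, base_lr=4e-3, weight_decay=5e-3):
    """Parameter groups with the backbone learning rate scaled by 0.1.

    Assumption: AdamW; the optimizer is not named in the text.
    """
    param_groups = [
        {"params": backbone.parameters(), "lr": base_lr * 0.1},  # 0.1x backbone lr
        {"params": head.parameters(), "lr": base_lr},
    ]
    return torch.optim.AdamW(param_groups, weight_decay=weight_decay)

For the linear probe, the backbone would instead be frozen (backbone.requires_grad_(False)) and only the head trained, with the learning rate set to 0.1 as described above.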
Model          Backbone        Frozen/Finetune
Scale-MAE      ViT-Large       89.6/95.7
SatMAE [13]    ViT-Large       88.3/94.8
ConvMAE [21]   ConvViT-Large   81.2/95.0
MAE [26]       ViT-Large       88.9/93.3
Table 2. Transfer classification results on RESISC-45. Frozen
indicates a linear probe; Finetune indicates full end-to-end
finetuning of the entire model.
Model           Backbone        Top-1/Top-5
Scale-MAE       ViT-Large       77.9/94.3
SatMAE† [13]    ViT-Large       72.4/91.9
MAE [26]        ViT-Large       68.4/90.3
ConvMAE† [21]   ConvViT-Large   74.1/91.4
SatMAEβˆ— [13]    ViT-Large       77.8/-
GASSL [4]       ResNet-50       71.55/-
MoCo-V2 [27]    ResNet-50       64.34/-
Table 3. Full finetuning results on FMoW-RGB. †: We reproduce
SatMAE and ConvMAE by taking their publicly available