of MAEs as a pretraining approach for passive and active
remote sensing imagery. Their method introduced flexible
“adapters” which could be used interchangeably with an en-
coder for a set of input imagery modes. Cong et al. [13]
introduced the SatMAE, which used temporal and spectral
metadata in a positional encoding to encode spatio-temporal
relationships in the data. The temporal metadata comprises the year, month, and hour: the year captures long-term change, the month captures seasonal and weather variation, and the hour captures the time of day. Further, Liu et al. [41] and Ibañez et al. [32] have shown that MAE architectures can
be used for band selection in hyperspectral remote sensing
images, significantly reducing data redundancy while main-
taining high classification accuracy. Scale-MAE leverages
inherent absolute scale information present in
scale-dependent domains as a way to learn robust, multiscale
features that reduce data usage downstream.
Super-resolution Super-resolution has proven effective at improving accuracy on remote sensing images, where objects of interest are often extremely small [51].
Previous works have aimed to learn continuous implicit rep-
resentations for images at arbitrary resolutions to aid the
super-resolution task. These representations are used to
upsample the images either to specific scales [38] or to ar-
bitrary resolutions [10, 31, 61]. Most super-resolution work
aims to increase the resolution of the input image, whereas
Scale-MAE produces both higher and lower resolution im-
ages. There is some work on super-resolution for satellite
imagery, but much of this work is focused on synthetically
creating high-resolution datasets for use with models trained
specifically for high-resolution data [28, 35]. Scale-MAE,
however, utilizes super-resolution as a means to obtain mul-
tiscale representations during pretraining.
Multiscale Features Because images can contain objects
of many different pixel resolutions, the vision community has
proposed many methods to extract multiscale features. These
include spatial pyramids [6, 34, 36, 50] and dense sampling
of windows [33, 62, 63]. These approaches have been com-
bined by methods such as [19], in which dense histogram-
of-gradient features are computed for each feature pyramid
level. Rather than using classical computer vision techniques
to extract multiscale features, convolutional neural networks
have been used to build deep multiscale features. CNNs
with subsampling layers inherently build feature pyramids, a
property exploited explicitly by models such as the Feature
Pyramid Network and the Single-Shot Detector, amongst others [23, 39, 40]. Recently, this multiscale idea has been
extended to vision transformers by [18], who show that this
architecture improves various video recognition and image
classification tasks, as well as by [21, 67], which propose various hybrid CNN-MAE architectures that yield multiscale features during MAE pretraining. Different from these
works, Scale-MAE uses a Laplacian pyramid decoder as a
way to force an encoder to learn multiscale features with the
ViT architecture.
3. Scale-MAE
This section describes the Scale-MAE pretraining frame-
work as illustrated in Figure 2. Scale-MAE is a self-
supervised pretraining framework based on the Masked Au-
toencoder (MAE) [26]. Scale-MAE makes two contribu-
tions to the MAE framework. Standard MAE-based methods
use absolute or relative positional encodings to inform the
ViT of the position of the unmasked components, where
an image at resolution $r$ will have the same positional en-
codings regardless of the image content. Scale-MAE in-
troduces a Ground Sample Distance (GSD) based positional encoding that scales in proportion to the land area covered by an image, regardless of the image resolution. In
addition, Scale-MAE introduces the Laplacian-pyramid de-
coder to the MAE framework to encourage the network to
learn multiscale representations. Embeddings from a ViT
encoder are decoded to a lower resolution image that cap-
tures the lower frequency information and a higher resolu-
tion image that captures the high-frequency information. We
formalize Scale-MAE in the following subsections by first
specifying the necessary MAE background, describing the
GSD-based positional encoding, and then explaining the
Laplacian-pyramid decoder.
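As a quick intuition for the first contribution, the following is a minimal sketch (not the authors' reference implementation) of a GSD-scaled sinusoidal encoding along one image axis. The function name gsd_positional_encoding, the ref_gsd parameter, and the exact form of the scaling (multiplying the patch position by the ratio of the image GSD to a reference GSD) are illustrative assumptions here rather than the definition given in this paper.

```python
# Illustrative sketch of a GSD-scaled sinusoidal positional encoding along one
# image axis. The function name, the ref_gsd parameter, and the exact scaling
# (pos * gsd / ref_gsd) are assumptions for illustration, not the paper's spec.
import numpy as np

def gsd_positional_encoding(num_patches: int, dim: int, gsd: float, ref_gsd: float) -> np.ndarray:
    """Return a (num_patches, dim) sin/cos encoding with positions scaled by gsd / ref_gsd."""
    pos = np.arange(num_patches)[:, None] * (gsd / ref_gsd)  # position in reference-GSD units
    i = np.arange(dim // 2)[None, :]                          # feature index
    angles = pos / (10000 ** (2 * i / dim))
    enc = np.empty((num_patches, dim))
    enc[:, 0::2] = np.sin(angles)                             # even feature indices (cf. Eq. (1))
    enc[:, 1::2] = np.cos(angles)                             # odd feature indices (cf. Eq. (2))
    return enc

# Two images of the same ground extent: 14 patches at 1 m/px vs. 7 patches at 2 m/px.
fine   = gsd_positional_encoding(num_patches=14, dim=64, gsd=1.0, ref_gsd=1.0)
coarse = gsd_positional_encoding(num_patches=7,  dim=64, gsd=2.0, ref_gsd=1.0)
print(np.allclose(coarse, fine[::2]))  # True: patches over the same ground get the same encoding
```

Under this assumed scaling, patches covering the same ground location receive the same encoding even when the images differ in pixel resolution, which is the property the GSD positional encoding is intended to provide.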
Setup Let $I \in \mathbb{R}^{H \times W \times C}$ represent an input image of height $H$, width $W$, and $C$ channels. The MAE patchifies $I$ into a sequence $S$ of independent patches of height and width $P$ pixels, where each of the $N_p$ patches $s \in S$ has dimension $s \in \mathbb{R}^{P^2 C}$. A fraction, $m$, of the patches are then removed and the remaining patches are passed through a projection function (e.g., a linear layer) to project the patches $S$ into $D$ dimensions, $f_E: \mathbb{R}^{P^2 C} \to \mathbb{R}^{D}$, to obtain embedded patches $S_E = f_E(S)$. An $\mathbb{R}^2$ positional encoding vector is then added to the embedded patches with

$v_x(pos, 2i) = \sin\left(\frac{pos}{10000^{2i/D}}\right)$  (1)

$v_y(pos, 2i+1) = \cos\left(\frac{pos}{10000^{2i/D}}\right)$  (2)
where $pos$ is the position of the patch along the given axis and $i$ is the feature index (visualized in Figure 3), exactly as introduced in [54]. These positional encodings are then concatenated and added to the embedded patches, which are then fed into a ViT encoder. After the encoder, the removed patches are placed back into their original locations in the sequence of patches, where a learned mask