token represents the masked patches that were not encoded. Another positional encoding vector is added to all patches and a sequence of transformer blocks decodes these patches to form the original input image, which is used as the learning target.
Input Scale-MAE performs a super resolution reconstruction, where the input image I is downsampled from a higher resolution image I_hr at the ground truth GSD. Instead of targeting the input image, Scale-MAE targets high frequency and low frequency components of I_hr, which is common in Laplacian pyramid super resolution models [64], where the high frequency component is at the same resolution as the ground truth image I_hr and the low frequency component is at the same resolution as the input image I, as shown in Figure 2. Following many works in super resolution [64], the low frequency target image is obtained by interpolating I_hr down to a much lower resolution r_low and then interpolating back up to the same resolution as the input image I. The high frequency target image is obtained by downsampling I_hr to another lower resolution r_high-low, then upsampling back to the same resolution as the ground truth image I_hr and subtracting this image: I_hf = I_hr − I_high-low. The supplementary material provides more information on the upsampling/downsampling methodology.
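As a concrete illustration, the sketch below builds the two reconstruction targets from I_hr. The function name, the bicubic interpolation mode, and the resolution arguments are assumptions made here for illustration; the exact resampling methodology is specified in the supplementary material.

```python
import torch
import torch.nn.functional as F

def make_laplacian_targets(i_hr, input_size, r_low, r_high_low):
    """Build the low/high frequency targets from the ground truth image I_hr.

    i_hr:        ground truth image at the ground truth GSD, shape (B, C, H, W)
    input_size:  (h, w) spatial size of the model input image I
    r_low:       (h, w) of the much lower resolution used for the low frequency target
    r_high_low:  (h, w) of the lower resolution used for the high frequency target
    """
    def resize(x, size):
        return F.interpolate(x, size=size, mode="bicubic", align_corners=False)

    # Low frequency target: I_hr -> r_low -> back up to the resolution of the input I.
    i_lf = resize(resize(i_hr, r_low), input_size)

    # High frequency target: subtract a blurred copy of I_hr that was taken
    # down to r_high_low and back up to the resolution of I_hr.
    i_high_low = resize(resize(i_hr, r_high_low), i_hr.shape[-2:])
    i_hf = i_hr - i_high_low

    return i_lf, i_hf
```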
The key components for Scale-MAE are described next.
GSD Positional Encoding Images from scale-dependent domains have a metric which defines the absolute scale for the image. This metric has different names across domains and is referred to as the Ground Sample Distance (GSD) in remote sensing. The GSD is critical to understanding, conceptually, the kinds of features that will be available in an image. An image with finer GSD (lower number) will have higher frequency details than an image with coarser GSD (higher number). Models are generally unaware of absolute scale when learning over a set of data. Specifically, even if a model implicitly learns to handle the varying resolutions introduced by input-space augmentations, it does not explicitly condition on the GSDs encountered in unseen data.
We extend the positional encoding from Equation (2) to include GSD by scaling the positional encoding relative to the land area covered in an image, as depicted in Figure 3 and mathematically:

v_{gsd,x}(pos, 2i) = \sin\left(\frac{g}{G} \cdot \frac{pos}{10000^{2i/D}}\right)   (3)

v_{gsd,y}(pos, 2i+1) = \cos\left(\frac{g}{G} \cdot \frac{pos}{10000^{2i/D}}\right)   (4)
Figure 3. Ground Sample Distance Positional Encoding (GSDPE). (Left) Input images at the same pixel resolution but different GSDs are shown. The image on the bottom is a subset of the image on the top. (Center) This overlap in location, albeit at a different resolution, is reflected in the GSDPE. The finer image with smaller spatial extent is represented by a corresponding subsection of the overall sine wave on the bottom. (Right) A standard positional encoding is strictly dependent on the image resolution and uses the same embedding for both. The colors behind the sine waves show the intensity and quantization of the encoding.
where g is the GSD of the image and G is a reference GSD, nominally set to 1 m. Intuitively, an object imaged at a finer resolution is represented by more pixels; when the same object is imaged at a coarser resolution, it must be represented by fewer pixels. In Equation (4), we interpolate the positional encoding by a factor of G/g to account for the ordering of the coarser set of pixels. This simple idea underpins the GSD Positional Encoding, visualized in Figure 3.
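To make Equations (3) and (4) concrete, here is a minimal NumPy sketch of the encoding for a single spatial axis. The function and argument names are illustrative, and interleaving the sine and cosine terms into even and odd embedding dimensions follows the standard transformer convention; it is a sketch, not the reference implementation.

```python
import numpy as np

def gsd_positional_encoding(num_pos, dim, gsd, ref_gsd=1.0):
    """GSD positional encoding following Equations (3) and (4).

    num_pos:  number of patch positions along one axis
    dim:      embedding dimension D (assumed even)
    gsd:      ground sample distance g of the image, in meters
    ref_gsd:  reference GSD G, nominally 1 m
    """
    pos = np.arange(num_pos)[:, None]        # (num_pos, 1)
    i = np.arange(dim // 2)[None, :]         # (1, dim/2)
    angle = (gsd / ref_gsd) * pos / (10000.0 ** (2 * i / dim))

    enc = np.zeros((num_pos, dim))
    enc[:, 0::2] = np.sin(angle)   # Equation (3): even dimensions
    enc[:, 1::2] = np.cos(angle)   # Equation (4): odd dimensions
    return enc

# A finer-GSD image (g = 0.3 m) sweeps a shorter span of the reference
# encoding than a coarser one (g = 3.0 m) over the same number of patches.
fine = gsd_positional_encoding(196, 768, gsd=0.3)
coarse = gsd_positional_encoding(196, 768, gsd=3.0)
```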
Scale-MAE decoder The standard MAE learns representations by tasking a network with reconstructing an image after masking out most of its pixels. While the standard MAE decoder reconstructs the input image at the same scale as its input, the objective of Scale-MAE is to learn multiscale representations. We draw on works from progressive super-resolution such as [56], which learn a high resolution, high frequency image and a lower resolution, low frequency image that, when combined, yield the input image at a higher resolution.
Scale-MAE introduces a novel decoder which decodes to multiple scales with a progressive Laplacian decoder architecture, replacing the traditional MAE "decoder", which is really a Transformer encoder. This architecture consists of three stages: decoding, upsampling, and reconstruction, which are shown in Figure 2 and detailed below.
Decoding follows the standard MAE decoder: after the encoder, the removed m patches are placed back into their original locations in the sequence of patches, where a learned mask token represents the masked patches that were not encoded; a positional encoding is added, and a series of transformer layers then decodes all patches. In contrast to the standard MAE decoder, the Scale-MAE decoder uses fewer transformer layers (e.g. 3 layers instead of 8), which reduces the parameter complexity as quantified in Section 5. The output of these layers is then fed into the upsampling stage.
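As an illustration of this decoding stage, the PyTorch sketch below re-inserts mask tokens, restores the original patch order, adds a positional encoding, and runs a small transformer stack. The class name, dimensions, and use of nn.TransformerEncoder are assumptions for this sketch; the actual Scale-MAE implementation may differ in details such as input projection and normalization.

```python
import torch
import torch.nn as nn

class ScaleMAEDecodingStage(nn.Module):
    """Re-insert mask tokens, add a positional encoding, and decode with a
    small transformer stack (e.g. 3 layers instead of 8)."""

    def __init__(self, dim=512, depth=3, num_heads=8):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, num_heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, visible_tokens, ids_restore, pos_embed):
        # visible_tokens: (B, N_vis, dim) encoder output for unmasked patches
        # ids_restore:    (B, N) long tensor restoring the original patch order
        # pos_embed:      (1, N, dim), e.g. the GSD positional encoding
        B, n_vis, dim = visible_tokens.shape
        n = ids_restore.shape[1]
        mask_tokens = self.mask_token.expand(B, n - n_vis, dim)
        x = torch.cat([visible_tokens, mask_tokens], dim=1)
        # Unshuffle so every token sits at its original patch location.
        x = torch.gather(x, 1, ids_restore.unsqueeze(-1).expand(-1, -1, dim))
        x = x + pos_embed
        return self.blocks(x)  # fed into the upsampling stage
```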
Upsampling The latent feature maps from the decoding stage are progressively upsampled to 2x and 4x resolution using deconvolution blocks, where the first deconvolution