A. Datasets
In our experiments, we used a total of ten datasets (Table 10) for the tasks of land-use/land-cover classification and semantic
segmentation. A large number of remote sensing datasets exist, and many of them fundamentally capture the same data with
minor changes in location or distribution. We therefore selected datasets with key, representative properties: (1) diversity in
the number and kinds of classes/objects represented, (2) a large spectrum of ground sample distances (GSDs) from (ideally)
known sensor configurations, and (3) pansharpened, orthorectified, and quality-controlled imagery and labels. We summarize
these properties in Table 10.
A.1. Diversity in classes
For both pretraining and downstream evaluation, it is desirable to include as much geographic and class diversity as possible.
Capturing a wide range of classes in remote sensing requires including multiple localities and environments. This property
serves as a proxy for the number of unique “features” available in the dataset.
Dataset Resolution (px) GSD (m) Number of Images Number of Classes Task Type
AiRound [43] 500 0.3 - 4800 11,753 11 C
CV-BrCT [43] 500 0.3 - 4800 24,000 9 C
EuroSAT [29] 64 10 27,000 10 C
MLRSNet [48] 256 0.1 - 10 109,161 46 C
Optimal-31 [55] 256 0.5 - 8 1,860 31 C
RESISC-45 [11] 256 0.2 - 30 31,500 45 C
UC Merced [65] 256 0.3 2,100 21 C
WHU-RS19 [14] 256 0.5 1,050 19 C
fMoW [12] Various 0.3 1,047,691 62 C
SpaceNet v1 [53] Various 0.5 6,940 2 SS
Table 10. Statistics of all datasets used in our experiments. Task types are classification (C) and semantic segmentation (SS).
A.2. Spectrum of GSDs
Scale-MAE is built to be invariant to the input absolute scale of the dataset. Many datasets are collected from a single sensor
and processed in a uniform fashion. To validate that our method works with many resolutions, we included datasets which are
collected from a variety of sensors but then processed in a uniform fashion. This excludes differences in processing as a factor
affecting our experiments and narrowly targets resolution instead.
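As a concrete illustration of the relationship between pixel resolution and GSD (a minimal sketch, not our evaluation code; the tensor values are synthetic), downsampling an image while keeping its ground footprint fixed coarsens the effective GSD proportionally:

```python
import torch
import torch.nn.functional as F

# Hypothetical example: a 256x256 RGB image at 0.3 m GSD covers
# 256 * 0.3 = 76.8 m on a side.
image = torch.rand(1, 3, 256, 256)  # (batch, channels, height, width)
gsd = 0.3  # meters per pixel

# Downsampling to 64x64 keeps the ground extent fixed, so the effective
# GSD grows by the downsampling factor: 0.3 * (256 / 64) = 1.2 m.
target_px = 64
coarse = F.interpolate(image, size=(target_px, target_px),
                       mode="bilinear", align_corners=False)
effective_gsd = gsd * image.shape[-1] / target_px
print(effective_gsd)  # 1.2
```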
A.3. Quality control
It is hard to assess the quality of a remote sensing dataset without manually verifying a majority of its instances.
We instead mandated that the images used are pansharpened (and therefore at the highest resolution that can be extracted from
the sensor), orthorectified (and therefore well-aligned with the geodetic ellipsoid), and projected to the same coordinate
reference system. This eliminates large differences in sensor-to-image processing.
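As a hedged illustration of the last requirement (our actual preprocessing may differ, and the file paths are hypothetical), the snippet below checks that a set of GeoTIFFs share one coordinate reference system using rasterio:

```python
import rasterio

# Hypothetical file list; in practice these would be the dataset's GeoTIFFs.
paths = ["tiles/scene_001.tif", "tiles/scene_002.tif", "tiles/scene_003.tif"]

crs_set = set()
for path in paths:
    with rasterio.open(path) as src:
        crs_set.add(src.crs.to_string())  # e.g. "EPSG:4326"

if len(crs_set) > 1:
    raise ValueError(f"Images span multiple coordinate reference systems: {crs_set}")
```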
B. Laplacian and Upsampling Block Architectures
Figure 7 illustrates the Laplacian Block and Upsampling Block architectures described below.
B.1. Laplacian Block
Laplacian Blocks are used to reconstruct the target at a specific resolution and frequency. A Laplacian Block consists of
a chain of Feature Mapping Blocks, each of which distills information at a specific frequency, followed by one final
Reconstruction Block, which generates the final output. A Feature Mapping Block consists of a 3x3 depth-wise convolution
layer with GELU activation, followed by a 1x1 convolution. A Reconstruction Block consists of a 4x4 transpose convolution
layer followed by a 3x3 depth-wise convolution layer, a 1x1 convolution layer, and a 2x2 transpose convolution layer. In our
experiments, we use two Feature Mapping Blocks per Laplacian Block.
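To make the block structure concrete, here is a minimal PyTorch sketch assembled from the description above; the channel widths, residual connection, and activation placement are assumptions based on Figure 7, not the reference implementation.

```python
import torch
import torch.nn as nn

class FeatureMappingBlock(nn.Module):
    """Distills information at one frequency: depth-wise 3x3 conv with GELU,
    then a 1x1 conv, with a residual connection (the '+x' path in Figure 7)."""
    def __init__(self, dim: int = 512):  # 512-d features are an assumption
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim),  # depth-wise
            nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size=1),
        )

    def forward(self, x):
        return x + self.block(x)

class ReconstructionBlock(nn.Module):
    """Generates the final output: 4x4 transpose conv, depth-wise 3x3 conv,
    1x1 conv, then a 2x2 transpose conv down to 3 output channels."""
    def __init__(self, dim: int = 512, mid: int = 256, out_channels: int = 3):
        super().__init__()
        self.block = nn.Sequential(
            nn.ConvTranspose2d(dim, mid, kernel_size=4, stride=2, padding=1),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, groups=mid),  # depth-wise
            nn.GELU(),
            nn.Conv2d(mid, mid, kernel_size=1),
            nn.ConvTranspose2d(mid, out_channels, kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.block(x)

class LaplacianBlock(nn.Module):
    """Two Feature Mapping Blocks followed by one Reconstruction Block."""
    def __init__(self, dim: int = 512, n_feature_blocks: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            *[FeatureMappingBlock(dim) for _ in range(n_feature_blocks)]
        )
        self.reconstruct = ReconstructionBlock(dim)

    def forward(self, x):
        return self.reconstruct(self.features(x))
```

With these assumed strides, a 14x14 feature map is upsampled 4x overall: `LaplacianBlock()(torch.rand(1, 512, 14, 14))` yields a (1, 3, 56, 56) output.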
                              k=20                          k=100                         k=5
Dataset  Res (px)  Scale-MAE SatMAE ConvMAE  Scale-MAE SatMAE ConvMAE  Scale-MAE SatMAE ConvMAE
AiRound 16 0.401 0.375 0.423 0.396 0.367 0.401 0.370 0.355 0.403
32 0.561 0.510 0.539 0.536 0.491 0.517 0.541 0.492 0.539
64 0.689 0.607 0.658 0.643 0.579 0.621 0.692 0.604 0.666
128 0.743 0.650 0.681 0.690 0.600 0.622 0.749 0.660 0.690
256 0.729 0.662 0.658 0.678 0.621 0.602 0.731 0.663 0.676
496 0.670 0.664 0.620 0.609 0.613 0.566 0.685 0.669 0.632
CV-BrCT 16 0.522 0.478 0.567 0.485 0.443 0.513 0.524 0.475 0.585
32 0.653 0.615 0.656 0.588 0.560 0.592 0.695 0.644 0.699
64 0.744 0.701 0.711 0.674 0.635 0.644 0.780 0.727 0.754
128 0.763 0.725 0.732 0.710 0.662 0.667 0.805 0.758 0.782
256 0.761 0.725 0.727 0.694 0.666 0.664 0.802 0.770 0.771
496 0.737 0.727 0.709 0.656 0.657 0.631 0.792 0.771 0.765
EuroSAT 16 0.744 0.727 0.826 0.699 0.695 0.788 0.751 0.729 0.835
32 0.901 0.876 0.898 0.869 0.854 0.863 0.912 0.871 0.909
64 0.956 0.931 0.940 0.935 0.913 0.914 0.960 0.934 0.947
MLRSNet 16 0.563 0.491 0.607 0.535 0.461 0.549 0.551 0.479 0.617
32 0.772 0.677 0.744 0.726 0.625 0.688 0.772 0.684 0.762
64 0.893 0.815 0.851 0.849 0.754 0.792 0.911 0.839 0.876
128 0.936 0.875 0.894 0.892 0.814 0.834 0.950 0.899 0.918
256 0.918 0.892 0.882 0.862 0.840 0.817 0.940 0.913 0.910
OPTIMAL-31 16 0.354 0.322 0.439 0.312 0.298 0.370 0.317 0.319 0.418
32 0.574 0.500 0.587 0.567 0.508 0.545 0.565 0.519 0.561
64 0.793 0.609 0.698 0.742 0.561 0.598 0.782 0.646 0.688
128 0.816 0.670 0.714 0.731 0.646 0.595 0.809 0.694 0.725
256 0.739 0.681 0.646 0.653 0.638 0.550 0.761 0.731 0.693
RESISC 16 0.382 0.347 0.458 0.370 0.327 0.428 0.353 0.323 0.435
32 0.628 0.527 0.601 0.597 0.505 0.568 0.609 0.508 0.592
64 0.798 0.667 0.731 0.754 0.631 0.677 0.803 0.667 0.734
128 0.864 0.748 0.798 0.819 0.699 0.743 0.882 0.762 0.817
256 0.826 0.758 0.762 0.761 0.708 0.690 0.850 0.771 0.788
UC Merced 16 0.524 0.472 0.598 0.400 0.370 0.462 0.512 0.488 0.617
32 0.767 0.670 0.683 0.605 0.535 0.593 0.828 0.682 0.726
64 0.842 0.795 0.771 0.719 0.729 0.652 0.884 0.842 0.845
128 0.858 0.788 0.750 0.662 0.738 0.655 0.884 0.847 0.838
256 0.762 0.802 0.700 0.595 0.757 0.590 0.851 0.842 0.817
WHU-RS19 16 0.545 0.445 0.576 0.400 0.380 0.562 0.525 0.490 0.631
32 0.650 0.729 0.670 0.610 0.675 0.576 0.760 0.690 0.754
64 0.850 0.805 0.833 0.770 0.730 0.680 0.920 0.840 0.837
128 0.970 0.910 0.882 0.890 0.890 0.685 0.985 0.895 0.941
256 0.960 0.940 0.892 0.880 0.925 0.709 0.975 0.945 0.931
Table 11. kNN classification results for Scale-MAE, SatMAE, and ConvMAE. Scale-MAE outperforms SatMAE and ConvMAE across a
variety of k and a variety of input resolutions (Res, reported in pixels).
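For context, here is a minimal sketch of a kNN evaluation protocol consistent with this table, assuming features come from a frozen pretrained encoder and using scikit-learn; the random stand-in features and the 512-d/10-class shapes are placeholders, not our exact pipeline:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_eval(train_feats, train_labels, test_feats, test_labels, k=20):
    """Fit a kNN classifier on frozen encoder features and report accuracy."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)

# Hypothetical usage with random stand-in features (512-d, 10 classes):
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 512))
train_labels = rng.integers(0, 10, size=1000)
test_feats = rng.normal(size=(200, 512))
test_labels = rng.integers(0, 10, size=200)
for k in (20, 100, 5):
    print(k, knn_eval(train_feats, train_labels, test_feats, test_labels, k=k))
```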
[Figure 7: Laplacian Block architecture. A 512-d input passes through N Feature Mapping Blocks (depth-wise 3x3 conv, 256; 1x1 conv, 512; residual “+x” connection), then through the Reconstruction Block (4x4 transpose conv, 256; depth-wise 3x3 conv, 128; 1x1 conv, 256; 2x2 transpose conv, 3), with GELU activations and Layer Norm.]