3 STRUCTURES AND PATTERNS IN SPATIAL AND REMOTELY SENSED DATA
Geospatial and remotely sensed data exhibit distinct structures. For example, the chosen extent and scale of a spatial prediction unit ($\ell$ in §2) has important implications for the design, use, and evaluation of geospatial ML models, as evidenced by the phenomena of the “modifiable areal unit problem” and the “ecological fallacy” (Haining, 2009; Nikparvar & Thill, 2021; Yuan & McKee, 2022). Here, I focus on two key factors of geospatial data that affect the validity of geospatial ML evaluation methods.
3.1 SPATIAL STRUCTURES: (AUTO)CORRELATION AND COVARIATE SHIFT
One key phenomenon exhibited by many geospatial data is that values of a variable (e.g. tree canopy height) are often correlated across locations. Formally, for a random process $Z$, the spatial autocorrelation function is defined as $R_{ZZ}(\ell_i, \ell_j) = \mathbb{E}[Z(\ell_i) Z(\ell_j)] / (\sigma_i \sigma_j)$, where $\sigma_i, \sigma_j$ are the standard deviations associated with $Z(\ell_i), Z(\ell_j)$. For geospatial variables, we might expect $R_{ZZ}(\ell_i, \ell_j) > 0$ when $\ell_i$ and $\ell_j$ are close together, namely that values of $Z$ at nearby points tend to be closer in value. The degree of spatial autocorrelation in data can be assessed with statistics such as Moran’s $I$ and Geary’s $C$, and with semi-variogram analyses (see, e.g., Gaetan & Guyon, 2010).
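To make this concrete, the sketch below computes Moran’s $I$ for a spatially smooth and a spatially random variable; the synthetic data and the simple binary distance-based weights matrix are illustrative assumptions, not a prescribed choice.

```python
# Minimal sketch: Moran's I from a binary distance-based spatial weights matrix.
# Data and cutoff are synthetic/assumed, chosen only to illustrate the statistic.
import numpy as np

def morans_i(coords, z, cutoff=1.0):
    """Moran's I with weights w_ij = 1 if 0 < dist(i, j) <= cutoff, else 0."""
    z = np.asarray(z, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = ((d > 0) & (d <= cutoff)).astype(float)   # spatial weights matrix
    zc = z - z.mean()                             # centered values
    num = (w * np.outer(zc, zc)).sum()            # spatially weighted cross-products
    return len(z) / w.sum() * num / (zc ** 2).sum()

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(500, 2))
z_smooth = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=500)  # spatially structured
z_noise = rng.normal(size=500)                                # no spatial structure
print(morans_i(coords, z_smooth), morans_i(coords, z_noise))  # positive vs. near zero
```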
Spatial autocorrelations and correlations between predictor and label variables can be an important |
source of structure to leverage in geospatial ML models (Rolf et al., 2020; Klemmer & Neill, 2021), |
yet they also present challenges. Models can “over-rely” on spatial correlations in the data, leading |
to over-estimated accuracy despite poor spatial generalization performance. Overfitting to spatial |
relationships in training data is of particular concern when data distributions differ between
training regions and regions of use. Such covariate shifts are common in geospatial data, e.g. across |
climate zones, or spectral shifts in satellite imagery (Tuia et al., 2016; Hoffimann et al., 2021). |
The presence of spatial correlations or domain shift alone does not invalidate assessing map accuracy with
a probability sample (§2.1). However, when evaluation is limited to existing data, issues of data |
availability and representivity can amplify the challenges of geospatial model evaluation. |
3.2 AVAILABILITY, QUALITY, AND REPRESENTIVITY OF GEOSPATIAL EVALUATION DATA
Many geospatial datasets exhibit gaps in coverage or quality of data. Meyer & Pebesma (2022) evidence trends of geographic clustering around research sites primarily in a small number of countries, across three datasets used for global mapping in ecology. Oliver et al. (2021) find geographical bias in coverage of species distribution data aggregated from field observations, sensor measurements,
and citizen science efforts. Burke et al. (2021) note that the frequency at which nationally representative data on agriculture, population, and economic factors are collected varies widely across the globe. While earth observation data such as satellite imagery have comparatively much higher coverage across time and space (Burke et al., 2021), coverage of commercial products has been shown to be biased toward regions of high commercial value (Dowman & Reuter, 2017).
Filling in data gaps is a goal for which geospatial ML can be transformative (§2.2), yet these same
gaps complicate model evaluation. When training data are clustered in small regions, this can affect |
our ability to train a high-performing model. When evaluation data are clustered in small regions, |
this affects our ability to evaluate geospatial ML model performance at all. |
4 SPATIALLY-AWARE EVALUATION METHODS: A BRIEF OVERVIEW
In §3, we established that geospatial data generally exhibit spatial correlations and data gaps, even |
when target use areas $D$ are small. It is well documented that calculating accuracy indices with
non-spatial validation methods (e.g. standard k-fold cross-validation) will generally over-estimate |
performance in such settings. Spatially-aware evaluation methods can control the spatial distribution |
of training and validation set points to better simulate conditions of intended model use. |
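As a concrete (synthetic) illustration of this difference, the sketch below compares a random k-fold split with a spatially blocked split for the same model; the data, block definition, and model choice are assumptions made only for illustration.

```python
# Sketch: random k-fold vs. spatially blocked cross-validation on synthetic data
# with a spatially structured label; the blocked score is typically lower.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(1000, 2))
X = np.column_stack([coords, rng.normal(size=(1000, 3))])   # coordinates + noise features
y = np.sin(coords[:, 0]) + np.cos(coords[:, 1]) + 0.1 * rng.normal(size=1000)

# Coarse grid cells serve as spatial blocks (CV groups).
blocks = (coords[:, 0] // 2.5).astype(int) * 10 + (coords[:, 1] // 2.5).astype(int)

model = RandomForestRegressor(n_estimators=100, random_state=0)
r2_random = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0))
r2_blocked = cross_val_score(model, X, y, cv=GroupKFold(5), groups=blocks)
print(r2_random.mean(), r2_blocked.mean())
```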
4.1 SPATIAL CROSS-VALIDATION METHODS
Several spatial cross-validation methods have been proposed that reduce spatial dependencies between train set points $\ell \in S_{\text{train}}$ and evaluation set points $\ell \in S_{\text{eval}}$. Spatial cross-validation methods
typically stratify training and evaluation instances by larger geographies (Roberts et al., 2017; Valavi et al., 2018), e.g. existing boundaries, spatial blocks, or automatically generated clusters. Buffered cross-validation methods (such as spatial leave-one-out (Le Rest et al., 2014), leave-pair-out (Airola et al., 2019), and k-fold cross-validation (Pohjankukka et al., 2017)) control the minimum distance
from any training point to any evaluation point. In addition to evaluating model performance, spatial cross-validation has also been suggested as a way to improve model selection and parameter estimation in geospatial ML (Meyer et al., 2019; Schratz et al., 2019; Roberts et al., 2017).
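A minimal sketch of a buffered leave-one-out splitter is shown below; it is an illustrative implementation under assumed Euclidean coordinates, not the code of any of the cited works.

```python
# Sketch: buffered leave-one-out splits, where all training points within
# `buffer` distance of the held-out point are excluded from the training fold.
import numpy as np

def buffered_loo_splits(coords, buffer):
    """Yield (train_idx, test_idx) pairs with a distance buffer around each test point."""
    coords = np.asarray(coords, dtype=float)
    for i in range(len(coords)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        train_idx = np.where(d > buffer)[0]      # keep only points outside the buffer
        yield train_idx, np.array([i])

# The generated splits can be passed to, e.g., scikit-learn via cv=list(...).
coords = np.random.default_rng(0).uniform(0, 10, size=(200, 2))
splits = list(buffered_loo_splits(coords, buffer=2.0))
print(len(splits), len(splits[0][0]))  # number of folds, training size of the first fold
```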
While separating $\ell \in S_{\text{train}}$ from $\ell \in S_{\text{eval}}$ can reduce the amount of correlation between training and
evaluation data, a spatial split also induces a higher degree of spatial extrapolation to the learning |
setup and potentially reduces variation in the evaluation set labels. As a result, it is possible for |
spatial validation methods to systematically under-report performance, especially in interpolation |
regimes (Roberts et al., 2017; Wadoux et al., 2021). In a different flavor from the evaluation methods |
above, Milà et al. (2022) propose to match the distribution of nearest neighbor distances between
train and evaluation set points to the corresponding distances between train set and target use area. |
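A diagnostic in this spirit can be sketched as below; it only compares the two nearest-neighbor distance distributions on assumed synthetic locations and is not the authors' implementation.

```python
# Sketch: compare nearest-neighbor distances from evaluation points to the training
# set against distances from target-area prediction locations to the training set.
# Large discrepancies suggest the validation setup does not mimic the intended use.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train = rng.uniform(0, 10, size=(300, 2))     # assumed training locations
evalpts = rng.uniform(0, 10, size=(100, 2))   # assumed held-out evaluation locations
target = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                              np.linspace(0, 10, 50)), -1).reshape(-1, 2)  # prediction grid

tree = cKDTree(train)
d_eval, _ = tree.query(evalpts)    # evaluation-to-nearest-training distances
d_target, _ = tree.query(target)   # prediction-location-to-nearest-training distances
print(np.quantile(d_eval, [0.25, 0.5, 0.75]))
print(np.quantile(d_target, [0.25, 0.5, 0.75]))
```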
4.2 OTHER SPATIALLY-AWARE VALIDATION AND EVALUATION METHODS
When the intended use of a geospatial model is to generate predictions outside the training distribution, it is critical to test the model’s ability to generalize across different conditions. For example, studies have varied the amount of spatial extrapolation required by changing parameters of the spatial validation setups in §4.1, e.g. with buffered leave-one-out (Ploton et al., 2020; Brenning, 2022) and checkerboard designs (Roberts et al., 2017; Rolf et al., 2021). Jean et al. (2016) assess extrapolation ability across pairs of countries by iteratively training in one region and evaluating performance
in another. Rolf et al. (2021) find that the distances at which a geospatial model has extrapolation |
power can differ substantially depending on the properties of the prediction variable. |
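For illustration, a checkerboard-style split can be sketched in a few lines; the grid cell size and synthetic coordinates below are assumptions chosen only to show the mechanics.

```python
# Sketch: assign points to alternating "checkerboard" grid cells so that the
# held-out cells are interleaved with training cells at a chosen spatial scale.
import numpy as np

def checkerboard_fold(coords, cell_size):
    """Return a boolean mask: True where a point falls in a 'black' cell."""
    ix = np.floor(coords[:, 0] / cell_size).astype(int)
    iy = np.floor(coords[:, 1] / cell_size).astype(int)
    return (ix + iy) % 2 == 0

coords = np.random.default_rng(0).uniform(0, 10, size=(1000, 2))
mask = checkerboard_fold(coords, cell_size=2.5)
train_idx, test_idx = np.where(mask)[0], np.where(~mask)[0]
print(len(train_idx), len(test_idx))
```

Varying `cell_size` changes the typical distance from test points to the nearest training point, and thus the degree of spatial extrapolation being evaluated.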
It is always critical to put the reported performance of geospatial ML models in context. Visualizing |
the spatial distributions of predictions and error residuals can help expose overreliance on spatially |
correlated predictors (Meyer et al., 2019) and sub-regions with low local model performance. Comparing performance to that of a baseline model built entirely on spatial predictors can contextualize
the value-add of a new geospatial model (Fourcade et al., 2018; Rolf et al., 2021). |
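A minimal sketch of such a comparison is given below; the synthetic data, feature construction, and model choice are illustrative assumptions.

```python
# Sketch: compare a model trained on (non-spatial) features against a baseline
# trained on coordinates alone, to make the spatial-only skill explicit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(1000, 2))
signal = np.sin(coords[:, 0]) + np.cos(coords[:, 1])                # spatially structured signal
features = np.column_stack([signal + 0.3 * rng.normal(size=1000),   # noisy "observed" feature
                            rng.normal(size=(1000, 3))])            # irrelevant features
y = signal + 0.1 * rng.normal(size=1000)

Xc_tr, Xc_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(coords, features, y, random_state=0)

baseline = RandomForestRegressor(random_state=0).fit(Xc_tr, y_tr)   # coordinates only
model = RandomForestRegressor(random_state=0).fit(Xf_tr, y_tr)      # features only
print("spatial-only baseline R2:", r2_score(y_te, baseline.predict(Xc_te)))
print("feature-based model  R2:", r2_score(y_te, model.predict(Xf_te)))
```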
5 TAKING STOCK: CONSIDERATIONS AND OPPORTUNITIES
Comprehensive reporting of performance is critical for geospatial ML methods, especially as stated |
gains in research progress make their way to maps and decisions of real-world consequence. Evaluating performance of geospatial models is especially challenging in the face of spatial correlations
and limited availability or representivity of data. This means non-spatial data splits are generally |
unsuitable for geospatial model evaluation with most existing datasets. Spatially-aware validation |
methods are an important indicator of model performance including spatial generalization; however, |
they generally do not provide valid statistical estimates of prediction map accuracy. This brings us to three key opportunities for improving the landscape of geospatial ML evaluation:
Opportunity 1: Invest in evaluation data to measure map accuracy and overall performance of |
geospatial models. When remote annotations are appropriate, labeling tools (e.g. Robinson et al. |
(2022)) can facilitate the creation of probability-sampled evaluation datasets. Data collection and |
aggregation efforts can focus on filling existing geospatial data gaps (Paliyam et al., 2021) or simulating real-world prediction conditions like covariate or domain shift (Koh et al., 2021).
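As a minimal illustration of the design-based logic behind a probability-sampled evaluation set, the sketch below uses stand-in prediction and reference maps (both assumptions); with equal inclusion probabilities, overall accuracy is estimated by the plain sample mean.

```python
# Sketch: equal-probability (simple random) sample of pixels from a target region,
# yielding a design-based estimate of overall map accuracy with a standard error.
import numpy as np

rng = np.random.default_rng(0)
H, W = 1000, 1000
pred_map = rng.integers(0, 2, size=(H, W))            # stand-in predicted class map
true_map = pred_map ^ (rng.random((H, W)) < 0.1)      # stand-in reference labels (10% flipped)

n = 500                                               # evaluation sample size
flat_idx = rng.choice(H * W, size=n, replace=False)   # equal-probability pixel sample
rows, cols = np.unravel_index(flat_idx, (H, W))

correct = pred_map[rows, cols] == true_map[rows, cols]
acc_hat = correct.mean()                              # unbiased estimate of overall accuracy
se = correct.std(ddof=1) / np.sqrt(n)                 # approximate standard error
print(f"estimated accuracy: {acc_hat:.3f} +/- {1.96 * se:.3f}")
```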
Opportunity 2: Invest in evaluation frameworks to precisely and transparently report performance and valid uses of a geospatial ML model (à la “model cards” (Mitchell et al., 2019)). This
includes improving spatial baselines, expanding methods for reporting uncertainty over space, and |
delineating “areas of applicability” for geospatial models, e.g. as in Meyer & Pebesma (2022). |
Opportunity 3: If the available data and evaluation frameworks are insufficient, explain the limitations of what types of performance can be evaluated. Distinguish between performance measures
that estimate a statistical parameter and those that indicate potential skill for a possible use case. |