Preprints
https://doi.org/10.5194/egusphere-2023-1308
05 Jul 2023

kNNDM: k-fold Nearest Neighbour Distance Matching Cross-Validation for map accuracy estimation

Jan Linnenbrink, Carles Milà, Marvin Ludwig, and Hanna Meyer

Abstract. Random and spatial Cross-Validation (CV) methods are commonly used to evaluate machine-learning-based spatial prediction models, and the obtained performance values are often interpreted as map accuracy estimates. However, the appropriateness of such approaches is currently the subject of controversy. For the common case where no probability sample is available for validation, in Milà et al. (2022) we proposed the Nearest Neighbour Distance Matching (NNDM) Leave-One-Out (LOO) CV method. This method produces a distribution of geographical Nearest Neighbour Distances (NND) between test and train locations during CV that matches the distribution of NND between prediction and training locations. Hence, it creates predictive conditions during CV that are comparable to those encountered when predicting a defined area. Although NNDM LOO CV produced largely reliable map accuracy estimates in our analysis, as a LOO-based method it cannot feasibly be applied to the large datasets found in many studies.
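The two distance distributions that NNDM LOO CV compares can be illustrated with a short sketch. This is a hedged, NumPy-only illustration under made-up coordinates, not the authors' implementation; `nnd_to_set` and `nnd_within_set` are hypothetical helper names:

```python
import numpy as np

def nnd_to_set(points, reference):
    """For each point, Euclidean distance to its nearest neighbour in `reference`."""
    d = np.linalg.norm(points[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)

def nnd_within_set(points):
    """For each point, distance to the nearest *other* point of the same set
    (the test-to-train NND that arises under plain LOO CV)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

rng = np.random.default_rng(42)
train = rng.uniform(0, 100, size=(50, 2))   # sampling locations (often clustered in practice)
pred = rng.uniform(0, 100, size=(500, 2))   # prediction locations (often a regular grid)

g_j = nnd_to_set(pred, train)    # prediction-to-train NND: the target distribution
g_i = nnd_within_set(train)      # test-to-train NND during plain LOO CV
# NNDM LOO CV modifies the LOO folds (excluding nearby training points)
# until the distribution of g_i matches that of g_j.
```

Comparing the empirical distributions of `g_i` and `g_j` shows how far plain LOO CV is from the predictive conditions of interest.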

Here, we propose a novel k-fold CV strategy for map accuracy estimation inspired by the concepts of NNDM LOO CV: the k-fold NNDM (kNNDM) CV. The kNNDM algorithm tries to find a k-fold configuration such that the Empirical Cumulative Distribution Function (ECDF) of NND between test and train locations during CV is matched to the ECDF of NND between prediction and training locations.
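The idea of searching for a fold configuration whose CV NND ECDF matches the prediction NND ECDF can be sketched as follows. This is a simplified, NumPy-only illustration and not the authors' algorithm: the candidate configurations here are just random folds and crude coordinate-blocked folds, and the ECDF match is scored with an approximate 1-Wasserstein distance; the actual kNNDM search strategy may differ:

```python
import numpy as np

def nnd_to_set(points, reference):
    """For each point, distance to its nearest neighbour in `reference`."""
    d = np.linalg.norm(points[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)

def cv_nnd(points, folds):
    """Test-to-train NND under k-fold CV: for each point, distance to the
    nearest point assigned to a *different* fold."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d[folds[:, None] == folds[None, :]] = np.inf   # mask same-fold pairs (incl. self)
    return d.min(axis=1)

def ecdf_distance(a, b, n_q=101):
    """Approximate 1-Wasserstein distance between two empirical distributions,
    computed on a common quantile grid."""
    qs = np.linspace(0, 1, n_q)
    return np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs)))

rng = np.random.default_rng(1)
train = rng.uniform(0, 100, size=(60, 2))
pred = rng.uniform(0, 100, size=(500, 2))
target = nnd_to_set(pred, train)               # ECDF to be matched

k = 5
candidates = {
    "random": rng.integers(0, k, size=len(train)),
    # crude spatial blocking: k folds of contiguous points along the x-axis
    "x-blocked": np.argsort(np.argsort(train[:, 0])) * k // len(train),
}
scores = {name: ecdf_distance(cv_nnd(train, f), target)
          for name, f in candidates.items()}
best = min(scores, key=scores.get)             # configuration with the closest ECDF match
```

Once the best-matching configuration is found, ordinary k-fold CV on those folds yields the map accuracy estimate, at a fraction of the cost of fitting one model per observation as in LOO.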

We tested kNNDM CV in a simulation study with different sampling distributions and compared it to other CV methods, including NNDM LOO CV. We found that kNNDM CV performed similarly to NNDM LOO CV and produced reasonably reliable map accuracy estimates across sampling patterns, with strong reductions in computation time for large sample sizes. Furthermore, we found a positive linear association between the quality of the match of the two ECDFs in kNNDM and the reliability of the map accuracy estimates.

kNNDM provided the advantages of our original NNDM LOO CV strategy while bypassing its sample size limitations.


Status: final response (author comments only)

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • CC1: 'Comment on egusphere-2023-1308', Nils Tjaden, 07 Jul 2023
  • RC1: 'Comment on egusphere-2023-1308', Italo Goncalves, 23 Aug 2023
  • RC2: 'Comment on egusphere-2023-1308', Anonymous Referee #2, 23 Aug 2023
  • AC1: 'Comment on egusphere-2023-1308', Jan Linnenbrink, 19 Oct 2023

Model code and software

kNNDM: k-fold Nearest Neighbour Distance Matching Cross-Validation for map accuracy estimation Jan Linnenbrink, Carles Milà, Marvin Ludwig, and Hanna Meyer https://doi.org/10.6084/m9.figshare.23514135.v1


Viewed

Total article views: 902 (including HTML, PDF, and XML)
  • HTML: 658
  • PDF: 214
  • XML: 30
  • Total: 902
  • BibTeX: 28
  • EndNote: 17
Views and downloads (calculated since 05 Jul 2023)

Viewed (geographical distribution)

Total article views: 880 (including HTML, PDF, and XML) Thereof 880 with geography defined and 0 with unknown origin.
Latest update: 01 Mar 2024
Short summary
Estimation of map accuracy based on Cross-Validation (CV) in spatial modeling is pervasive but controversial. Here, we build upon our previous work and propose a novel, prediction-oriented k-fold CV strategy for map accuracy estimation in which the distribution of geographical distances between prediction and training points is taken into account when constructing the CV folds. Our method produces more reliable estimates than other CV methods and can be used for large datasets.