Preprints
https://doi.org/10.5194/egusphere-2023-658
15 May 2023
Status: this preprint is open for discussion.

A Bayesian model for quantifying errors in citizen science data: Application to rainfall observations from Nepal

Jessica A. Eisma, Gerrit Schoups, Jeffrey C. Davids, and Nick van de Giesen

Abstract. High-quality citizen science data can be instrumental in advancing science toward new discoveries and a deeper understanding of under-observed phenomena. However, the error structure of citizen scientist (CS) data must be well-defined. Within a citizen science program, the errors in submitted observations vary, and their occurrence may depend on CS-specific characteristics. This study develops a graphical Bayesian inference model of error types in CS data. The model assumes that (1) each CS observation is subject to a specific error type, each with its own bias and noise, and (2) an observation's error type depends on the error community of the CS, which in turn relates to characteristics of the CS submitting the observation. Given a set of CS observations and corresponding ground-truth values, the model can be calibrated for a specific application, yielding (i) the number of error types and error communities, (ii) the bias and noise of each error type, (iii) the error distribution of each error community, and (iv) the error community to which each CS belongs. Applied to CS rainfall observations from Nepal, the model identifies five error types and sorts CSs into four model-inferred communities. In the case study, 73 % of CSs submitted data with errors in fewer than 5 % of their observations. The remaining CSs submitted data with unit, meniscus, unknown, and outlier errors. A CS's assigned community, coupled with model-inferred error probabilities, can identify observations that require verification. With such a system, the onus of validating CS data is partially transferred from human effort to machine-learned algorithms.
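To make the generative structure described in the abstract concrete, the following is a minimal simulation sketch, not the authors' calibrated model: the numbers of error types and communities, the bias and noise values, the community membership probabilities, and the residual threshold used for flagging are all hypothetical choices made for illustration only.

```python
# Minimal generative sketch of the hierarchical error structure described in
# the abstract: each citizen scientist (CS) belongs to an error community,
# each observation receives an error type drawn from that community's error
# distribution, and the observation equals the true value plus the type's
# bias and Gaussian noise. All parameter values below are illustrative
# assumptions, not the paper's calibrated results.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical error types: bias (mm) and noise standard deviation (mm).
type_bias  = np.array([0.0, -0.5, 5.0])    # e.g. accurate, meniscus-like, gross error
type_noise = np.array([0.1,  0.3, 10.0])
n_types = len(type_bias)

# Hypothetical error communities: each row is that community's probability
# distribution over error types.
community_type_probs = np.array([
    [0.95, 0.04, 0.01],   # mostly accurate observers
    [0.70, 0.25, 0.05],   # prone to meniscus-style misreadings
])
n_communities = community_type_probs.shape[0]

def simulate(n_cs=50, n_obs_per_cs=20):
    """Simulate CS rainfall observations from the hierarchical model."""
    # Assign each CS to a community (uniform prior here, for illustration).
    community = rng.integers(0, n_communities, size=n_cs)

    records = []
    for cs in range(n_cs):
        probs = community_type_probs[community[cs]]
        for _ in range(n_obs_per_cs):
            true_rain = rng.gamma(shape=2.0, scale=5.0)   # stand-in "ground truth" (mm)
            etype = rng.choice(n_types, p=probs)          # per-observation error type
            obs = true_rain + type_bias[etype] + rng.normal(0.0, type_noise[etype])
            records.append((cs, community[cs], etype, true_rain, obs))
    return records

if __name__ == "__main__":
    data = simulate()
    # Flag observations whose residual is implausible under the "accurate" type,
    # loosely analogous to using model-inferred error probabilities to decide
    # which observations require verification.
    flagged = [r for r in data if abs(r[4] - r[3]) > 3 * type_noise[0]]
    print(f"{len(flagged)} of {len(data)} simulated observations flagged for review")
```

Calibration in the paper works in the reverse direction: given observed and ground-truth values, the Bayesian model infers the number of error types and communities, their biases and noise levels, and each CS's community membership, rather than assuming them as this sketch does.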

Status: open (until 10 Jul 2023)

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • RC1: 'Comment on egusphere-2023-658', Jonathan Paul, 22 May 2023

Viewed

Total article views: 171 (including HTML, PDF, and XML)
  • HTML: 129
  • PDF: 37
  • XML: 5
  • Total: 171
  • BibTeX: 2
  • EndNote: 1
Views and downloads (calculated since 15 May 2023)

Viewed (geographical distribution)

Total article views: 182 (including HTML, PDF, and XML). Of these, 182 have a defined geographic origin and 0 are of unknown origin.
Latest update: 04 Jun 2023
Short summary
Citizen scientists often submit high-quality data, but a robust method for assessing data quality is needed. This study develops a semi-automated program that characterizes the mistakes made by citizen scientists, groups them into communities with similar mistake tendencies, and flags potentially erroneous data for further review. This work may help citizen science programs assess the quality of their data and can inform training practices.