This work is distributed under the Creative Commons Attribution 4.0 License.
Performance Evaluation of Atmotube Pro sensors for Air Quality Measurements
Abstract. This study presents a performance evaluation of eight Atmotube Pro sensors using US Environmental Protection Agency (US-EPA) guidelines. The Atmotube Pro sensors were collocated side by side with a reference-grade FIDAS monitor in an outdoor setting for a 14-week period. The assessment showed that the Atmotube Pro sensors had coefficients of variation (CoV) of 23 %, 15 % and 13 % for minute, hourly and daily PM2.5 data averages, respectively. The PM2.5 data were cleaned prior to analysis to improve reproducibility between units. Six of the eight Atmotube Pro sensor units had particularly good precision. The inter-sensor variability assessment showed two sensors with low bias and one sensor with a higher bias in comparison with the sensor average. Simple univariate analysis was sufficient to obtain a good quality of fit to the FIDAS reference-grade monitor (R2 > 0.7) at hourly averages, although poorer performance was observed at a higher time resolution of 15-minute-averaged PM2.5 data (R2: 0.43–0.54). The root mean square error (RMSE) and normalized root mean square error (NRMSE) were 4.19 µg m-3 and 2.17 %, respectively. While there was negligible influence of temperature on Atmotube Pro measured PM2.5 values, substantial positive biases (compared to the reference instrument) occurred at relative humidity (RH) values > 80 %. The Atmotube Pro sensors correlated well with the PurpleAir sensor (R2 = 0.86, RMSE = 2.85 µg m-3). In general, the Atmotube Pro sensors performed well and passed the base testing metrics stipulated by recommended guidelines for low-cost PM2.5 sensors.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2024-1685', Anonymous Referee #1, 14 Jul 2024
This paper presents a comparison of 8 Atmotubes to a FIDAS reference monitor in Leeds, UK. It is a comparison of a single sensor type at a single location that is well-monitored, thus the impact and reach of this study are a bit limited. The analysis of the performance is done adequately, but it is a bit shallow. I recommend major revisions.
Here are the two main things that I expected to see in this paper. I think these additions would substantially increase the impact and value of this paper.
- A comparison of the size distribution of what the AtmoTubes measure versus what the FIDAS measures. It is claimed that the AtmoTubes can do PM1, PM2.5, and PM10. It would be very interesting to see if that is true, or if as recent literature shows, these size distributions are more mathematical constructions. See Molina Rueda et al (https://pubs.acs.org/doi/full/10.1021/acs.estlett.3c00030). This analysis would be especially pertinent given the FIDAS ability to size particles (PM1, PM2.5, PM4, PM10, etc).
- Correction factor development. If a sensor evaluation is already being undertaken, it is not much more work to provide correction factors based on these co-locations. There are numerous references in the literature that do this. This would help provide motivation to the paper so that future AtmoTube users may be able to apply corrections to their data in certain environments.
Below are other less major comments:
Abstract: quantify “particularly good precision”
Line 60: Would be good to cite more than one group working on source apportionment with sensors, consider also citing these and potentially others: https://pubs.acs.org/doi/10.1021/acs.estlett.9b00393, https://pubs.acs.org/doi/abs/10.1021/acs.est.1c07005,
https://pubs.acs.org/doi/10.1021/acsestair.3c00024
In fact some of these came before the papers that are cited in the original manuscript.
Line 61: The paragraph about LMICs isn’t important or relevant, as this paper does colocation in the UK. Sensor colocations should be done locally according to the vast majority of the literature and best practices.
Line 80: consider citing major sensor intercomparison project publications such as: https://pubs.acs.org/doi/full/10.1021/acs.est.2c09264
Line 94: It is claimed that there are no detailed sensor evaluation studies for Atmotube, but in the next sentence an AQ-SPEC evaluation of the AtmoTube is cited. AQ-SPEC is a global authority on sensor evaluation, so I’m not sure how these two sentences can be consistent.
Line 106: “Especially in LMICs”. Again, I don’t see how this can be a motivation for this paper considering evaluating sensors in Leeds would not be scientifically valid for use in LMICs
Line 118: Purple Air should be capitalized, and the specific details of the device and how long it has been operating should be mentioned.
Line 126: Mie Theory, not MIE
Line 129: Can the Atmotube really size-resolve particles (PM1, PM2.5, PM10), though? Doubtful, unless it is using a true optical particle counter such as the Alphasense. See https://pubs.acs.org/doi/full/10.1021/acs.est.2c09264 . Related, what kind of “bare sensor” is in the Atmotube? Some candid discussion about the limitations of the device is needed. See also the major comment related to this.
Line 173: How was the filter threshold of 50% CoV determined? Is there a possibility to be filtering “real” data, e.g. the extreme events mentioned later (Guy Fawkes night, etc)
Section 3.1 Data cleaning seems more like methodology than results/discussion.
Line 251: Plantower and PurpleAir should be essentially the same thing. PurpleAir reports directly from dual plantower sensors. Also which version of the Plantower sensor? It is mentioned again in the very next sentence.
Figure 4: It would be better to see these on the same plot, with some smoothing (daily averaging). As the figure stands now it is impossible to glean much information. The accompanying statistics however, are useful.
Section 3.3: Provide some details about the colocation. Height of deployment, proximity of sensors to reference, and as mentioned previously there is no info about the PurpleAir.
Citation: https://doi.org/10.5194/egusphere-2024-1685-RC1
AC1: 'Reply on RC1', Aishah Shittu, 28 Jul 2024
Thank you for your feedback on the manuscript. For the two major additions suggested:
1. I did a performance evaluation of the other size fractions. As reported in the literature, and more recently by Molina Rueda et al. (2023), PM10 showed very poor performance; this was no different for the Atmotube Pro, whose PM10 association with the FIDAS reference was quite poor (R2 = 0.4, RMSE = 8.8 µg/m3), while PM1 performed very well (R2 > 0.9, RMSE < 2 µg/m3). That is why the focus of the paper was on PM2.5.
2. I explored the correction model methods of Barkjohn et al., 2021 (https://doi.org/10.5194/amt-14-4617-2021), Zimmerman, 2021 (https://doi.org/10.1016/j.jaerosci.2021.105872) and Hong et al., 2021 (https://doi.org/10.1016/j.jaerosci.2021.105829) to improve the PM2.5 data using different models. The model which involved the addition of relative humidity values recorded by the sensors proved positive: there was an improvement in the R2 value for all 8 Atmotube sensors (R2 ranged from 0.80 to 0.89), a reduction in the average error bias to < 3.84 µg/m3 for the PM2.5 sensor average, and a correction model equation was derived for the average of all 8 sensors.
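An RH-augmented linear correction of the kind described above (following Barkjohn et al., 2021) can be sketched with ordinary least squares. The data below are synthetic and the fitted coefficients are illustrative only; they are not the correction equation derived in this reply:

```python
import numpy as np

# Synthetic stand-ins for matched hourly data: reference PM2.5, RH,
# and a sensor signal with an RH-dependent positive bias plus noise
rng = np.random.default_rng(1)
rh = rng.uniform(30, 95, 500)                       # relative humidity, %
ref = rng.gamma(shape=4.0, scale=2.0, size=500)     # reference PM2.5, ug/m3
sensor = ref * (1 + 0.01 * (rh - 50)) + rng.normal(0, 1, 500)

# Multivariate least squares: ref ~ b0 + b1*sensor + b2*RH
X = np.column_stack([np.ones_like(sensor), sensor, rh])
b, *_ = np.linalg.lstsq(X, ref, rcond=None)
corrected = X @ b

rmse_raw = np.sqrt(np.mean((sensor - ref) ** 2))
rmse_corr = np.sqrt(np.mean((corrected - ref) ** 2))
```

Because the regression includes the uncorrected sensor signal itself as a predictor, the corrected RMSE on the training data can never exceed the raw RMSE; a held-out test period is needed to confirm a genuine improvement.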
The collocation of the Atmotube Pro sensors was done side by side with the PurpleAir and FIDAS reference monitor in an outdoor setting at a height of about 3.5 metres.
I agree that the data cleaning should be part of the methodology rather than the results section. The intention was not to filter out real data; rather, the data cleaning was intended to improve inter-sensor variability, and the filtered data still captured the high-PM episodes. Atmotube sensors log minute-wise data, and in the raw data some sensors recorded high PM2.5 values while others recorded low values for the same time step. For all 8 sensors, the coefficient of variation (CoV) was determined for each time step (each minute, i.e. each row); it ranged from 0 to 247.7 %, with lower CoV indicating better inter-sensor agreement. Rows of data with CoV above the 50 % threshold were removed (data loss was < 4 %). This threshold was chosen because the data loss was negligible while the inter-sensor variability improved.
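The per-timestep CoV filter described above can be sketched as follows; the DataFrame layout and column names are hypothetical, assuming the minute-wise PM2.5 readings of the 8 units are aligned by timestamp (synthetic values stand in for the real measurements):

```python
import numpy as np
import pandas as pd

# Hypothetical minute-wise PM2.5 readings: rows = timestamps, columns = the 8 units
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.gamma(shape=4.0, scale=2.0, size=(1000, 8)),
                  columns=[f"unit_{i}" for i in range(1, 9)])

# Coefficient of variation across units at each time step, in %
cov = df.std(axis=1, ddof=1) / df.mean(axis=1) * 100

# Drop rows where the inter-sensor CoV exceeds the 50 % threshold
cleaned = df[cov <= 50]
loss_pct = 100 * (1 - len(cleaned) / len(df))
```

Filtering on cross-sensor dispersion removes timestamps where the units disagree with each other, not timestamps with high absolute PM, so pollution episodes that all units capture consistently survive the filter.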
Citation: https://doi.org/10.5194/egusphere-2024-1685-AC1
- AC2: 'Reply on RC1', Aishah Shittu, 21 Oct 2024
RC2: 'Comment on egusphere-2024-1685', Anonymous Referee #2, 07 Sep 2024
The focus of the study is the assessment of 8 Atmotube Pro air quality units to ascertain whether they are fit for purpose at different time resolutions. The study collocated these units with a reference-grade instrument in Leeds, UK, for a 14-week period. The detailed analysis assessing the units' performance was well presented, which is commendable. However, the paper needs revision.
I expected the following in the paper:
The motivation for testing the units' data at different time resolutions. Most uncalibrated particulate matter sensor data contain noise below 1-hour resolution. I think testing the level of noise in low-cost sensor data is appropriate across different brands of sensors, e.g. Plantower sensors against Alphasense, Sensirion, etc.
Description of quality control measures that were taken before the colocation. For example, should the Atmotube units be assembled and charged before deployment, etc. I think adding such information to the study will educate the general public.
A description of the Leeds city centre should be presented to understand the environmental features and sources of pollutants impacting particulate matter measurement by the Atmotube Pro units. Also, a description of the study period (was it during winter, fall or summer?) and some literature review of how meteorological conditions affect low-cost PM sensors.
The comments below can be considered minor:
Line 60: LMIC countries need to be removed since the sensors were colocated in Leeds.
Line 80: Is calibration part of the study?
Line 85: Here, adding some literature concerning work done in the UK on sensor performance evaluation will be good.
Line 95: Here, you can throw more light on the AQ-SPEC standard operating procedure (SOP), which can be found on their website: https://www.aqmd.gov/aq-spec/. Also, Polidori et al., 2017 and Feenstra et al., 2019 highlighted the AQ-SPEC SOP.
Line 105: Same here too; LMIC needs to be removed.
Line 120: Here, are you referring to the sensor itself (e.g, Plantower - PMS or Alphasense sensors) or the unit itself?
Line 125: Here, you can throw more light on the type of PM sensor in the Atmotube unit.
Line 155: Section 2.2 needs more information. For example, the method that was used to calculate data completeness. Did the units achieve the data completeness threshold before the analysis?
Line 270: The statement here needs to be rewritten because adding ''it is possible that some sensors were not calibrated as precisely as others'' sounds a bit confusing.
Line 325: Here, does it mean that other studies have focused on calibration of the Atmotube Pro units? You need to clarify this.
Line 390: Here, citation is needed for the statement: ''Data were collected from a local weather station rather than from the Atmotube Pros because the RH and temperature sensors in the Atmotube Pro sensors can be influenced by sensor heating when connected to power''. Many studies have utilized the internal temperature and relative humidity to calibrate the particulate matter data from the units.
Citation: https://doi.org/10.5194/egusphere-2024-1685-RC2
- AC3: 'Reply on RC2', Aishah Shittu, 21 Oct 2024
Data sets
Sensor data with reference Aishah Shittu, Kirsty Pringle, Steve Arnold, Richard Pope, Ailish Graham, Carly Reddington, Richard Rigby, and James McQuaid https://zenodo.org/records/11059054