The Puy de Dôme ICe Nucleation Intercomparison Campaign (PICNIC): Comparison between online and offline methods in ambient air
Abstract. Only a tiny fraction of all aerosol particles nucleate ice (ice-nucleating particles, INPs), and their concentration over the temperature range relevant for mixed-phase clouds spans up to ten orders of magnitude, which poses a challenge for contemporary INP measurement techniques. INP concentrations can be measured online, with a time resolution of minutes, or offline, where aerosol particles are collected on filters over hours to days. Here we present measurements of INP concentrations in ambient air under conditions relevant to mixed-phase clouds, obtained with a total of ten INP methods over two weeks in October 2018 at the Puy de Dôme observatory in central France. INP concentrations were determined in the immersion freezing mode between about -30 °C and -5 °C. Two continuous flow diffusion chambers (CFDCs; the Colorado State University Continuous Flow Diffusion Chamber, CSU-CFDC, and the Spectrometer for Ice Nuclei, SPIN) and an expansion chamber (Portable Ice Nucleation Experiment, PINE) measured the INP concentration online, with a time resolution of several minutes, at temperatures below -20 °C. Seven offline freezing techniques determined the temperature-dependent INP concentration above about -30 °C using water suspensions of filter-collected particles sampled over 8 hours (FRankfurt Ice Nuclei Deposition FreezinG Experiment, FRIDGE; Ice Nucleation Droplet Array, INDA; Ice Nucleation Spectrometer of the Karlsruhe Institute of Technology, INSEKT; Ice Spectrometer, IS; Leipzig Ice Nucleation Array, LINA; LED-based Ice Nucleation Detection Apparatus, LINDA; Micro-Orifice Uniform Deposit Impactor–Droplet Freezing Technique, MOUDI-DFT). A special focus of this intercomparison campaign was placed on overlapping sampling periods: INP concentrations from the online instruments were compared within 10-minute windows and at the same temperature (±1 °C), while the filter collections for the offline methods were started and stopped simultaneously and the resulting freezing spectra were compared in 1 °C steps. INP concentrations measured with PINE agreed with the CSU-CFDC within a factor of two for 71 % of the data and within a factor of five for all data. SPIN consistently measured lower INP concentrations; only 35 % of its data agree with the CSU-CFDC within a factor of two, although 80 % still agree within a factor of five. This might have been caused by incomplete exposure of all aerosol particles to water-supersaturated conditions within the instrument, a feature inherent to CFDC-style instruments, demonstrating the need to account for aerosol lamina spreading when interpreting INP concentrations from online instruments.
The comparison of the offline methods, which collected aerosol particles on filters in the laboratory via a whole-air inlet, revealed that more than 45 % of the data fall within a factor of two of the results obtained with INSEKT. Measurements using different filter materials and filter holders revealed no difference in the temperature-dependent INP concentration at overlapping temperatures. However, consistently higher INP concentrations were observed for aerosol filters collected on the rooftop of the Puy de Dôme station without the use of an inlet, compared to measurements performed behind the whole-air inlet system.
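For clarity on the agreement metric quoted above, the following minimal sketch (illustrative only; the instrument names and the paired values are hypothetical, not campaign data) shows how the fraction of paired measurements falling within a factor of two and a factor of five of a reference instrument can be computed:

```python
import numpy as np

def fraction_within_factor(reference, test, factor):
    """Fraction of paired measurements where test/reference lies within [1/factor, factor]."""
    ratio = np.asarray(test, dtype=float) / np.asarray(reference, dtype=float)
    return np.mean((ratio >= 1.0 / factor) & (ratio <= factor))

# Hypothetical INP concentrations (std L-1) from paired 10-minute windows
# at the same nucleation temperature (+/- 1 °C).
cfdc = np.array([1.2, 3.5, 0.8, 10.0, 2.2, 5.1])   # reference instrument (e.g. a CFDC)
pine = np.array([1.0, 4.2, 1.5, 8.0, 2.0, 6.0])    # second online instrument

for f in (2, 5):
    print(f"within factor {f}: {100 * fraction_within_factor(cfdc, pine, f):.0f} %")
```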
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Review of “The Puy de Dôme ICe Nucleation Intercomparison Campaign (PICNIC): Comparison between online and offline methods in ambient air” by Lacher et al.', Anonymous Referee #1, 17 Jul 2023
In this paper, the authors present ambient ice-nucleating particle (INP) number concentrations obtained using online and offline measurement techniques during the PICNIC campaign at Puy de Dôme. The intercomparison between online and offline instrumentation, as well as the impact of the sampling site and sampling setups for the offline instruments, were assessed. The authors addressed the necessity of online and offline INP measurement instruments nicely. Such an instrument intercomparison is essential and of great significance for the ice nucleation and atmospheric community and requires a lot of effort. Therefore, the paper fits the scope of ACP.
However, the quality of the paper should be improved before acceptance for publication in ACP. There are several typos and inconsistencies in the usage of abbreviations in the manuscript. The reviewer tried to go through the manuscript line by line to suggest edits, but the authors hold the responsibility for a thorough typo, format, and grammar check before re-submission.
Major comments
- Residence time is a critical factor that can affect online INP concentration measurement. The discussion of different residence times for different online instruments requires elaboration.
- The logical flow of the introduction section needs to be more organized.
- The presentation quality, for example, marker colors in figures, should be reconsidered.
- The mixed-use of units and abbreviations is problematic.
Specific comments
L4: ice nucleating particles -> ice-nucleating particles.
L5: It would be more informative for the readers if the authors could provide numerical ranges of the temperature range and the orders of magnitude here.
L10: What are the wall temperatures of the two CFDCs in immersion freezing mode at -5 °C?
L13: “temperatures below -20 °C” contradicts the statement in L10.
L16: Missing a comma before INDA.
L22: It’s better to clarify these are temperature spectra.
L23: Not all offline instruments were operated at 1 °C step according to Table 1.
L25-27: The explanation of the discrepancy between SPIN and CSU-CFDC is confusing. CSU-CFDC is also a CFDC-type instrument, isn’t it?
L30-31: The description of the offline sampling technique adds little to the results. Either combine it with the description between L13-15 or remove it.
L33-34: Did the authors collect filter samples simultaneously on the rooftop and in the laboratory? Fig. 1 shows that IS and LINDA analyze filters from the rooftop, and the other five offline instruments analyze filters collected in the laboratory.
L36: Do the authors mean primary ice formation?
L53-55: Could shorter sampling time for offline measurement techniques result in a smaller sampling volume, and therefore INP concentration below the detection limit?
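For context, a generic, hedged relation (not necessarily the manuscript's formulation): for a droplet-freezing assay with N droplets of volume V_drop prepared from a washing-water volume V_wash and a sampled air volume V_air, the smallest resolvable INP concentration corresponds roughly to a single frozen droplet,

\[ c_{\mathrm{INP,min}}(T) \approx -\ln\!\left(1-\frac{1}{N}\right)\frac{V_{\mathrm{wash}}}{V_{\mathrm{drop}}\,V_{\mathrm{air}}}, \]

so the detection limit scales as 1/V_air, and a shorter sampling time (smaller sampled volume) directly raises the lowest detectable INP concentration.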
L58: By saying organic INPs, do the authors mean biological INPs here? It would be beneficial for the readers if the authors could add a few lines here to briefly discuss the sampling size limit of most online instruments, which normally excludes pollens and dust particles above 10–20 μm that are ice-active at warmer temperatures and could be captured by offline sampling.
L70-72: Please reword.
L74-87: The logic can be improved here. The statement here is a general comparison between the size ranges of online and offline sampling techniques, rather than “the size range of aerosol particles that are INPs”. These lines should be combined with the preceding paragraph; please refer to the previous comment on L58. Following the impact of aerosol type and nucleation temperature on the instruments’ comparability at the end of the last paragraph, a review of previous intercomparison results showing the impact of the aerosol size range on the INP concentrations measured by different instruments is missing here, which would be helpful for the readers.
L88: a -> an.
L102-103: Is this statement relevant to instrument comparison?
L104-107: Consider replacing “Moreover”. Is this paragraph relevant to instrument comparison? If yes, please organize the logic.
L114: Move the definition of CSU-CFDC to L111 after the first appearance of “online instrument”.
L115: Please reword.
L130: What types of cloud form and occur at Puy de Dôme? Is it liquid clouds or mixed-phase clouds? If they are liquid clouds, how are the aerosols connected to mixed-phase clouds?
L136: was -> were.
L138-139: Consider removing the statement.
L150-151: Please specify the size range of SMPS. Why do the authors couple an SMPS to a CPC?
L155-156: Please elaborate on the characterization of WAI. How was the transmission efficiency computed, using number or mass concentration? What instrument was used to measure the concentration of 10 μm particles? Can this explain the consistently higher INP concentration from the filters sampled from the rooftop? Did the authors measure filters collected on the rooftop and downstream of the WAI during the same period with the same instrument and check the difference?
L157: Delete “, and”.
L180: The data points are already very limited (only 20 and 34 points according to Table 2) for a 14-day measurement period with a time resolution of 10 min. How could this be?
L181-183: What is the flow rate of PFPC in this study? What type of impactor(s) was used?
L186-192: What are the INP concentration factors for the other two online instruments? These values should be reported even though the authors decided to use 11.4 for all three instruments. Do different geometries and structures of the online instruments, as well as the impactors installed at instrument inlets affect INP concentration factors? An assessment is needed.
L206-208: Please specify the size range of the OPCs for the three online instruments. What’s the transmission efficiency of the single-jet impactors? Do the impactors modify the sampling flow or aerosol population entering CSU-CFDC compared to the other two online instruments?
L211: What’s the sample RH entering CSU-CFDC?
L240: Why do the lamina supersaturation and RH_water have different uncertainties?
L241: What’s the sample RH entering SPIN?
L242: What’s the transmission efficiency of the impactor? Will it change the sampling flow and aerosol population entering SPIN compared to the other two online instruments?
L252: Application of correction factor(s) may improve the systematic underestimation of SPIN compared to CSU-CFDC. Consider moving the clarification between L525-527 here.
L255: either side-> both sides
L258: Please keep consistent use of StdL or L in the paper, for example, L270, L274, and L293. Please check the figures and text throughout.
L262: Please include “e.g.” in the citation or complete the list of citations.
L292: welas-2500 OPC, delete “which”.
L293: Specify the LOD is specific for the two consecutive experiments in this study.
L299: , however -> However. Please check the usage throughout.
L343-345: How long did the transport take? Why are the authors so certain such exposure to heat doesn’t impact the results? What is the dominant INP source at Puy de Dôme?
L347: (2015 -> , 2015
L387: 100 nm deionized water?
L415: Are there SEM images or other evidence to indicate “release of all particles from the filter”?
L426: Is “real insects” opposed to plastic or resin insects? “insects” would be sufficient.
L429: sec. -> Sect. Please refer to ACP guidelines.
L443: Does the camera illuminate the droplets?
L450: Define “UNAM”.
L461: Please explicitly indicate aerodynamic particle size here.
L465: x -> ×
L472: It’s very nice to indicate the units of all physical quantities here. The unit for INP concentration is missing. Is fnu supposed to be dimensionless? What is the range of fne used in this study?
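For reference, the standard Vali-type relation (written here with generic symbols that may differ from the manuscript's Eq. 1) is

\[ c_{\mathrm{INP}}(T) = -\ln\!\big(f_{\mathrm{u}}(T)\big)\,\frac{V_{\mathrm{sus}}}{V_{\mathrm{drop}}\,V_{\mathrm{air}}}, \]

where f_u, the unfrozen droplet fraction, is dimensionless, V_sus and V_drop cancel as long as they share a volume unit, and V_air is the sampled (standard) air volume, so c_INP comes out in std L-1 when V_air is expressed in standard litres.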
L480: Delete “as”. Please check the usage throughout.
L496-498: Please reword.
L499: Fig. 3, panel a -> Fig. 3a. Please replace other appearances accordingly.
L500-502: Could this be caused by the absence of correction factor(s)? Please refer to the comment on L252.
L514-515: What does this statement mean?
L515-523: What about the residence time difference for CSU-CFDC and SPIN?
L525: “uncalibrated” doesn’t sound right.
L528-529: Delete “(Fig. 3, panel b)”. Add “Fig. 3” in the second bracket.
L532-535: What about the residence time of aerosols in CSU-CFDC and PINE?
L547-550: Again, is it possible to measure filters collected on the rooftop and downstream of the WAI during the same period with the same instrument and check the difference?
L557: Delete “impactor”, the I in MOUDI is impactor.
L582-584: Please specify “a large temperature range”. FRIDGE and IS seem to cover even wider temperature ranges in Figs. 5-7.
L585-586: Can the authors add the comparison between two different samplers in the appendix?
L588-L593: Tab. 2 -> Table 2. Please refer to ACP guidelines. Does droplet size (2.5 μL vs. 11 μL) play a role here?
L594-595: Which stage(s) and size(s) did UNAM-MOUDI-DFT use to quantify INP concentration?
L596-597: Again, SEM images before and after washing would help.
L598: Puy de Dome -> Puy de Dôme
L602-603: miss -> lose. By saying “impaction”, do the authors mean particle impaction on the WAI surface? nanometer-sized what? It would be helpful to perform an estimation of diffusion loss.
L605: Again, what’s the dominant source of INP at Puy de Dôme in October?
L616-617: Please reword.
L634: Please reconsider a more informative and precise title.
L640: What is “the presence of aerosol particles”?
L643: Is “lognormal difference” appropriate here?
L645-646: Please clarify these are number concentrations.
L647: larger than 0.5 and 1 μm -> between 0.5 – 2.5 μm and 1 – 2.5 μm.
L649: Does the inlet refer to WAI discussed between L604-609? If yes, how come PM2.5 are mostly lost when 10 μm particles have a transmission efficiency above 60%?
L666: Please reconsider a more informative title.
L671-673: Please reword.
L682-683: Shouldn’t this conclusion be drawn by comparing the results for INDA and LINA using quartz fiber filters, and polycarbonate filters with 200 nm and 800 nm pores?
L689: Please specify “the examined size range”.
Figure 1:
- Full names of the acronyms should be given in the caption.
- L166: within -> with
Table 1:
- Instrument names should be consistent throughout. Please define UNAM-MOUDI-DFT.
Figure 2:
- It would be helpful to indicate the factors of 2 and 5 ranges for the INP concentration of CSU-CFDC.
- How are the error bars for each instrument calculated? Please elaborate.
- The INP concentrations exceed 20 #/L at -30 °C. Do the authors have an explanation for such a high INP concentration?
- L490: Please keep the same order of instruments in the caption and legend.
- L491: The time resolution of CSU-CFDC doesn’t seem like 1 minute.
- Can the authors provide the full-time series in the appendix?
Figure 3 and Table 2:
- Why do the three online instruments have so few inter-comparable data points with a much higher time resolution?
- Is there a specific reason for the authors to choose CSU-CFDC and INSEKT as the baselines/references for online and offline INP measurement techniques? Do the intermediate measured values by these two instruments play a role in reference instrumentation selection? L493-496 and L582-584 partly address the concern.
- The information in Table 2 can be merged into Fig. 3 and Fig. 8.
Figure 4:
- Inconsistent usage of instrument name in the caption and legend.
- Hasn’t this information already been included in Figs. 5 - 7?
Figures 5 - 7:
- The marker colors for online instruments are hard to distinguish. Please consider changing marker shapes for different online instruments.
- Is it better to keep one representative panel, and move the other to the appendix?
- Please include filter information.
- L577: missing “of”.
Figure 8:
- Inconsistent usage of instrument name in the caption and y-label.
Figure 9:
- The symbols are hard to read.
Figure 10:
- Please change the color and label color of the right y-axis in panel d.
- L662: Please reword the caption.
Citation: https://doi.org/10.5194/egusphere-2023-1125-RC1
RC2: 'Comment on egusphere-2023-1125', Anonymous Referee #2, 13 Aug 2023
Lacher et al. do an extensive intercomparison of INP measurement techniques at the Puy de Dôme research station. They generally find that all of the measurement techniques agree reasonably well, especially when accounting for instrumental differences and the sampling location (in front of/behind the inlet). The manuscript is extremely well written and shows that the INP community has developed a suite of different INP measurement techniques that are capable of robustly measuring INPs in the field. I recommend the manuscript be published once the comments below are addressed.
General comments:
This is a very technical paper, which makes huge efforts to do a much needed instrument comparison. However, due to its technical nature, and since it is primarily an instrument intercomparison, I wonder if it would be more appropriate for it to be published in AMT. I know this is up to the authors and editor, but it is something I would consider, since the paper mainly discusses and compares instrument sampling discrepancies rather than investigating processes or the atmospheric concentration of INPs.
As I was reading, I often found myself wondering why the instrument intercomparison did not do more to ensure that everything was the same across measurement platforms (e.g. the same filter water distributed across DFTs, the same set points on the CFDCs). Then I realized that one of the strengths of the study is that the comparison was conducted using the native sampling format of each measurement technique. I think this point could be strengthened/emphasized, as it is a very nice conclusion showing that all of the tested techniques give representative INP concentrations and are equally usable in the field.
It is not immediately clear whether all of the INP concentrations are reported in std L-1 (although the figure units are std L-1). Perhaps make clear how this is reported, especially since for some of the filter techniques (i.e. IS, INSEKT) a mass flow is explicitly mentioned.
At a more critical level, sometimes the main messages of the paper were not very clear. First, there is a lot of discussion about why the instruments may not agree completely. I appreciate the authors taking the time to discuss these uncertainties/factors. However, all of the discussion is quite speculative, and some calculations could determine whether the proposed factors are the culprit or not. Second, is such a long discussion warranted when the measurement techniques agree so well, i.e. with the majority agreeing within a factor of 5 or better? Along the same lines, at what point is the agreement high enough that we no longer need to do intercomparisons? Have we reached this agreement level? How much of an influence would a factor-of-5 agreement have on the development of parametrizations and modelling results? Lastly, if we don’t expect to get closer than a factor of 5, do we really need more field intercomparisons? Wouldn’t it make more sense to do careful lab intercomparisons where we can adjust our techniques to achieve better agreement in a controlled setting?
Minor comments:
Abstract: Consider adding a discussion about the importance of INPs as this is geared for ACP; if AMT is the end journal, then it is fine as is.
Line 155-156: Is there a reference for these reported transmission efficiencies?
Line 181-193: This is a bit unclear. So the concentrator was only used occasionally, but how does this work when comparing with PINE, which fills for a fixed interval and then expands, while the CFDCs measure continuously? Was the CFDC data excluded when PINE was filling, as there would be a different flow ratio in the concentrator when all three instruments sampled? Do you expect this to be a potential problem / lead to a change in the concentration factors, i.e. a change in the particle size distribution?
Role of impactors: Does it matter that SPIN uses one impactor and CSU-CFDC uses two? The impactor cut size is distribution based, so would this lead to more larger particles making it into SPIN than into the CSU-CFDC? Of course this does not seem to be the case based on the results, but it could be something to mention if this is another source of uncertainty. I see this is now discussed a bit in the results. Perhaps it would be nice to have a table with the online instruments and their associated setup, e.g. impactors, set conditions, etc.
Section 2.2.2 - Is it concerning that the RHw in SPIN was ~3 % lower than in the CSU-CFDC, yet the ice crystal detection threshold was one micron higher? It might be worth mentioning that the longer growth (residence) time in SPIN would ensure that, at this supersaturation, the ice crystals would still reach a 5 micron size even with the lower RHw than in the CSU-CFDC, if this is the case. This is also discussed later but not actually calculated. Consider doing this calculation.
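A rough version of that calculation, as a minimal sketch (not the authors' analysis; the temperature, water saturation ratio, residence time, and material constants below are assumed, approximate values), suggests that diffusion-limited growth would indeed carry nucleated crystals past a 5 micron optical detection threshold within a few seconds:

```python
import math

# Assumed conditions (illustrative, approximate)
T = 248.0          # lamina temperature [K] (~ -25 °C)
RH_w = 1.02        # saturation ratio w.r.t. liquid water (~102 %)
t = 5.0            # assumed residence/growth time in the chamber [s]
r0 = 0.5e-6        # initial ice-germ radius [m]

# Approximate material properties near -25 °C
e_si = 63.3        # saturation vapour pressure over ice [Pa]
e_sw = 80.9        # saturation vapour pressure over water [Pa]
D_v = 2.1e-5       # water vapour diffusivity [m2 s-1]
k_a = 0.022        # thermal conductivity of air [W m-1 K-1]
L_s = 2.838e6      # latent heat of sublimation [J kg-1]
R_v = 461.5        # gas constant of water vapour [J kg-1 K-1]
rho_i = 917.0      # bulk ice density [kg m-3]

S_i = RH_w * e_sw / e_si   # saturation ratio w.r.t. ice (~1.30)

# Diffusional growth of a spherical ice particle (capacitance = r),
# r dr/dt = (S_i - 1) / (F_k + F_d), as in standard cloud-physics textbooks
F_d = rho_i * R_v * T / (D_v * e_si)
F_k = (L_s / (R_v * T) - 1.0) * L_s * rho_i / (k_a * T)

r = math.sqrt(r0**2 + 2.0 * (S_i - 1.0) / (F_k + F_d) * t)
print(f"diameter after {t:.0f} s: {2e6 * r:.1f} micrometres")  # well above 5 um
```

The result is of course sensitive to the assumed residence time and neglects ventilation and habit effects, but it supports the suggestion that the longer growth time in SPIN can compensate for the lower RHw.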
Line 258-259: Why does the limit of detection double behind the concentrator? Aren’t the background counts from the wall the same regardless of the concentrator? Is this just due to having more "air" going into the instrument? This is not immediately clear; consider explaining it a bit better.
Section 2.2.3 -
Firstly, does this mean that PINE can only measure between -19 and -13 °C, based on the start temperature of -13 °C and only a 6 degree cooling? This is not immediately clear. It might be nice to add a table with the temperatures at which the comparisons were conducted (see previous comment about impactors).
Second, during the continuous expansion, aerosol particles are removed; is there therefore some sort of correction for the decrease in effective volume in the chamber? I ask because, depending on the ambient pressure (I guess around 850 hPa), a 300 hPa decrease corresponds to ~35 % loss of aerosol during the expansion (see the quick check after these points). This would suggest that, depending on when in the cycle the set temperature is reached, a fraction of the aerosol is already removed (albeit still less than the factor of 2 threshold). Please add information about this correction if it is performed or necessary.
Third, does the lack of impactor here also mean larger particles are expected to enter PINE than the other two online measurements since a cut size is distribution based?
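Regarding the second point above, a quick check of the quoted dilution (hedged; the 850 hPa ambient pressure and 300 hPa expansion depth are the assumed values from the comment): if well-mixed chamber air is pumped from p_0 down to p_0 - Δp at roughly constant temperature, the removed fraction of air, and hence of suspended aerosol, is

\[ \frac{\Delta p}{p_0} = \frac{300\ \mathrm{hPa}}{850\ \mathrm{hPa}} \approx 0.35, \]

i.e. about 35 %; accounting for the simultaneous adiabatic cooling shifts this by only a few percentage points.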
Table 1: Consider adding the total volume of the air sampled for each filter technique as well as the minimum INP detection limit for each instrumental technique in the table. I know that this is corrected for, but I still think it is nice to see since the range in INP concentration covered appears to vary quite a bit.
Line 343-344: Consider adding a reference where cold transportation is discussed, e.g. Beall et al. (2020).
Line 369: You could add Sarah Grawe’s new paper about HERA if you wanted (Grawe et al., 2023)
Section 3.2-
Here it would be worth mentioning the probability of detecting INPs at the warmer temperatures given the limited sampling volumes used. It looks like LINDA and IS, which are also two of the methods with the largest equivalent sampled air volumes, are the highest.
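As a hedged illustration of this point (generic Poisson counting statistics; the concentration, droplet number, and volumes below are made-up example values, not campaign numbers), the probability of seeing at least one frozen droplet at a rare-INP temperature rises steeply with the sampled air volume:

```python
import math

def detection_probability(c_inp, v_air, n_drops, v_drop, v_wash):
    """Probability of observing at least one frozen droplet for a true INP
    concentration c_inp [std L-1], sampled air volume v_air [std L],
    n_drops droplets of volume v_drop [mL] taken from a wash volume v_wash [mL]."""
    inp_per_droplet = c_inp * v_air * v_drop / v_wash   # expected INPs in one droplet
    p_none = math.exp(-inp_per_droplet) ** n_drops      # Poisson: no INP in any droplet
    return 1.0 - p_none

# Example: a rare warm-temperature INP population, two hypothetical sampled volumes
for v_air in (500.0, 5000.0):                            # std L of air on the filter
    p = detection_probability(c_inp=1e-3, v_air=v_air, n_drops=96,
                              v_drop=0.05, v_wash=10.0)
    print(f"V_air = {v_air:>6.0f} std L -> P(detect) = {p:.2f}")
```

With these example numbers, a tenfold larger sampled volume raises the detection probability from roughly 20 % to 90 %, consistent with the highest-volume methods reporting the warmest-temperature freezing events.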
Line 456-458: Again it would be worth citing the importance of transporting samples frozen or at least the impact of warmer temperature transport (e.g. Beall et al., 2020)
Line 466: What are the impacts of such a high cooling rate? Previous studies have looked into the impact of cooling rate (e.g. Budke and Koop, 2015) and have seen a difference due to the stochastic nature of ice nucleation. This should also be discussed more in the results and not just as a conclusion. See point again below.
Line 495: Even though the CSU-CFDC is a well-known and established measurement system, please add some references that attest to its established reputation.
Editorial comments:
Line 10: please switch to listing coldest temperature first followed by the warmer temperature when giving a T range. This should be consistent throughout the entire manuscript
Line 38: Although this is a complete list, you could add studies that show a relationship between INP concentrations and cloud phase e.g. (Creamean et al., 2022; Carlsen and David, 2022; Sze et al., 2023)
Line 74-78: It would be nice to add a reference for this
Line 149: program -> programs
Figure 1: Please have T range go from cold to warm
Line 308: remove the additional “periods”
Eq. 1: is it appropriate to call it a Vsol as you later refer to it as the droplets containing the suspension? Consider calling it Vsus or similar for suspension, but this is just a semantics thing.
Line 347: add ) at end of Hiranuma ref.
Line 347: Consider adding that the filter had been exposed or was a sample containing filter or something similar
Line 495-496: It’s a bit unclear here what is meant by the true INP concentration? Does this lead to an undercounting or overcounting? My guess would be undercounting, I would specify this.
Line 496: comma before which
Line 500: It might be worth mentioning that an undercounting by SPIN might be expected, as observed in Garimella et al. (2017). I see this is done later (see comment below), but it is interesting that SPIN would have more lamina spreading for lower RH settings than the CSU-CFDC. This could be mentioned and is consistent with Garimella et al. (2017), who reported that spreading can contribute to a factor of 10 undercounting, which is more than the factor of 3 reported for the CSU-CFDC.
Line 514: comma before which
Line 520-527: Here the discussion of spreading is well discussed. I would consider reordering and only mention these issues once, like done in this section.
Line 745-747: Please make sure this is also discussed in the main text, as it is an interesting point to raise. Does it make sense to consider longer time scales for the importance of time-dependence at these warm temperatures, i.e. maybe you need longer time scales like 5-10s of minutes for the stochasticity to matter, e.g. Budke and Koop (2015), who needed a very low freezing rate (0.1 K/min) to observe a big difference?
References:
Beall, C. M., Lucero, D., Hill, T. C., DeMott, P. J., Stokes, M. D., and Prather, K. A.: Best practices for precipitation sample storage for offline studies of ice nucleation, Atmospheric Meas. Tech. Discuss., 1–20, https://doi.org/10.5194/amt-2020-183, 2020.
Budke, C. and Koop, T.: BINARY: an optical freezing array for assessing temperature and time dependence of heterogeneous ice nucleation, Atmospheric Meas. Tech., 8, 689–703, https://doi.org/10.5194/amt-8-689-2015, 2015.
Carlsen, T. and David, R. O.: Spaceborne Evidence That Ice-Nucleating Particles Influence High-Latitude Cloud Phase, Geophys. Res. Lett., 49, e2022GL098041, https://doi.org/10.1029/2022GL098041, 2022.
Creamean, J. M., Barry, K., Hill, T. C. J., Hume, C., DeMott, P. J., Shupe, M. D., Dahlke, S., Willmes, S., Schmale, J., Beck, I., Hoppe, C. J. M., Fong, A., Chamberlain, E., Bowman, J., Scharien, R., and Persson, O.: Annual cycle observations of aerosols capable of ice formation in central Arctic clouds, Nat. Commun., 13, 3537, https://doi.org/10.1038/s41467-022-31182-x, 2022.
Grawe, S., Jentzsch, C., Schaefer, J., Wex, H., and Stratmann, F.: Next-generation ice nucleating particle sampling on aircraft: Characterization of the High-volume flow aERosol particle filter sAmpler (HERA), Atmospheric Meas. Tech. Discuss., 1–30, https://doi.org/10.5194/amt-2023-88, 2023.
Sze, K. C. H., Wex, H., Hartmann, M., Skov, H., Massling, A., Villanueva, D., and Stratmann, F.: Ice-nucleating particles in northern Greenland: annual cycles, biological contribution and parameterizations, Atmospheric Chem. Phys., 23, 4741–4761, https://doi.org/10.5194/acp-23-4741-2023, 2023.
Citation: https://doi.org/10.5194/egusphere-2023-1125-RC2
- AC1: 'Comment on egusphere-2023-1125', Larissa Lacher, 09 Oct 2023
- AC2: 'Comment on egusphere-2023-1125', Larissa Lacher, 09 Oct 2023
Journal article(s) based on this preprint
- Lacher, L., et al.: The Puy de Dôme ICe Nucleation Intercomparison Campaign (PICNIC): comparison between online and offline methods in ambient air, Atmos. Chem. Phys., https://doi.org/10.5194/acp-24-2651-2024, 2024.
Michael P. Adams
Kevin Barry
Barbara Bertozzi
Heinz Bingemer
Cristian Boffo
Yannick Bras
Nicole Büttner
Dimitri Castarede
Daniel J. Cziczo
Paul J. DeMott
Romy Fösig
Megan Goodell
Kristina Höhler
Thomas C. J. Hill
Conrad Jentzsch
Luis A. Ladino
Ezra J. T. Levin
Stephan Mertes
Ottmar Möhler
Kathryn A. Moore
Benjamin J. Murray
Jens Nadolny
Tatjana Pfeuffer
David Picard
Carolina Ramírez-Romero
Mickael Ribeiro
Sarah Richter
Jann Schrod
Karine Sellegri
Frank Stratmann
Benjamin E. Swanson
Erik Thomson
Heike Wex
Martin Wolf
Evelyn Freney