the Creative Commons Attribution 4.0 License.
Developing a low-cost device for estimating air–water ΔpCO2 in coastal environments
Abstract. The ocean is one of the world's largest anthropogenic carbon dioxide (CO2) sinks, but closing the carbon budget is logistically difficult and expensive, and uncertainties in carbon fluxes and reservoirs remain. Specifically, measuring the CO2 flux at the air–sea interface usually requires costly sensors or analyzers (>30,000 USD), which can limit what a group is able to monitor. To address this limitation, our group has developed and validated a low-cost (~1,400 USD) ΔpCO2 system with Internet of Things (IoT) capabilities, built around a ~100 USD K30 pCO2 sensor. Our Sensor for the Exchange of Atmospheric CO2 with Water (SEACOW) may be placed in an observational network alongside traditional pCO2 sensors to extend the spatial coverage and resolution of monitoring systems. After calibration, the SEACOW reports atmospheric pCO2 measurements within 2–3 % of measurements made by a calibrated LI-COR LI-850. We also demonstrate the SEACOW's ability to capture diel pCO2 cycling in seagrass, provide recommendations for SEACOW field deployments, and provide additional technical specifications for the SEACOW and for the K30 itself (e.g., air- and water-side 99.3 % response times of 5.7 and 29.6 minutes, respectively).
Status: closed (peer review stopped)
RC1: 'Comment on egusphere-2024-3375', Anonymous Referee #1, 17 Dec 2024
This manuscript details the design of a Do-It-Yourself (DIY) pCO2 sensor. The design cleverly uses two sample loops to successively measure in-air and in-water CO2 streams, which are passively equilibrated across membranes. The CO2 detector used in this work is an inexpensive infrared gas analyzer used in previous designs of low-cost CO2 sensors. The authors do a nice job of providing the details of their design, including parts lists, 3D drawings, and circuit design. They describe some tests of sensor performance, and then present results from a two-week mesocosm-type evaluation of sensor response using tanks with and without seagrass. The fundamentals of a good technical note are here, but I was left with several questions which I think should be addressed.
MAJOR POINTS
-Use of the proposed design is framed around coastal or perhaps estuarine environments, but why? In theory this design should be applicable to freshwaters as well, where direct measurements of pCO2 are lacking. Why not use this design to study lakes, for example? Is there concern about the equilibration time for higher-CO2 waters? I think the authors are limiting themselves a bit. Similarly, the authors repeatedly describe how using this design as a “delta-pCO2” device helps compensate for potential CO2 detector drift, which is a good point, but I imagine many researchers would want to use this device to measure absolute pCO2 for studying quantities other than air-sea fluxes.
-I have concerns about the results in Figure 4B. If I am reading the Methods correctly (section 2.2.2), these results are from measurements of 2L of DI water which was bubbled with 1,000 ppm CO2 for 24 hours. It seems that the pCO2 of this water should therefore be right around 1,000 uatm, but SEACOW3 is reading a pCO2 of only about 550 uatm. Is there a potential accuracy issue shown here which is not discussed, or am I misunderstanding what these data are showing? I know that these data are mostly intended to show the response time, but the accuracy discrepancy caught my eye.
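The expected value the referee refers to can be checked with a one-line calculation: water bubbled to equilibrium with a 1,000 ppm CO2 stream at 1 atm should sit near 1,000 uatm, reduced only slightly if the gas humidifies in the water. A minimal sketch (the vapor-pressure value is an assumption for roughly 25 °C, not a number from the manuscript):

```python
# Expected equilibrium pCO2 of water bubbled with a 1,000 ppm CO2 gas stream
# at 1 atm. If the supply gas is dry it humidifies as it bubbles, so the
# effective CO2 partial pressure is xCO2 * (P_total - pH2O).

x_co2 = 1000e-6        # mole fraction of CO2 in the supply gas
p_total_atm = 1.0      # total pressure (atm)
p_h2o_atm = 0.0313     # saturation vapor pressure of water near 25 degC (atm), assumed

pco2_dry_uatm = x_co2 * p_total_atm * 1e6                # no humidity correction
pco2_wet_uatm = x_co2 * (p_total_atm - p_h2o_atm) * 1e6  # humidified gas

print(round(pco2_dry_uatm, 1))  # 1000.0
print(round(pco2_wet_uatm, 1))  # 968.7
```

Either way the expected pCO2 sits in the roughly 970–1,000 uatm range, well above the ~550 uatm reading the referee flags.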
-The Discussion is very perfunctory. I think there is more that the authors should examine, including potential reasons for SEACOW 2 and 4 failures, other deployment use cases, potential improvements to the sensor design, and putting the data quality in perspective (perhaps in the climate/weather framework like that suggested by the IWG-OA).
MINOR POINTS
-These references are very relevant to this manuscript:
Lee, D.J.J., Kek, K.T., Wong, W.W., Mohd Nadzir, M.S., Yan, J., Zhan, L. and Poh, S.-C. (2022), Design and optimization of wireless in-situ sensor coupled with gas–water equilibrators for continuous pCO2 measurement in aquatic environments. Limnol Oceanogr Methods, 20: 500-513. https://doi.org/10.1002/lom3.10500
Robison, A. L., L. E. Koenig, J. D. Potter, L. E. Snyder, C. W. Hunt, W. H. McDowell, and W. M. Wollheim. 2024. Lotic-SIPCO2: Adaptation of an open-source CO2 sensor system and examination of associated emission uncertainties across a range of stream sizes and land uses. Limnology and Oceanography: Methods 22: 191–207. doi:10.1002/lom3.10600
-L17 what about water-side pCO2 accuracy? This is an important metric
-L13 the concept of IoT is mentioned a few times, but I’m not sure what IoT capabilities are added in the design. Some sort of communication upgrade?
-L25 be clear that “sources” are sources of CO2 from aquatic to atmospheric reservoirs
-L29: how high of a temporal or spatial resolution is needed to constrain CO2 flux budgets?
-L38: Is the CO2-Pro a “delta-pCO2” device as defined in this manuscript? I thought the CO2-Pro only measured in-water pCO2. Furthermore, I think the authors should explicitly define what makes a “delta-pCO2” device earlier than L53, as this is a term I haven’t seen in common use. I suggest not using this term in the title either, perhaps substituting it for “air-water CO2 fluxes”.
-Figure 1: I like this figure a lot. It would be nice to see what the device looks like with the housing as well. Also, could there be some sort of air-water boundary line included, and maybe blue shading below this line to easily indicate the water side?
-L80: why was the ABC turned off?
-L81-87: can you talk more about the logging and communication systems?
-Figure 2: some more detail would be useful: which lines are air in or out, and what are the various wires? The housing also has the BME280 inside I believe. I think this figure could be expanded a lot with pictures of the components described in Lines 90-105 too. This Figure can be a really useful resource to help readers understand how the various components work together.
-L112: 250 ppm steps are pretty large for examining atmospheric CO2
-L114: This confuses me. Eq. 2 requires inputs of m(dry) and b(dry), but Lines 113-114 say these parameters are produced by applying Eqs. 2-4. I’m not sure how these calculations were done.
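For readers puzzling over the same point: a dry calibration of this kind is normally an ordinary least-squares fit of reference values against raw K30 readings, with m(dry) and b(dry) as the slope and intercept. A minimal sketch with made-up numbers (none of these values come from the manuscript):

```python
# Illustrative dry calibration: fit reference analyzer values against raw
# K30 readings, then apply the slope/intercept to later raw readings.
# All numbers are hypothetical.
import numpy as np

k30_raw = np.array([400.0, 650.0, 900.0, 1150.0])    # hypothetical raw K30 readings (ppm)
reference = np.array([420.0, 670.0, 915.0, 1160.0])  # hypothetical LI-850 values (ppm)

m_dry, b_dry = np.polyfit(k30_raw, reference, 1)     # slope and intercept

def calibrate(raw):
    """Apply the dry calibration to a raw K30 reading."""
    return m_dry * raw + b_dry

print(round(calibrate(800.0), 1))  # 815.9
```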
-L138: I need more description of how this K30 reading vs vapor pressure curve was done, and maybe even an example in the Supplementary.
-L147-149: if the drying system is only removing some of the water vapor, is it really needed? What’s the advantage in keeping the drying system, especially if the 100% humidity response can be accounted for?
-L161: I think this is a typo and should say “control tank”. My understanding is that two “tanks” were used, and each tank had several “containers” of sediment. Is that right?
-L163: is a “power filter” some sort of pumped filter unit?
-L165: the tanks were “closed-loop” with respect to water flow, but not to air exchange I believe. Were they covered in any way?
L167: can you rephrase this line?
L168: I believe each “tank” had four “containers” inside. So each “container” had one light?
L186-189: I’m not sure that the discrete sample pCO2 calculations add much to this manuscript. The discussion of them is pretty minimal and the calculated pCO2 seems hard to interpret (perhaps unsurprisingly). I would be OK with them being omitted as analytical issues seem to keep them from being useful. However, if the discrete data remain, then there needs to be a lot more analytical detail regarding the TA and DIC measurements (accuracy/precision? CRM used? Propagated uncertainties in calculated pCO2? etc).
-Figure 3: It’s very interesting that plumber’s tape could be used as a cheap (although not very durable) membrane- I never considered that!
L197-201: Tie this paragraph more explicitly to the Methods. Was the same chamber used as described in 2.2.2?
-Section 3.2: It looks like there was some variability in the response time among the SEACOW units. Can you put some uncertainties around the 1T/5T estimates?
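One way to attach uncertainties to the 1T/5T estimates, as requested, is to fit each unit's step response to a first-order model and report the e-folding time with its fit standard error. A sketch on synthetic data (the time constant and noise level are assumptions chosen to resemble the reported ~5.7 min air-side 99.3 % response, not values from the paper):

```python
# First-order step response: C(t) = C_final + (C_0 - C_final) * exp(-t/tau).
# 1T is the e-folding time tau; 5T (~99.3 % response) is 5 * tau.
# Synthetic data; tau and noise level are assumed, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, c0, c_final, tau):
    return c_final + (c0 - c_final) * np.exp(-t / tau)

t = np.linspace(0.0, 10.0, 101)  # minutes
rng = np.random.default_rng(0)
c_obs = step_response(t, 420.0, 1000.0, 1.14) + rng.normal(0.0, 2.0, t.size)

popt, pcov = curve_fit(step_response, t, c_obs, p0=(400.0, 950.0, 1.0))
tau_hat, tau_sd = popt[2], float(np.sqrt(pcov[2, 2]))
print(f"1T = {tau_hat:.2f} +/- {tau_sd:.2f} min, 5T = {5 * tau_hat:.2f} min")
```

Fitting each SEACOW's trace this way would give per-unit tau values whose spread, and per-fit standard errors, could be reported directly.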
-Figure 4B: Why is only one SEACOW unit shown here?
-L236-237: can you explain “difference in sensor gain could contribute to inaccuracy as delta-pCO2 increases” more? I don’t follow that statement.
-Figure 6: can uncertainty bounds be put around the pCO2 lines in these plots? Also label each panel with the appropriate sensor unit (SEACOW-1 on top, SEACOW-4 on bottom).
-L250: talk more about this air leak, either here or in the Discussion. Do you have advice on how to avoid this in the future? How could someone trying to replicate this design avoid this problem?
-L256-259: I don’t see the described pattern. To my eye, the data in Figure 6 show decreasing pCO2 during the day and rising pCO2 at night.
-L261: I don’t see a big change in air pCO2 after addition of the DI water (can this addition be marked with a vertical line or something in the Figures?)
-Table 2- Discuss these data more, perhaps in the Discussion section. What does the characterization indicate about potential uses for the device, comparison to other sensors, or use of the data, perhaps in the context of the climate vs. weather data quality criteria proposed by the IWG-OA? Also discuss why two of the units failed and how others might avoid these problems.
-L280-289: this paragraph reads more like a part of the Introduction.
-L303-304: if someone building this device still needs to characterize the humidity response, then is it still useful to have the Drierite system? Would removing this drying system bring down the cost or complexity or increase deployment times?
-Figure S1 (caption)- I see the DO as red instead of purple.
Citation: https://doi.org/10.5194/egusphere-2024-3375-RC1
RC2: 'Comment on egusphere-2024-3375', Anonymous Referee #2, 25 Jan 2025
The manuscript describes the design and performance of a low-cost, Do-It-Yourself pCO2 sensor in the hope that it would be cheap enough and good enough to increase the spatial and temporal coverage of surface pCO2 measurements and thus decrease our uncertainty in the ocean sink, help with regional carbon studies, and even help in the monitoring of marine carbon dioxide removal. The authors do a good job of describing the system, which uses an infrared sensor found in other low-cost instruments and a suite of custom 3D-printed parts combined into a compact package, all of it very conveniently accessible on a GitHub repository for easy reproduction. The effort is a very worthwhile endeavor, as the number of companies trying to do the same shows, but after reading this article I do not believe the results demonstrate that this sensor could perform at the level needed for any of the activities it is meant for. The fact that the signal varies according to the diel cycle of a seagrass bed is not sufficient, especially in view of the several unexplained results. At this stage, I would argue that these results do not merit publication until further improvements/characterization can be performed, as detailed below.
MAJOR ISSUES
1. The accuracy of the instrument seems to vary wildly in-between units.
The authors report an accuracy of 2.5% of the LICOR 850 reading (was that calibrated?). I assume this is an average. Using their numbers, I get a range of 1.3% to 3.3%, which I believe is erroneous (see “Other issues” #5 below). I think the values for the 3 SEACOWs that worked are: 0.8%, 1.9% and 7.3%. Since they probably were built the same way, it raises concern about the robustness of the calibration.
These big variations are for air measurements in a controlled environment, which does not bode well for uncontrolled air or seawater environments where temperature and pressure variations and biofouling will occur.
2. The instrument seems to fail easily.
Out of 4 instruments built, therefore brand new, half failed during the testing period, in a controlled environment. Again, this does not bode well for deployed instruments.
No explanation or reason is given for the failures, so it is hard to judge the gravity of it. I will assume the worst and say there is a fundamental issue with the design.
3. The behavior of the instrument is unpredictable, which makes me wonder if it is really measuring what we think it is.
Figure 4B shows puzzling results, beyond the fact that the system equilibrated at around half the value that it should have (550 instead of 1000).
The authors admit that they cannot explain the sudden very rapid jump in the signal and suggest maybe pressure is to blame. A jump from 450 uatm to 600 uatm would require an increase of 0.33 atm, which would occur if the unit was placed at 3 meters depth and the membrane was soft… neither is the case. The following slower decrease is also unexplained.
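The arithmetic behind the referee's 0.33 atm figure is straightforward: if the reading scales with total pressure at a fixed mole fraction, the required pressure increase is just the ratio of the readings minus one:

```python
# If the detector reading is proportional to total pressure at fixed CO2
# mole fraction, a 450 -> 600 uatm jump needs the total pressure to rise
# by the same ratio.
reading_before = 450.0  # uatm
reading_after = 600.0   # uatm

delta_p_atm = reading_after / reading_before - 1.0  # extra pressure needed (atm)
depth_m = delta_p_atm * 10.3                        # ~10.3 m of water per atm

print(round(delta_p_atm, 2))  # 0.33
print(round(depth_m, 1))      # 3.4
```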
It would also be nice to know if that effect occurs every time the SEACOW is placed in water, or if it happens with all the SEACOWs or just this one…
Blaming it on “imperfections” of the tank (line 217) is like saying you get wrong results because you didn’t do the experiment correctly… this is not good enough for a publication. Also, “…small changes (..) happening faster than the SEACOW could measure…” (lines 218-219) would not appear as rapid spikes in the signal; on the contrary, they would be smoothed out by the sensor’s slow response.
4. A DeltapCO2 instrument…?
I agree with the statement that if you use the same instrument to measure both atmospheric and seawater CO2, any drift will mostly cancel out when taking the difference. But I would argue that this is not the case here because the instrument is using 2 different membranes to measure the two. Yes, the material is the same but it is used in different shapes (flat and tubular), under very different conditions (air and water), which also means that they will undergo very different biofouling and therefore have even more diverging characteristics.
5. Design.
While the idea of measuring both atmospheric and water sides with the same instrument is good, and thoughts have been given to the design, I still see some potential issues which would put in question what the instrument is really measuring. I have not seen the full SEACOW built but from the text, I imagine the air-side membrane will reside close to the seawater surface, which means that, depending on the conditions, it will very likely be wet and covered with seawater, which will affect the measurements. This could be a major issue.
Another question I have relates to the use of the Drierite with the Nafion tube. Not much detail is given, but Nafion tubes have an internal membrane through which the water vapor permeates, but only if the pH2O gradient is favorable. That means you need either lower pressure or lower water concentration on the outside of the membrane. This is why the Drierite is there, to “dry” the outside gas. But if the gas is not actively being circulated, it will quickly saturate with water and the effectiveness of the Nafion will decrease. I doubt diffusion alone would be sufficient. The authors should get a feel for that by looking at the Relative Humidity sensor data, which was not discussed here.
6. I question the statement on line 78 that “…the K30 measures pCO2…”, which seems to imply that the K30 signal is directly proportional to the pCO2 of the medium in contact with the membrane. I believe the authors are confusing this with the fact that the K30 signal is dependent on the pressure inside the sensor, which is very different. The pressure inside the sensor (monitored) is not the same as the one at the equilibration point next to the membrane (not monitored). The authors are just assuming they are, and they might be close but they are not the same.
The signal in the LI-850, or any LI-COR instrument, which are also based on absorption, is a complex function of (cell) pressure, not directly proportional. The K30 is very likely similar.
When the authors calibrate their instrument using the calculated pCO2 with the xCO2 from the LI-850 (I assume that’s what they did), they effectively calibrated their instrument at one (atmospheric) pressure. Those coefficients would probably be different at another pressure.
And by the way, the signal is also probably temperature dependent, which has not been characterized here.
Given the accuracy level, this might not be major but it should be mentioned.
OTHER ISSUES
1. All the measurements of the SEACOWs are compared to those of the LI-850 which is used as the “true” value but we are not told if and how the LICOR has been calibrated.
2. The seagrass experiment has several shortfalls
- the tanks are open to the atmosphere in a lab where it seems people come in and out. The authors should not be surprised the air pCO2 changes. It would have been nice to monitor these changes with the LICOR.
- typically, air pCO2 goes up during the day when people are around and down at night when they leave. Here, it seems to be doing the opposite. It is clearly following a diel pattern but what caused it? Is it external to the sensor or is it something affecting the sensor (like the lights warming up the membrane)?
- the authors are very careful to make the T, P conditions optimal to keep the growth rate of the seagrass steady (I assume to keep the CO2 respiration and photosynthesis constant), yet add these big volumes of DI water every 2-3 days, one addition in the middle of the experiment having a clear effect on the pCO2 of the water. Maybe trying to prevent evaporation would be a better way of controlling the salinity rather than perturbing the system with additions of DI water.
3. TA and DIC samples
First, if you want to calculate pCO2, TA and DIC is the pair that will propagate the most uncertainty into the calculated pCO2, especially if the DIC is of low accuracy. I understand the authors might have been limited by the instruments available to them, but thought I’d mention it here.
Second, the description of the sample collection doesn’t seem to indicate the greatest care in not contaminating the sample (“…dipped it underwater to fill…” or “…added the plunger while it was underwater…” on line 178).
Third, the amounts used (20 ml) seem very small, especially for TA, and therefore carry a lot of uncertainty. The authors should detail the methods.
At this stage, in view of the issues mentioned before, I believe the verification of the values output by the SEACOW is premature.
4. Humidity and pressure correction.
That section is not very clear. Section 2.2.1 states that it used equations (2-4), which are the equations detailing the correction for the water signal, yet section 2.2.1 was using dry gas… therefore not correcting for water. It should be clarified.
5. Table 1.
I believe the numbers listed in the “Post” columns are in error. If I understand correctly, they should be derived from the dry calibration coefficients, using the “Pre” values as K30raw. The numbers don’t match. As a consequence, I also get different rms errors. Calculations should be checked.
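The consistency check described here is easy to script: recompute each “Post” value as m_dry × Pre + b_dry and recompute the rms error against the reference. A sketch with placeholder numbers (none are Table 1 values):

```python
# Recompute "Post" values from "Pre" raw readings and the dry calibration
# coefficients, then the rms error against the reference analyzer.
# All numbers below are placeholders, not Table 1 values.
import numpy as np

m_dry, b_dry = 0.986, 27.1                     # hypothetical coefficients
pre = np.array([405.0, 655.0, 905.0])          # hypothetical "Pre" raw readings
post_listed = np.array([426.4, 672.9, 919.4])  # hypothetical listed "Post" values
reference = np.array([420.0, 670.0, 915.0])    # hypothetical LI-850 values

post_computed = m_dry * pre + b_dry
print(bool(np.allclose(post_computed, post_listed, atol=0.05)))  # True

rms = float(np.sqrt(np.mean((post_computed - reference) ** 2)))
print(round(rms, 2))  # 4.82
```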
I would like to remind the authors that in such an exercise, it is necessary to have the LI-850 properly calibrated, something that has not been mentioned.
Another point I would like to make here is that the 3 SEACOWs have vastly different calibration coefficients, which is a bit surprising. If this is related to variations in membrane properties, is it far-fetched to think that they would also age very differently?
I think that for this instrument development to go forward, the sensor response and characteristics of the membranes need to be studied further, in more controlled and varied (T, P) conditions. Results need to be more detailed (calibrations, calculations, experimental procedures). And finally, evaluate the performance of the SEACOWs between themselves and in time. Experiments need to be more thought out and monitored (if a DO sensor gives bad values, change or fix it. Do not report the bad values)
MINOR ISSUES
Line 23: update the Friedlingstein reference
Line 38-39: CO2 fluxes are not measured by the “devices” you mention. These simply measure the pCO2 in water (and sometimes in air) and use satellite data and mapping techniques to obtain fluxes.
Line 79: why get rid of the ABC? Can you explain?
Line 81-82: not sure that’s relevant.
Line 84: you have not really described how you used that data, except for the RH. Nor the effect of the accuracy of the sensor on your results.
Line 93-93: one of the 2 “surface” is not needed.
Line 99: wouldn’t the sensing component respond faster if it was directly in contact with the air, rather than covered with thermal epoxy?
Line 103: do you have the specs of the temperature sensor?
Line 110: are you using pure CO2? Purity?
Line 121: are you standing around the chamber, breathing, when you open the top of the box? Are other people around?
Line 164: “closed loop systems”. I do not understand what that means here. And why would that make them prone to evaporation? Do you mean “open”?
Line 170: if you measure for 60 min for one aquatic value, how do you account for the changes in aquatic CO2 during that hour? Doesn’t your method assume the CO2 is stable during the measurement?
Line 185: “…wrapped clockwise…” . Is this detail necessary?
Line 188: We usually refer to accuracy rather than inaccuracy for an instrument. You should also give their values.
Figures 5, 6, 7, S1 and S2: it would be best to put either minutes or days elapsed on the x-axis. It would help the reader with calculating the time difference between stages.
Line 253: the cycling in the water could also be influenced by the atmospheric cycling.
Line 256-261: First, rising CO2 during the day should not be unexpected in a lab with people working. Second, if the yellow bands represent the “daylight” hours, this is a pattern I do not see. I mostly see CO2 decreasing during the “day”.
Therefore, I do not see the change in pattern you mention…
Appendix B: I think all these complicated equations simplify to MFC_CO2 = c/(1-c) × MFC_N2, with c = mole fraction (e.g., 250 × 10^-6). No need for R, T, P.
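The simplification the referee states can be verified numerically: blending a pure-CO2 flow into an N2 flow gives mole fraction c = MFC_CO2 / (MFC_CO2 + MFC_N2), which rearranges to the expression above with no dependence on R, T, or P (provided both flows are expressed in the same standard-volume units):

```python
# Check: MFC_CO2 = c / (1 - c) * MFC_N2 reproduces the target mole fraction.
c = 250e-6       # target CO2 mole fraction (250 ppm)
mfc_n2 = 1000.0  # N2 flow (e.g., sccm)

mfc_co2 = c / (1.0 - c) * mfc_n2
achieved = mfc_co2 / (mfc_co2 + mfc_n2)  # mole fraction of the blended stream

print(round(mfc_co2, 4))          # 0.2501
print(abs(achieved - c) < 1e-12)  # True
```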
Citation: https://doi.org/10.5194/egusphere-2024-3375-RC2
Status: closed (peer review stopped)
-
RC1: 'Comment on egusphere-2024-3375', Anonymous Referee #1, 17 Dec 2024
This manuscript details the design of a Do-It-Yourself (DIY) pCO2 sensor. The design cleverly uses two sample loops to successively measure steams air and in-water CO2, which are passively equilibrated across membranes. The CO2 detector used in this work is an inexpensive infrared gas analyzer used in previous designs of low-cost CO2 sensors. The authors do a nice job of providing the details of their design, including parts lists, 3D drawings, and circuit design. They describe some tests of sensor performance, and then present results from a two-week mesocosm-type evaluation of sensor response using tanks with and without seagrass. The fundamentals of a good technical note and here, but I was left with several questions which I think should be addressed.
MAJOR POINTS
-Use of the proposed design is framed around coastal or perhaps estuarine environments, but why? In theory this design should be applicable to freshwaters as well, where direct measurements of pCO2 are lacking. Why not use this design to study lakes, for example? Is there concern about the equilibration time for higher-CO2 waters? I think the authors are limiting themselves a bit. Similarly, the authors repeatedly describe how using this design as a “delta-pCO2” device helps compensate for potential CO2 detector drift, which is a good point, but I imagine many researchers would want to use this device to measure absolute pCO2 for studying quantities other than air-sea fluxes.
-I have concerns about the results in Figure 4B. If I am reading the Methods correctly (section 2.2.2), these results are from measurements of 2L of DI water which was bubbled with 1,000 ppm CO2 for 24 hours. It seems that the pCO2 of this water should therefore be right around 1,000 uatm, but SEACOW3 is reading a pCO2 of only about 550 uatm. Is there a potential accuracy issue shown here which is not discussed, or am I misunderstanding what these data are showing? I know that these data are mostly intended to show the response time, but the accuracy discrepancy caught my eye.
-The Discussion is very perfunctory. I think there is more that the authors should examine, including potential reasons for SEACOW 2 and 4 failures, other deployment use cases, potential improvements to the sensor design, and putting the data quality in perspective (perhaps in the climate/weather framework like that suggested by the IWG-OA).
MINOR POINTS
-These references are very relevant to this manuscript:
Lee, D.J.J., Kek, K.T., Wong, W.W., Mohd Nadzir, M.S., Yan, J., Zhan, L. and Poh, S.-C. (2022), Design and optimization of wireless in-situ sensor coupled with gas–water equilibrators for continuous pCO2 measurement in aquatic environments. Limnol Oceanogr Methods, 20: 500-513. https://doi.org/10.1002/lom3.10500
Robison, A. L., L. E. Koenig, J. D. Potter, L. E. Snyder, C. W. Hunt, W. H. McDowell, and W. M. Wollheim. 2024. Lotic-SIPCO2: Adaptation of an open-source CO sensor system and examination of associated emission uncertainties across a range of stream sizes and land uses. Limnology and Oceanography: Methods 22: 191–207. doi:10.1002/lom3.10600
-L17 what about water-side pCO2 accuracy? This is an important metric
-L13 the concept of IoT is mentioned a few times, but I’m not sure what IoT capabilities are added in the design. Some sort of communication upgrade?
-L25 be clear that “sources” are sources of CO2 from aquatic to atmospheric reservoirs
-L29: how high of a temporal or spatial resolution is needed to constrain CO2 flux budgets?
-L38: Is the CO2-Pro a “delta-pCO2” device as defined in this manuscript? I though the CO2-Pro only measured in-water pCO2. Furthermore, I think the authors should explicitly define what makes a “delta-pCO2” device earlier than L53, as this is a term I haven’t seen in common use. I suggest not using this term in the title either, perhaps substituting it for “air-water CO2 fluxes”.
-Figure 1: I like this figure a lot. It would be nice to see what the device looks like with the housing as well. Also, could there be some sort of air-water boundary line included, and maybe blue shading below this line to easily indicate the water side?
-L80: why was the ABC turned off?
-L81-87: can you talk more about the logging and communication systems more?
-Figure 2: some more detail would be useful: which lines are air in or out, and what are the various wires? The housing also has the BME280 inside I believe. I think this figure could be expanded a lot with pictures of the components described in Lines 90-105 too. This Figure can be a really useful resource to help readers understand how the various components work together.
-L112: 250 ppm steps are pretty large for examining atmospheric CO2
-L114: This confuses me. Eq. 2 requires inputs of m(dry) and b(dry), but Lines 113-114 says these parameters are produced by applying Eq 2-4. I’m not sure how these calculations were done.
-L138: I need more description of how this K30 reading vs vapor pressure curve was done, and maybe even an example in the Supplementary.
-L147-149: if the drying system is only removing some of the water vapor, is it really needed? What’s the advantage in keeping the drying system, especially if the 100% humidity response can be accounted for?
-L161: I think this is a typo and should say “control tank”. My understanding is that two “tanks” were used, and each tank had several “containers” of sediment. Is that right?
-L163: is a “power filter” some sort of pumped filter unit?
-L165: the tanks were “closed-loop” with respect to water flow, but not to air exchange I believe. Were they covered in any way?
L167: can you rephrase this line?
L168: I believe each “tank” had four “containers” inside. So each “container” had one light?
L186-189: I’m not sure that the discrete sample pCO2 calculations add much to this manuscript. The discussion of them is pretty minimal and the calculated pCO2 seems hard to interpret (perhaps unsurprisingly). I would be OK with them being omitted as analytical issues seem to keep them from being useful. However, if the discrete data remain, then there needs to be a lot more analytical detail regarding the TA and DIC measurements (accuracy/precision? CRM used? Propagated uncertainties in calculated pCO2? etc).
-Figure 3: It’s very interesting that plumber’s tape could be used as a cheap (although not very durable) membrane- I never considered that!
L197-201: Tie this paragraph more explicitly to the Methods. Was the same chamber used as described in 2.2.2?
-Section 3.2: It looks like there was some variability in the response time among the SEACOW units. Can you put some uncertainties around the 1T/5T estimates?
-Figure 4B: Why is only one SEACOW unit shown here?
-L236-237: can you explain “difference in sensor gain could contribute to inaccuracy as delta-pCO2 increases” more? I don’t follow that statement.
-Figure 6: can uncertainty bounds be put around the pCO2 lines in these plots? Also label each panel with the appropriate sensor unit 9SEACOW-1 on top, SEACOW-4 on bottom.
-L250: talk more about this air leak, either here or in the Discussion. Do you have advice on how to avoid this in the future? How could someone trying to replicate this design avoid this problem?
-L256-259: I don’t see the described pattern. To my eye, the data in Figure 6 show decreasing pCO2 during the day and rising pCO2 at night.
-L261: I don’t see a big change in air pCO2 after addition of the DI water (can this addition be marked with a vertical line or something in the Figures?)
-Table 2- Discuss these data more, perhaps in the Discussion section. What does the characterization indicate about potential uses for the device, comparison to other sensors, or use of the data, perhaps in the context of the climate vs. weather data quality criteria proposed by the IWG-OA? Also discuss why two of the units failed and how others might avoid these problems.
-L280-289: this paragraph reads more like a part of the Introduction.
-L303-304: if someone building this device still needs to characterize the humidity response, then is it still useful to have the Drierite system? Would removing this drying system bring down the cost or complexity or increase deployment times?
-Figure S1 (caption)- I see the DO as red instead of purple.
Citation: https://doi.org/10.5194/egusphere-2024-3375-RC1 -
RC2: 'Comment on egusphere-2024-3375', Anonymous Referee #2, 25 Jan 2025
The manuscript describes the design and performance of a low-cost, Do-It-Yourself pCO2 sensor in the hope that it would be cheap enough and good enough that it would increase the spatial and temporal coverage of surface pCO2 measurements and thus decrease our uncertainty of the ocean sink, help with regional carbon studies and even help in the monitoring of marine carbon dioxide removal. The authors do a good job at describing the system which uses an infrared sensor used in other low-cost instruments, a suite of custom 3D printed parts to combine in a compact package, all of it very conveniently accessible on a GitHub repository for easy reproduction. The effort is a very worthwhile endeavor, as the number of companies trying to do the same shows, but I don’t believe, after reading this article, that the results are here to show that this sensor could perform at the level needed to do any of the activities it is meant for. The fact that the signal varies according to the diel cycle of a seagrass bed is not sufficient, especially in view of the several unexplained results. At this stage, I would argue that these results do not merit a publication until further improvements/characterization can be performed, as detailed below.
MAJOR ISSUES
1. The accuracy of the instrument seems to vary wildly in-between units.
The authors report an accuracy of 2.5% of the LI-850 reading (was that calibrated?). I assume this is an average. Using their numbers, I get a range of 1.3% to 3.3%, which I believe is erroneous (see “Other Issues” #5 below). I think the values for the three SEACOWs that worked are 0.8%, 1.9% and 7.3%. Since they were presumably built the same way, this raises concern about the robustness of the calibration.
These big variations are for air measurements in a controlled environment, which does not bode well for uncontrolled air or seawater environments, where temperature and pressure vary and biofouling will occur.
2. The instrument seems to fail easily.
Out of the four instruments built, and therefore brand new, half failed during the testing period, in a controlled environment. Again, this does not bode well for deployed instruments.
No explanation or reason is given for the failures, so it is hard to judge their gravity. I will assume the worst and say there is a fundamental issue with the design.
3. The behavior of the instrument is unpredictable, which makes me wonder if it is really measuring what we think it is.
Figure 4B shows puzzling results, beyond the fact that the system equilibrated at around half the value it should have (550 instead of 1000).
The authors admit that they cannot explain the sudden, very rapid jump in the signal and suggest that maybe pressure is to blame. A jump from 450 uatm to 600 uatm would require an increase of 0.33 atm, which would occur if the unit were placed at 3 meters depth and the membrane were soft… neither is the case. The following slower decrease is also unexplained.
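For reference, the arithmetic behind this depth equivalence, assuming the reading scales linearly with total pressure and taking 1 atm ≈ 10.3 m of water, is roughly:

```latex
\frac{P_2}{P_1} \approx \frac{600~\mu\text{atm}}{450~\mu\text{atm}} \approx 1.33
\quad\Rightarrow\quad
\Delta P \approx 0.33~\text{atm} \approx 3.4~\text{m of water}
```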
It would also be nice to know whether that effect occurs every time the SEACOW is placed in water, and whether it happens with all the SEACOWs or just this one…
Blaming it on “imperfections” of the tank (line 217) is like saying you got wrong results because you did not do the experiment correctly… this is not good enough for a publication. Also, “…small changes (…) happening faster than the SEACOW could measure…” (lines 218–219) would imply rapid spikes in the signal; the observations show the contrary.
4. A ΔpCO2 instrument…?
I agree with the statement that if you use the same instrument to measure both atmospheric and seawater CO2, any drift will mostly cancel out when taking the difference. But I would argue that this is not the case here, because the instrument uses two different membranes to measure the two. Yes, the material is the same, but it is used in different shapes (flat and tubular) and under very different conditions (air and water), which also means the membranes will undergo very different biofouling and therefore have even more divergent characteristics.
5. Design.
While the idea of measuring both the atmospheric and water sides with the same instrument is good, and thought has been given to the design, I still see some potential issues that call into question what the instrument is really measuring. I have not seen the full SEACOW built, but from the text I imagine the air-side membrane will reside close to the seawater surface, which means that, depending on the conditions, it will very likely be wet and covered with seawater, which will affect the measurements. This could be a major issue.
Another question I have relates to the use of the Drierite with the Nafion tube. Not much detail is given, but Nafion tubes have an internal membrane through which water vapor permeates, but only if the pH2O gradient is favorable. That means you need either lower pressure or lower water concentration on the outside of the membrane. This is why the Drierite is there: to “dry” the outside gas. But if the gas is not actively circulated, it will quickly saturate with water and the effectiveness of the Nafion will decrease. I doubt diffusion alone would be sufficient. The authors should get a feel for this by looking at the relative humidity sensor data, which were not discussed here.
6. I question the statement on line 78 that “…the K30 measures pCO2…”, which seems to imply that the K30 signal is directly proportional to the pCO2 of the medium in contact with the membrane. I believe the authors are confusing this with the fact that the K30 signal is dependent on the pressure inside the sensor, which is very different. The pressure inside the sensor (monitored) is not the same as the one at the equilibration point next to the membrane (not monitored). The authors are just assuming they are, and they might be close but they are not the same.
The signal in the LI-850, or any LI-COR analyzer (which are also based on absorption), is a complex function of (cell) pressure, not directly proportional to it. The K30 is very likely similar.
When the authors calibrate their instrument using the calculated pCO2 with the xCO2 from the LI-850 (I assume that’s what they did), they effectively calibrated their instrument at one (atmospheric) pressure. Those coefficients would probably be different at another pressure.
And by the way, the signal is also probably temperature dependent, which has not been characterized here.
Given the accuracy level, this might not be major but it should be mentioned.
OTHER ISSUES
1. All the measurements of the SEACOWs are compared to those of the LI-850, which is used as the “true” value, but we are not told if and how the LI-COR has been calibrated.
2. The seagrass experiment has several shortcomings
- The tanks are open to the atmosphere in a lab where it seems people come in and out. The authors should not be surprised that the air pCO2 changes. It would have been nice to monitor these changes with the LI-COR.
- Typically, air pCO2 goes up during the day when people are around and down at night when they leave. Here, it seems to do the opposite. It is clearly following a diel pattern, but what causes it? Is it external to the sensor, or is it something affecting the sensor (like the lights warming up the membrane)?
- The authors are very careful to make the T, P conditions optimal to keep the growth rate of the seagrass steady (I assume to keep CO2 respiration and photosynthesis constant), yet they add large volumes of DI water every 2–3 days, and one addition in the middle of the experiment has a clear effect on the pCO2 of the water. Preventing evaporation might be a better way of controlling the salinity than perturbing the system with DI water additions.
3. TA and DIC samples
First, if you want to calculate pCO2, TA and DIC is the pair that propagates the most uncertainty into the calculated pCO2, especially if the DIC is of low accuracy. I understand the authors might have been limited by the instruments available to them, but I thought I’d mention it here.
Second, the description of the sample collection does not seem to indicate the greatest care in avoiding contamination of the sample (“…dipped it underwater to fill…” or “…added the plunger while it was underwater…” on line 178).
Third, the amounts used (20 ml) seem very small, especially for TA, and therefore carry a lot of uncertainty. The authors should detail the methods.
At this stage, in view of the issues mentioned before, I believe the verification of the values output by the SEACOW is premature.
4. Humidity and pressure correction.
That section is not very clear. Section 2.2.1 states that it used equations (2–4), which detail the correction for the water signal, yet Section 2.2.1 used dry gas and therefore should not be correcting for water. This should be clarified.
5. Table 1.
I believe the numbers listed in the “Post” columns are in error. If I understand correctly, they should be derived from the dry calibration coefficients, using the “Pre” values as K30raw. The numbers don’t match. As a consequence, I also get different rms errors. Calculations should be checked.
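As a sketch of the check being suggested here, assuming a linear dry calibration of the form pCO2 = a·K30raw + b (the functional form, coefficients, and readings below are hypothetical, not taken from Table 1):

```python
import math

def apply_calibration(raw_values, a, b):
    """Apply linear dry-calibration coefficients to raw K30 readings."""
    return [a * r + b for r in raw_values]

def rms_error(values, reference):
    """Root-mean-square difference from the reference analyzer."""
    return math.sqrt(sum((v - r) ** 2 for v, r in zip(values, reference))
                     / len(values))

# Illustrative numbers only (not from Table 1):
pre = [420.0, 455.0, 480.0]   # hypothetical "Pre" K30 readings (uatm)
ref = [415.0, 450.0, 478.0]   # hypothetical LI-850 reference (uatm)
a, b = 0.98, 10.0             # hypothetical calibration coefficients

post = apply_calibration(pre, a, b)
print([round(p, 1) for p in post])    # → [421.6, 455.9, 480.4]
print(round(rms_error(post, ref), 2))  # → 5.3
```

Recomputing the “Post” column and the RMS error this way, from the stated coefficients and “Pre” values, is a quick internal-consistency check readers could perform.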
I would like to remind the authors that in such an exercise, it is necessary to have the LI-850 properly calibrated, something that has not been mentioned.
Another point I would like to make here is that the 3 SEACOWs have vastly different calibration coefficients, which is a bit surprising. If this is related to variations in membrane properties, is it far-fetched to think that they would also age very differently?
I think that for this instrument development to go forward, the sensor response and the characteristics of the membranes need to be studied further, in more controlled and varied (T, P) conditions. Results need to be reported in more detail (calibrations, calculations, experimental procedures). Finally, the performance of the SEACOWs should be evaluated against each other and over time. Experiments need to be more thought out and monitored (if a DO sensor gives bad values, change or fix it; do not report the bad values).
MINOR ISSUES
Line 23: update the Friedlingstein reference
Line 38-39: CO2 fluxes are not measured by the “devices” you mention. These simply measure the pCO2 in water (and sometimes in air) and use satellite data and mapping techniques to obtain fluxes.
Line 79: why get rid of the ABC? Can you explain?
Line 81-82: not sure that’s relevant.
Line 84: you have not really described how you used that data, except for the RH. Nor the effect of the accuracy of the sensor on your results.
Line 93-93: one of the two instances of “surface” is not needed.
Line 99: wouldn’t the sensing component respond faster if it were directly in contact with the air, rather than covered with thermal epoxy?
Line 103: do you have the specs of the temperature sensor?
Line 110: are you using pure CO2? Purity?
Line 121: are you standing around the chamber, breathing, when you open the top of the box? Are other people around?
Line 164: “closed loop systems”. I do not understand what that means here. And why would that make them prone to evaporation? Do you mean “open”?
Line 170: if you measure for 60 min for one aquatic value, how do you account for the changes in aquatic CO2 during that hour? Doesn’t your method assume the CO2 is stable during the measurement?
Line 185: “…wrapped clockwise…” . Is this detail necessary?
Line 188: We usually refer to accuracy rather than inaccuracy for an instrument. You should also give their values.
Figures 5, 6, 7, S1 and S2: it would be best to put either minutes or days elapsed on the x-axis. It would help the reader with calculating the time difference between stages.
Line 253: the cycling in the water could also be influenced by the atmospheric cycling.
Line 256-261: First, rising CO2 during the day should not be unexpected in a lab with people working. Second, if the yellow bands represent the “daylight” hours, this is a pattern I do not see. I mostly see CO2 decreasing during the “day”.
Therefore, I do not see the change in pattern you mention…
Appendix B: I think all these complicated equations simplify to MFC_CO2 = c/(1 − c) × MFC_N2, with c the mole fraction (e.g. 250 × 10⁻⁶). No need for R, T, P.
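A minimal sketch of this simplified blending relation, assuming volumetric flows at a common temperature and pressure (the function name and sccm units are illustrative):

```python
def co2_flow(n2_flow_sccm, target_mole_fraction):
    """CO2 MFC setpoint needed for a given N2 flow and target xCO2.

    Since both flows are at the same T and P, mole-fraction ratios equal
    flow ratios and R, T, P cancel: F_CO2 = c / (1 - c) * F_N2.
    """
    c = target_mole_fraction
    return c / (1.0 - c) * n2_flow_sccm

# e.g. 250 ppm CO2 in a 1000 sccm N2 stream:
print(co2_flow(1000.0, 250e-6))  # ~0.25 sccm of pure CO2
```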
Citation: https://doi.org/10.5194/egusphere-2024-3375-RC2