This work is distributed under the Creative Commons Attribution 4.0 License.
Learning-based prediction of the particles catchment area of deep ocean sediment traps
Abstract. The ocean biological carbon pump plays a major role in climate and biogeochemical cycles. Photosynthesis at the surface produces particles that are exported to the deep ocean by gravity. Sediment traps, which measure the deep carbon fluxes, help to quantify the carbon stored by this process. However, it is challenging to precisely identify the surface origin of particles trapped thousands of meters deep because of the influence of ocean circulation on the carbon sinking path. In this study, we conducted a series of numerical Lagrangian experiments in the Porcupine Abyssal Plain region of the North Atlantic and developed a machine learning approach to predict the surface origin of particles trapped in a deep sediment trap. Our numerical experiments support its predictive performance, and surface conditions appear to be sufficient to accurately predict the source area, suggesting a potential application with satellite data. We also identify potential factors that affect the prediction efficiency and we show that the best predictions are associated with low kinetic energy and the presence of mesoscale eddies above the trap. This new tool could provide a better link between satellite-derived sea surface observations and deep sediment trap measurements, ultimately improving our understanding of the biological carbon pump mechanism.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-2777', Anonymous Referee #1, 05 Jan 2024
The study is heavily based on the Lagrangian simulations of particle sinking, driven by circulation model results, that are described in Wang et al., 2022 (I think this should be stated more explicitly and include a brief summary of results, best in Section 2). The authors use the results of these simulations to test the possibilities of using machine learning to locate the origin of particles caught in deep-sea sediment traps. The aim is to use observed sea surface conditions (mainly satellite altimetry and SST) to deduce the source of particles.
The authors demonstrate that the machine learning algorithm vastly outperforms the baseline prediction, which uses the average distribution centred above the sediment trap. However, they should give a more detailed explanation of the benefits of using their algorithm over running the Lagrangian simulations. To put it another way, why not simulate the particle paths using existing ocean reanalyses instead of ‘guessing’ the paths from surface observations? There are many regional model reanalyses of ocean currents available, and e.g. CMEMS offers a global reanalysis at roughly 8 km resolution, which matches the one used in this study. It is clear that there are computational benefits, but one would think that with all the effort invested in setting the sediment traps, dedicating some serious computing time shouldn’t be a problem. The answer might be obvious to the authors, but certainly not to the potential audience. The authors state in the introduction that there are uncertainties stemming from the ocean model errors. That is certainly true, but in such a case I am missing at least a rough comparison of the accuracy of their method to the uncertainties in the prediction of the catchment areas using the Lagrangian backtracking.
I am not a native speaker, so my comments on the English grammar are merely suggestions.
Comments by lines:
L. 7: “surface conditions appear to be sufficient to accurately predict the source area” – Are they? What is considered sufficient?
L. 66: Why 50 m/day?
L. 76: “After spin-up, the chaotic evolution results in uncorrelated dynamics” – How are the initial and boundary conditions perturbed? Shouldn’t the same atmospheric conditions ensure similarity of both runs? Large distance from the boundary and same atmospheric forcing would imply very similar oceanographic conditions. This is very important. If the test conditions are related to the training conditions, the performance of the ML algorithm is overrated. This could also (at least partially) explain the results which are better “under weak/or stable dynamics”. Such conditions are likely also the situations when POLGYR1 and POLGYR2 currents would be most similar. This should be checked and thoroughly explained in the text.
L. 90: Why a 10 km x 10 km patch? This paragraph is a bit confusing. 36 particles are released from 10 km x 10 km patches, but there are 36 STs in the model and they are 36 km apart. Is this right? So 36x36=1296 particles are released every 12 hours? Why not release them from 36 points instead of patches? To compensate for the lack of dispersion? The paragraph should be rewritten and made clearer.
L. 104: I think the delta sign is not a good choice for the number of points as it implies the distance between points.
L. 108: The authors should explain better the 8 km resolution. I think that the simulations run at 2 km resolution, but the end results are downscaled and so are the surface fields used for the machine learning. This would also match roughly with the resolution of the satellite altimetry. Am I right? If so, this should be clearer from the text.
Figure 3 caption: remove “the” from “the origins”. “from the left panel” instead of “of the left panel.”
L. 142 (something wrong with the line numbering here): “the training criterion”. The Bhattacharyya coefficient is used in the equation as BC and should be marked as such.
The sum (epsilon) in the equation is too small and is missing the summation index.
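For reference, the Bhattacharyya coefficient the reviewer discusses is conventionally written with an explicit summation index, BC(p, q) = Σ_i √(p_i · q_i). A minimal sketch of that definition (illustrative only; the function name is mine, and the mapping from BC to the paper's BL loss is not reproduced here):

```python
import math

def bhattacharyya_coefficient(p, q):
    """Overlap between two discrete probability distributions.

    BC(p, q) = sum_i sqrt(p_i * q_i): 1 for identical distributions,
    0 for distributions with disjoint support.
    """
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

# Identical distributions overlap perfectly:
p = [0.25, 0.25, 0.25, 0.25]
assert abs(bhattacharyya_coefficient(p, p) - 1.0) < 1e-12

# Disjoint distributions have zero overlap:
assert bhattacharyya_coefficient([0.0, 0.0, 0.5, 0.5],
                                 [0.5, 0.5, 0.0, 0.0]) == 0.0
```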
L. 145: “Empirically, this method improves the performance of the trained model compared to experiments where the training loss is based only on BL200m.” - This is very interesting. On one hand, one would expect only the end location to matter, on the other hand, the path is important. It would be interesting to know more.
L. 160: The values of BL seem a bit arbitrary. A visual analysis is kind of a weak argument. What does it mean in the practical sense? Is it possible to relate how much a certain value of BL would affect the content of the sediment trap?
L. 184: “analysis” instead of “analyse”.
L. 189: “16 in)” – the “in” is kind of out of place here.
L. 219: Why the 20th day? Shouldn’t these be averaged over the average travel time?
L. 256: Maybe “collected” instead of “measured”.
L. 275-285: The biogeochemical model results and satellite chlorophyll concentration measurements show the spatial variability of phytoplankton biomass. Maybe this could serve as a measure of needed accuracy of the method? On the other hand, this paragraph focuses on primary production only and neglects other sources of particles such as zooplankton. The latter could be obtained from the biogeochemical model as well.
L. 298: “analysis” instead of “analyse”.
Citation: https://doi.org/10.5194/egusphere-2023-2777-RC1 -
AC3: 'Reply on RC1', Théo Picard, 31 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-2777/egusphere-2023-2777-AC3-supplement.pdf
AC1: 'Comment on egusphere-2023-2777', Théo Picard, 31 Jan 2024
Dear reviewer,
Thank you very much for your clear and pertinent remarks. We would like to address here, in more detail, two important points you highlighted. Concerning the other points you mentioned, we would be glad to provide answers as soon as the second review is available.
R : To put it in another way, why not simulate the particle paths using existing ocean reanalyses instead of ‘guessing’ the paths from surface observations? There are many regional model reanalyses of ocean currents available and e.g. CMEMS offers global reanalysis in roughly 8 km resolution which matches the one used in this study. It is clear that there are computational benefits, but one would think that with all the efforts invested in setting the sediment traps, dedicating some serious computing time shouldn’t be a problem. The answer might be obvious to the authors, but certainly not to the potential audience. The authors state in the introduction that there are uncertainties stemming from the ocean model errors. That is certainly true, but in such case I am missing at least a rough comparison of the accuracy of their method to the uncertainties in the prediction of the catchment areas using the Lagrangian backtracking.
A : The use of backtracking in reanalysis is indeed an attractive approach that can be used to predict the surface source of particles. However, ocean reanalyses are likely to have significant uncertainties in the mesoscale-to-submesoscale range, which seems to be key for the backtracking of catchment areas. Despite the 8 km grid resolution of ocean reanalyses, the smallest horizontal scales resolved by observation-driven and model-based products for sea surface dynamics are much coarser. They are typically limited by the effective resolution of satellite altimetry (Fevre et al. 2023, table 2), which is about 100 km in our region (Ballarotta et al. 2019). In addition, there are significant uncertainties associated with the dynamics computed in the subsurface, in particular the vertical structure of mesoscale-to-submesoscale features, which play a critical role in particle trajectories. Thus, backtracking with reanalysis datasets is associated with significant uncertainties, which are difficult to assess due to the lack of synoptic-scale data.
Using machine learning to predict the location of particle sources simplifies the setup and saves computational resources: once trained, the model can be applied directly to any period of available surface data, including the most recent one, without any backtracking simulation. By estimating the catchment area directly from satellite data, we also avoid the potential biases associated with reanalyses. Moreover, in contrast to reanalysis, we show in this study that our estimate can be associated with a confidence index that can depend on either local dynamical conditions (Fig. 8) or bootstrap model variability (Fig. 6h). Finally, to the best of our knowledge, no study has investigated how ocean surface conditions constrain particle trajectories. We believe that the comparison we made between the 4-layer and the surface-only U-Net provides significant new information and, in particular, shows that in most cases surface information is sufficient to obtain a good estimate of the particle sources. Although hoped for, this result was not anticipated and was by no means obvious. Our approach is certainly one of the most relevant ways to reach this conclusion.
We will clarify these aspects in a revised version of the introduction and conclusion section.
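As an aside, the "bootstrap model variability" confidence index mentioned in the answer above can be illustrated with a generic sketch: resample, re-estimate, and use the spread of the estimates as a confidence measure. Everything below (function name, toy score values) is a hypothetical illustration, not taken from the study:

```python
import random
import statistics

def bootstrap_spread(data, estimator, n_boot=500, seed=0):
    """Illustrative bootstrap: resample the data with replacement,
    re-apply the estimator, and report the mean and spread of the
    estimates (larger spread = lower confidence)."""
    rng = random.Random(seed)
    estimates = [
        estimator([rng.choice(data) for _ in data])
        for _ in range(n_boot)
    ]
    return statistics.mean(estimates), statistics.stdev(estimates)

# Hypothetical per-sample prediction scores:
scores = [0.82, 0.79, 0.85, 0.90, 0.77, 0.88, 0.81, 0.84]
mean, spread = bootstrap_spread(scores, statistics.mean)
```

In the paper's setting, the resampled quantity would be the ensemble of trained models rather than a list of scalars, but the principle (spread of repeated estimates as a confidence index) is the same.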
R : L. 76: “After spin-up, the chaotic evolution results in uncorrelated dynamics” – How are the initial and boundary conditions perturbed? Shouldn’t the same atmospheric conditions ensure similarity of both runs? Large distance from the boundary and same atmospheric forcing would imply very similar oceanographic conditions. This is very important. If the test conditions are related to the training conditions, the performance of the ML algorithm is overrated. This could also (at least partially) explain the results which are better “under weak/or stable dynamics”. Such conditions are likely also the situations when POLGYR1 and POLGYR2 currents would be most similar. This should be checked and thoroughly explained in the text.
A : The two runs use different initial and boundary conditions derived from parent North Atlantic runs with slightly different options. The most significant of these is a surface salinity restoring, which was added in the second version of the runs but not in the first, resulting in a slightly biased large-scale stratification in the region of interest. The freshwater forcing in the nests is also slightly different because of this restoring. However, since we still use the same atmospheric data for the forcings, POLGYR 1 and POLGYR 2 have similar evolution and statistics at large scales in terms of energy and variability. But the mesoscale and submesoscale structures, which are the main cause of particle displacement in our study, are not similar for the same given date due to their chaotic evolution (e.g. Zanna et al., 2018). An example can be seen in this snapshot taken on July 24th 2023 (figure below). The eddies and frontal structures don't match, which leads to very different particle source areas. Therefore, we ensure that the score obtained with the test dataset is not biased by a dependence between the training and test databases. This can be confirmed by the score distribution obtained with the validation dataset, made with POLGYR 1 in 2009 (which is not in the training dataset). The distribution is similar to the score distribution of the test dataset (figure below), which reinforces our confidence in our training methodology.
We will clarify these aspects in a revised version, where we will emphasize the meso and submesoscale differences in the two simulations, and with a new annex containing the plots presented below.
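The similarity of the validation and test score distributions invoked above could also be quantified with a standard two-sample statistic rather than a visual comparison. A self-contained sketch of the Kolmogorov-Smirnov distance (a hedged illustration; the function name and sample values are mine, and this is not necessarily the comparison the authors performed):

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    a_sorted, b_sorted = sorted(a), sorted(b)
    n, m = len(a_sorted), len(b_sorted)
    d = 0.0
    for x in a_sorted + b_sorted:
        fa = bisect.bisect_right(a_sorted, x) / n  # CDF of a at x
        fb = bisect.bisect_right(b_sorted, x) / m  # CDF of b at x
        d = max(d, abs(fa - fb))
    return d

# Identical samples: the CDFs coincide, so D = 0.
assert ks_statistic([0.7, 0.8, 0.9], [0.7, 0.8, 0.9]) == 0.0
# Fully separated samples: the CDFs differ by 1 at some point.
assert ks_statistic([0.1, 0.2], [0.8, 0.9]) == 1.0
```

A small D between the validation-set and test-set score samples would support the claim that the two distributions are similar.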
Citation: https://doi.org/10.5194/egusphere-2023-2777-AC1 -
RC2: 'Comment on egusphere-2023-2777', Gael Forget, 04 Apr 2024
general comments
This paper presents a novel method for estimating the near surface origin of sinking particles that reach a sediment trap at 1000m. A neural network based on the U-Net architecture is trained to predict backward trajectories of sinking particles from environmental variables. The authors present the method and demonstrate its skill in a twin experiment setting. They also provide compelling analysis of the behavior of their estimation method depending on dynamical situation.
I feel this is a strong paper and recommend it to be accepted for publication after minor revision. The paper is generally very well written and clear. The method being introduced opens up interesting perspectives to help better interpret sediment trap observations, but the authors also acknowledge difficulties that lie ahead if such a method were to be applied to real data. One of the strengths of this paper in my view is the analysis of results in relation to physical processes. A weakness is that we don't know how/if the method's skill could be quantified in the real world.
specific comments
- L66 : please provide a reference to justify the choice of "50m/day" as realistic on average in the real world, assuming that this indeed is the case. Is 50m/day representative of a large fraction of sinking biomass that sediment traps capture? Wang et al 2022, cited later, appears to be a modeling study by the same authors. Please state in the paper whether the choice of 50m/day is backed by real world observations, or otherwise.
- L107 : "based on the statistical results of Wang et al 2022" is vague. What statistical results?
- L138 : please explain and/or provide a reference for "Combining these steps with skip connections enables the detection of hydrodynamic structures at different spatial scales".
- Fig 4 : more explanation about "concatenation" and "channels" would also be useful in section 5.2 for the non specialist reader.
- L178 : can it be excluded that Unet4layers simply benefits from a larger number of data constraints (Nx) than UnetSurf, rather than from their subsurface location? I wonder if providing Nx times as many surface data to train UnetSurf would make it match the performance of Unet4layers.
- L220-223 : state how much of the training sets was "chaotic situations" versus nonchaotic ones. I wonder : if you trained a NN separately on just "chaotic situations", would you get improved performance? maybe match the performance obtained for nonchaotic ones.
- L294 : it seems to me that a confidence index would need to correctly account for several sources of model error (uncertain sinking velocity distributions, possible biases in POLGYR statistics, transport rates, etc) to avoid being misleading. Please discuss this in a bit more detail.
- in section 5 please describe the kind of real world observational experiment and data sets that would be needed to demonstrate / quantify the method's skill outside of a twin experiment configuration. I feel that many would be concerned with using real world results of such a method if its skill can only be assessed within a model world.
technical corrections
L57 : "used to design"
L94 : space missing in "1000 m.Biological"
L96 : spell out what 3D+t means. also, consider saying 4D instead of 3D+t or 3D+T
L100 : "To avoid common particle between two experiments" is unclear and seems grammatically incorrect
L101 : rephrase as e.g. "to set up the patch centers 36km apart"
L104 : nx,ny would seem a better notation than deltax, deltay; no?
L109 : "for storage constrain" is vague and seems grammatically incorrect.
Fig 3 caption : "superimposed" (one "s" only)
Citation: https://doi.org/10.5194/egusphere-2023-2777-RC2 -
AC2: 'Reply on RC2', Théo Picard, 31 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-2777/egusphere-2023-2777-AC2-supplement.pdf
-
AC1: 'Comment on egusphere-2023-2777', Théo Picard, 31 Jan 2024
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-2777', Anonymous Referee #1, 05 Jan 2024
The study is heavily based on the Lagrangian simulations of particle sinking based on circulation model results and described in Wang et al., 2022 (I think this should be stated more explicitly and include a brief summary of results. Best in Section 2). The authors use the results of these simulations to test the possibilities of using machine learning to locate the origin of particles caught in deep sea sediment traps. The aim is to use observed sea surface conditions (mainly satellite altimetry and SST) to deduce the source of particles.
The authors demonstrate that the machine learning algorithm vastly outperforms the baseline prediction, which uses the average distribution centred above the sediment trap. However, they should give a more detailed explanation of what are the benefits of using their algorithm over running the Lagrangian simulations. To put it in a anther way, why not simulate the particle paths using existing ocean reanalyses instead of ‘guessing’ the paths from surface observations? There are many regional model reanalyses of ocean currents available and e.g. CMEMS offers global reanalysis in roughly 8 km resolution which matches the one used in this study. It is clear that there are computational benefits, but one would think that with all the efforts invested in setting the sediment traps, dedicating some serious computing time shouldn’t be a problem. The answer might be obvious to the authors, but certainly not to the potential audience. The authors state in the introduction that there are uncertainties stemming from the ocean model errors. That is certainly true, but in such case I am missing at least a rough comparison of the accuracy of their method to the uncertainties in the prediction of the catchment areas using the Lagrangian backtracking.
I am not a native speaker, so my comments on the English grammar are merely suggestions.
Comments by lines:
L. 7: “surface conditions appear to be sufficient to accurately predict the source area” – Are they? What is considered sufficient?
L. 66: Why 50 m/day?
L. 76: “After spin-up, the chaotic evolution results in uncorrelated dynamics” – How are the initial and boundary conditions perturbed? Shouldn’t the same atmospheric conditions ensure similarity of both runs? Large distance from the boundary and same atmospheric forcing would imply very similar oceanographic conditions. This is very important. If the test conditions are related to the training conditions, the performance of the ML algorithm is overrated. This could also (at least partially) explain the results which are better “under weak/or stable dynamics”. Such conditions are likely also the situations when POLGYR1 and POLGYR2 currents would be most similar. This should be checked and thoroughly explained in the text.
L. 90: Why 10 km x 10 km patch? This paragraph is a bit confusing. 36 particles are released from 10 km x 10 km patches, but there are 36 STs in the model and they are 36 km apart. Is this right? So 36x36=1296 particles are released every 12 hours? Why not release them from 36 points instead of patches? To compensate for the lack of dispersion? The paragraph should be rewritten and clearer.L. 104: I think the delta sign is not a good choice for the number of points as it implies the distance between points.
L. 108: The authors should explain better the 8 km resolution. I think that the simulations run at 2 km resolution, but the end results are downscaled and so are the surface fields used for the machine learning. This would also match roughly with the resolution of the satellite altimetry. Am I right? If so, this should be clearer from the text.
Figure 3 caption: remove “the” from “the origins”. “from the left panel” instead of “of the left panel.”
L. 142 (something wrong with the line numbering here): “the training criterion”. Batthacharyya coefficient is used in the equation as BC and should be marked as such.
The sum (epsilon) in the equation is too small and is missing the summation index.
L. 145: “Empirically, this method improves the performance of the trained model compared to experiments where the training loss is based only on BL200m.” - This is very interesting. On one hand, one would expect only the end location to matter, on the other hand, the path is important. It would be interesting to know more.
L. 160: The values of BL seem a bit arbitrary. A visual analysis is kind of a weak argument. What does it mean in the practical sense? Is it possible to relate how much would a certain value of BL affect the content of the sediment trap? L. 184: “analysis” instead of “analyse”.
L. 189: “16 in)” – the “in” is kind of out of place here.
L. 219: Why the 20th day? Shouldn’t these be averaged over the average travel time?
L. 256: Maybe “collected” instead of “measured”.
L. 275-285: The biogeochemical model results and satellite chlorophyll concentration measurements show the spatial variability of phytoplankton biomass. Maybe this could serve as a measure of needed accuracy of the method? On the other hand, this paragraph focuses on primary production only and neglects other sources of particles such as zooplankton. The latter could be obtained from the biogeochemical model as well.
L. 298: “analysis” instead of “analyse”.
Citation: https://doi.org/10.5194/egusphere-2023-2777-RC1 -
AC3: 'Reply on RC1', Théo Picard, 31 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-2777/egusphere-2023-2777-AC3-supplement.pdf
-
AC3: 'Reply on RC1', Théo Picard, 31 May 2024
-
AC1: 'Comment on egusphere-2023-2777', Théo Picard, 31 Jan 2024
Dear reviewer,
Thank you very much for your clear and pertinent remarks. We would like here to give more precisions for two important points you highlighted. Concerning the other points you mentioned, we would be glad to provide answers as soon as the second review is available.
R : To put it in another way, why not simulate the particle paths using existing ocean reanalyses instead of ‘guessing’ the paths from surface observations? There are many regional model reanalyses of ocean currents available and e.g. CMEMS offers global reanalysis in roughly 8 km resolution which matches the one used in this study. It is clear that there are computational benefits, but one would think that with all the efforts invested in setting the sediment traps, dedicating some serious computing time shouldn’t be a problem. The answer might be obvious to the authors, but certainly not to the potential audience. The authors state in the introduction that there are uncertainties stemming from the ocean model errors. That is certainly true, but in such case I am missing at least a rough comparison of the accuracy of their method to the uncertainties in the prediction of the catchment areas using the Lagrangian backtracking.
A : The use of backtracking in reanalysis is indeed an attractive approach that can be used to predict the surface source of particles. However, ocean reanalyses are likely to have significant uncertainties in the mesoscale-to-submesoscale range, which seems to be key for the backtracking of catchment areas. Despite the 8 km grid resolution of ocean reanalyses, the smallest horizontal scales resolved by observation-driven and model-based products for sea surface dynamics are much coarser. They are typically below satellite altimetry scale (Fevre et al. 2023, table 2), which is about 100 km in our region (Ballarota et al. 2019). In addition, there are significant uncertainties associated with the dynamics computed in the subsurface, in particular the vertical structure of mesoscale-to-submesoscale structures, which play a critical role in particle trajectories. Thus, backtracking with reanalysis datasets is associated with significant uncertainties, which are difficult to assess due to the lack of synoptic-scale data.
Using machine learning to predict the location of particle sources simplifies setup and saves computational resources: once trained, the model can be applied directly to any period of available surface data, including the most recent one, without any backtracking simulation. By estimating the catchment area directly from satellite data, we also avoid the potential bias associated with reanalyses. Moreover, in contrast to reanalysis, we show in this study that our estimate can be associated with a confidence index that can depend on either local dynamical conditions (fig 8) or bootstrap model variability (fig 6. h.). Finally, to the best of our knowledge, no study has investigated how ocean surface conditions constrain particle trajectories. We believe that the comparison we made between the 4 layers and the surface Unet provides new significant information and, in particular, shows that in most cases surface information is sufficient to have a good estimate of the particle sources. Although hoped for, this result was not anticipated and was by no means obvious. Our approach is certainly one of the most relevant in reaching this conclusion.
We will clarify these aspects in a revised version of the introduction and conclusion section.
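As a minimal illustration of how a bootstrap-based confidence index of the kind mentioned above could be computed, the following sketch aggregates predictions from an ensemble of bootstrap-trained models; the array shapes and the function name are hypothetical and do not come from the actual SPARO pipeline:

```python
import numpy as np

def bootstrap_confidence(prob_maps):
    """Given predicted source-probability maps from an ensemble of
    bootstrap-trained models (shape: n_models x ny x nx), return the
    ensemble-mean map and a scalar confidence index derived from the
    spread between ensemble members (low spread -> high confidence)."""
    prob_maps = np.asarray(prob_maps, dtype=float)
    mean_map = prob_maps.mean(axis=0)      # consensus catchment-area estimate
    spread = prob_maps.std(axis=0).mean()  # average per-pixel disagreement
    confidence = 1.0 / (1.0 + spread)      # maps spread in [0, inf) to (0, 1]
    return mean_map, confidence

# Identical ensemble members -> zero spread -> confidence of exactly 1
maps = np.tile(np.random.default_rng(0).random((8, 8)), (5, 1, 1))
_, conf = bootstrap_confidence(maps)
```

Any monotone mapping from ensemble spread to (0, 1] would serve the same purpose; the point is only that member disagreement is what lowers the index.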
R : L. 76: “After spin-up, the chaotic evolution results in uncorrelated dynamics” – How are the initial and boundary conditions perturbed? Shouldn’t the same atmospheric conditions ensure similarity of both runs? Large distance from the boundary and same atmospheric forcing would imply very similar oceanographic conditions. This is very important. If the test conditions are related to the training conditions, the performance of the ML algorithm is overrated. This could also (at least partially) explain the results which are better “under weak/or stable dynamics”. Such conditions are likely also the situations when POLGYR1 and POLGYR2 currents would be most similar. This should be checked and thoroughly explained in the text.
A : The two runs use different initial and boundary conditions derived from parent North Atlantic runs with slightly different options. The most significant of these is a surface salinity restoring, which was added in the second version of the runs but not in the first, resulting in a slightly biased large-scale stratification in the region of interest. The freshwater forcing in the nests is also slightly different because of this restoring. However, since we still use the same atmospheric data for the forcings, POLGYR 1 and POLGYR 2 have similar evolution and statistics at large scales in terms of energy and variability. But the mesoscale and submesoscale structures, which are the main cause of particle displacement in our study, are not similar on any given date because of their chaotic evolution (e.g., Zanna et al., 2018). An example can be seen in the snapshot taken on July 24th, 2023 (figure below): the eddies and frontal structures do not match, which leads to very different particle source areas. Therefore, we ensure that the score obtained with the test dataset is not biased by a dependence between the training and test databases. This is confirmed by the score distribution obtained with the validation dataset, made with POLGYR 1 in 2009 (a year not in the training dataset): it is similar to the score distribution of the test dataset (figure below), which reinforces our confidence in our training methodology.
We will clarify these aspects in a revised version, where we will emphasize the mesoscale and submesoscale differences between the two simulations, and add a new appendix containing the plots presented below.
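The similarity check between the test and validation score distributions described above can be sketched, for example, with a two-sample Kolmogorov-Smirnov statistic; the beta-distributed samples below are synthetic stand-ins for the actual score samples, used only to illustrate the comparison:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of the two score samples."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    # Empirical CDFs of each sample, evaluated at every sample point
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(42)
test_scores = rng.beta(5, 2, size=1000)  # stand-in for test-set scores
val_scores = rng.beta(5, 2, size=1000)   # stand-in for validation-set scores
d_same = ks_statistic(test_scores, val_scores)          # small: same distribution
d_diff = ks_statistic(test_scores, rng.beta(2, 5, 1000))  # large: shifted distribution
```

A small statistic for the test/validation pair, relative to a clearly shifted reference, supports the claim that the two score distributions are similar.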
Citation: https://doi.org/10.5194/egusphere-2023-2777-AC1
RC2: 'Comment on egusphere-2023-2777', Gael Forget, 04 Apr 2024
general comments
This paper presents a novel method for estimating the near surface origin of sinking particles that reach a sediment trap at 1000m. A neural network based on the U-Net architecture is trained to predict backward trajectories of sinking particles from environmental variables. The authors present the method and demonstrate its skill in a twin experiment setting. They also provide compelling analysis of the behavior of their estimation method depending on dynamical situation.
I feel this is a strong paper and recommend that it be accepted for publication after minor revision. The paper is generally very well written and clear. The method being introduced opens up interesting perspectives to help better interpret sediment trap observations, but the authors also acknowledge difficulties that lie ahead if such a method were to be applied to real data. One of the strengths of this paper in my view is the analysis of results in relation to physical processes. A weakness is that we don't know how/if the method's skill could be quantified in the real world.
specific comments
- L66 : please provide a reference to justify the choice of "50m/day" as realistic on average in the real world, assuming that this indeed is the case. Is 50m/day representative of a large fraction of sinking biomass that sediment traps capture? Wang et al 2022, cited later, appears to be a modeling study by the same authors. Please state in the paper whether the choice of 50m/day is backed by real world observations, or otherwise.
- L107 : "based on the statistical results of Wang et al 2022" is vague. What statistical results?
- L138 : please explain and/or provide a reference for "Combining these steps with skip connections enables the detection of hydrodynamic structures at different spatial scales".
- Fig 4 : more explanation about "concatenation" and "channels" would also be useful in section 5.2 for the non specialist reader.
- L178 : can it be excluded that Unet4layers simply benefits from a larger number of data constraints (Nx) than UnetSurf, rather than from their subsurface location? I wonder if providing Nx times as many surface data to train UnetSurf would make it match the performance of Unet4layers.
- L220-223 : state how much of the training set was "chaotic situations" versus nonchaotic ones. I wonder: if you trained a NN separately on just "chaotic situations", would you get improved performance? Maybe even match the performance obtained for nonchaotic ones.
- L294 : it seems to me that a confidence index would need to correctly account for several sources of model error (uncertain sinking velocity distributions, possible biases in POLGYR statistics, transport rates, etc) to avoid being misleading. Please discuss this in a bit more detail.
- in section 5 please describe the kind of real world observational experiment and data sets that would be needed to demonstrate / quantify the method's skill outside of a twin experiment configuration. I feel that many would be concerned with using real world results of such a method if its skill can only be assessed within a model world.
technical corrections
L57 : "used to design"
L94 : space missing in "1000 m.Biological"
L96 : spell out what 3D+t means. also, consider saying 4D instead of 3D+t or 3D+T
L100 : "To avoid common particle between two experiments" is unclear and seems grammatically incorrect
L101 : rephrase as e.g. "to set up the patch centers 36km apart"
L104 : nx,ny would seem a better notation than deltax, deltay; no?
L109 : "for storage constrain" is vague and seems grammatically incorrect.
Fig 3 caption : "superimposed" (one "s" only)
Citation: https://doi.org/10.5194/egusphere-2023-2777-RC2
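For the non-specialist reader, the channel concatenation performed by a U-Net skip connection (raised in the comments on L138 and Fig. 4 above) can be sketched as follows; the feature-map shapes are hypothetical, chosen only for illustration, and are not taken from the paper's actual architecture:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (channels, ny, nx) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# Hypothetical shapes: fine-scale encoder features saved before pooling,
# and coarse-scale decoder features coming up from the bottleneck path.
encoder_feat = np.zeros((16, 64, 64))
decoder_feat = np.zeros((32, 32, 32))

# The skip connection upsamples the coarse map back to the encoder's
# resolution, then concatenates along the channel axis, so the next
# layer sees both fine- and coarse-scale information at once.
merged = np.concatenate([encoder_feat, upsample2x(decoder_feat)], axis=0)
```

Here the channel count simply adds up (16 + 32 = 48 channels at 64x64 resolution), which is what "concatenation" means in the Fig. 4 diagram; combining such merges across several depths is what lets the network detect structures at different spatial scales.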
AC2: 'Reply on RC2', Théo Picard, 31 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-2777/egusphere-2023-2777-AC2-supplement.pdf
Peer review completion
Data sets
Data for learning-based prediction of the particles catchment area of deep ocean sediment traps Picard Théo, Gula Jonathan, Fablet Ronan, Memery Laurent, Collin Jéremy https://doi.org/10.17882/97556
Model code and software
SPARO Picard Théo, Gula Jonathan, Fablet Ronan, Memery Laurent, Collin Jéremy https://github.com/TheoPcrd/SPARO