This preprint is distributed under the Creative Commons Attribution 4.0 License.
Sensitivity of source sediment fingerprinting modelling to tracer selection methods
Abstract. In a context of accelerated soil erosion and sediment supply to water bodies, sediment fingerprinting techniques have received increasing interest over the last two decades. The selection of tracers is a particularly critical step for the subsequent accurate prediction of sediment source contributions. The most conventional approach for selecting tracers is the so-called three-step method, although the consensus method has more recently been proposed as an alternative. The outputs of these two approaches were compared on a single dataset in terms of identification of conservative properties, tracer selection, contribution modelling tendencies and performance. For the three-step method, several range test criteria were compared, along with the impact of the discriminant function analysis (DFA).
The dataset was composed of tracing properties analysed in soil samples (considering three potential sources; n = 56) and sediment core samples (n = 32). Soil and sediment samples were sieved to 63 µm and analysed for organic matter, elemental geochemistry and diffuse visible spectrometry. Virtual mixtures (n = 138) with known source proportions were generated to assess the modelling accuracy associated with each tracer selection method. The Bayesian unmixing model MixSIAR was used to predict source contributions for both virtual mixtures and actual sediments.
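To make the virtual-mixture step concrete, the following minimal Python sketch (not the authors' code; the source values, the Dirichlet draw of proportions and the simple linear, fully conservative mixing are all illustrative assumptions) shows how mixtures with known proportions can be built and how an accuracy metric can later be computed against them:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder source data: rows = soil samples, columns = tracer properties.
sources = {
    "cropland": rng.normal([12.0, 3.5, 0.8], [2.0, 0.5, 0.1], size=(20, 3)),
    "forest":   rng.normal([18.0, 2.0, 1.2], [3.0, 0.4, 0.2], size=(20, 3)),
    "subsoil":  rng.normal([9.0, 4.5, 0.5],  [1.5, 0.6, 0.1], size=(16, 3)),
}

def make_virtual_mixtures(sources, n_mixtures=138, rng=rng):
    """Build mixtures as proportion-weighted means of the source samples,
    returning the known ('theoretical') proportions and the mixture values."""
    names = list(sources)
    theoretical = rng.dirichlet(np.ones(len(names)), size=n_mixtures)
    source_means = np.vstack([sources[s].mean(axis=0) for s in names])
    mixtures = theoretical @ source_means   # linear, fully conservative mixing
    return names, theoretical, mixtures

names, theoretical, mixtures = make_virtual_mixtures(sources)

# Once an un-mixing model has produced 'predicted' proportions for the same
# mixtures, accuracy can be summarised, e.g. with the mean absolute error.
predicted = np.clip(theoretical + rng.normal(0, 0.05, theoretical.shape), 0, None)
predicted /= predicted.sum(axis=1, keepdims=True)      # stand-in model output
mae = np.abs(predicted - theoretical).mean(axis=0)
print(dict(zip(names, mae.round(3))))
```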
The methods tested in this study can be grouped into three categories according to how restrictively they identify conservative properties, and these categories were associated with different sediment source contribution tendencies. The least restrictive tracer selections were associated with a dominant and constant contribution of forests to sediment, whereas the most restrictive selections were associated with a dominant and constant contribution of cropland to sediment. In contrast, intermediately restrictive tracer selections led to more balanced contributions of both cropland and forest to sediment production. Virtual mixtures allowed several evaluation metrics to be computed, supporting a better understanding of the modelling accuracy associated with each tracer selection. However, strong divergences were observed between the predicted contributions for the virtual mixtures and the predicted sediment source contributions. These divergences are likely attributable to the non-conservative, or only partially conservative, behaviour of potential tracing properties during erosion, transport and deposition processes, which could not be reproduced when generating the virtual mixtures.
Among the compared approaches, the three-step method using the mean ± SD and hinge range test criteria provided the most reliable tracer selections. In the future, it will be essential to develop more reliable metrics for assessing tracer conservativeness, to support more robust modelling, and to generate more realistic virtual mixtures so that modelling accuracy can be evaluated correctly. These improvements may contribute to more trustworthy sediment fingerprinting techniques in support of efficient soil conservation and watershed management.
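As an illustration of the range-test step of the three-step method, here is a minimal Python sketch comparing the min-max, mean ± SD and hinge criteria. The pooled envelope across sources and the reading of the hinge criterion as the lower and upper quartiles are assumptions made for this example, not a description of the authors' implementation:

```python
import numpy as np

def range_test(source_values, sediment_values, criterion="mean_sd"):
    """Flag a property as conservative when every sediment value lies within
    the envelope spanned by the sources under the chosen criterion."""
    lows, highs = [], []
    for values in source_values:                 # one array of measurements per source
        values = np.asarray(values, dtype=float)
        if criterion == "min_max":               # least restrictive
            lo, hi = values.min(), values.max()
        elif criterion == "mean_sd":             # mean +/- one standard deviation
            lo, hi = values.mean() - values.std(ddof=1), values.mean() + values.std(ddof=1)
        elif criterion == "hinge":               # assumed here: lower/upper quartiles
            lo, hi = np.percentile(values, [25, 75])
        else:
            raise ValueError(f"unknown criterion: {criterion}")
        lows.append(lo)
        highs.append(hi)
    envelope = (min(lows), max(highs))           # widest bound across all sources
    sediment_values = np.asarray(sediment_values, dtype=float)
    conservative = bool(((sediment_values >= envelope[0])
                         & (sediment_values <= envelope[1])).all())
    return conservative, envelope

# Example call with placeholder data for one property and three sources:
rng = np.random.default_rng(1)
srcs = [rng.normal(m, 1.0, 20) for m in (10.0, 12.0, 9.0)]
sediments = rng.normal(10.5, 0.8, 32)
print(range_test(srcs, sediments, criterion="hinge"))
```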
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-1970', Anonymous Referee #1, 07 Oct 2023
General Comments:
Identifying the sources of problematic sediment in watersheds is a rapidly growing area of research, given its potential to address issues associated with excessive soil erosion and the delivery of sediment (and associated contaminants) to rivers, lakes, etc. This research paper aims to make the sediment fingerprinting method more suitable for use by watershed managers and researchers alike. More specifically, it focuses on the differences in outcomes based on tracer selection, emphasizing conservatism and discrimination of tracers. It explains and then uses various range test methods, followed by the Kruskal-Wallis H-test, and explores the impacts of using a discriminant function analysis (DFA). Additionally, it uses a newer method, the consensus method (CM), to compare model outputs with those derived from the conventional three-step method (TSM). Finally, it uses virtual mixtures to compare model output, using data collected from a lake in Japan and the various sources contributing to the lake. The researchers found that the relatively new CM can be too restrictive, as can certain range tests, and that testing the output of models that did and did not use the DFA is advised. Overall, this is a very useful paper for the sediment fingerprinting community, but it would benefit from certain clarifications and changes as suggested below.
Specific comments:
One area that needs clarification is the mixed use of the terms theoretical, predicted, and observed, particularly when referring to Figures 5-7. This is also unclear when comparing results from the virtual mixtures and results from the sediment core, as the term 'predicted results' is used interchangeably. Finally, more clarification is needed when explaining the contributions from the sediment core modeling: you note that the contributions were outside of the range of predicted contributions on the virtual mixtures. I am unsure exactly what this means. This is primarily in section 3.2 (starting at line 377).
There is a lot of repetition, especially Section 1 (Introduction) and Section 2.5 (Tracer selection). The authors need to address this.
Line 73: if specifics around the sources that contribute to the formation of target sediments are going to be mentioned in the parenthetical, I would suggest adding more common sources, like banks or roads. Is suspended matter a source when often suspended sediments are the target sediment? This seems a little confusing.
Line 74: Is sediment transport the physical mechanism or is it a product of river/stream discharge and other erosive behaviour? My thinking is the latter.
Line 79: <63 µm is the most commonly used size fraction, but many studies use a wide range of sizes for different reasons (e.g., targeting rivers with high concentrations of fine sand, etc.). It would be useful to add why <63 µm is not always used.
Line 86 (and elsewhere): when giving details about each of the three steps in the TSM, it would be useful to continuously refer back to which step equates with each test. For example, in line 86, start with: “The first step in the TSM, which assesses property conservativeness,…” Referring back to the paragraph that outlines each step (lines 66-69) should be done throughout section 1.
Line 94: the authors do a very good job of explaining the different statistical analyses for the source material, but do not mention the source samples and whether they are just looking at the mean sediment samples, the min/max, etc. I think most readers will understand that all sediment samples should fall within the range of source values, but it is worth explicitly noting.
Line 98: I think more detail is needed as to what the results of the KW test look like and/or specifically do. Does a significant value for a single tracer denote that it is discriminatory across all sources? It’s also important to note the use, in some studies, of further post-hoc testing (such as Dunn's test) to determine the discrimination potential of each individual source (e.g., forests vs. agriculture vs. roads).
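For context, a significant Kruskal-Wallis result for a property only indicates that at least one source distribution differs from the others; pairwise post-hoc tests are needed to see which sources are actually discriminated. A minimal Python sketch, using Bonferroni-corrected pairwise Mann-Whitney U tests as a stand-in for a dedicated post-hoc such as Dunn's test (illustrative only, not the manuscript's procedure), could look like this:

```python
import numpy as np
from scipy import stats

def discrimination_tests(groups, alpha=0.05):
    """Kruskal-Wallis H-test across all sources for one property, followed by
    Bonferroni-corrected pairwise Mann-Whitney U tests."""
    _, p_global = stats.kruskal(*groups)
    n_pairs = len(groups) * (len(groups) - 1) // 2
    pairwise = {}
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            _, p = stats.mannwhitneyu(groups[i], groups[j], alternative="two-sided")
            pairwise[(i, j)] = min(p * n_pairs, 1.0)   # Bonferroni correction
    return p_global < alpha, pairwise

# Example with placeholder data for three sources:
rng = np.random.default_rng(0)
groups = [rng.normal(m, 1.0, 20) for m in (10.0, 11.5, 9.0)]
discriminant, pairwise_p = discrimination_tests(groups)
print(discriminant, pairwise_p)
```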
Line 99: You should mention here, as you do later, that the Mann-Whitney U test can be used for 2 sources.
Line 100: While you do not use it, in this paragraph it is worth mentioning that other studies use PCA in place of DFA.
Line 106: It should be noted whether the consensus method uses the range test first or if it ignores the range test altogether.
Line 128: It is unusual to divide research objectives into (a) and (b). Please re-assess this.
Line 138 etc: Where relevant, please ensure that percent totals = 100%
Figure 1. Given that FDNPP is not mentioned, please remove this from the caption.
Line 145: Does any of this precipitation fall as snow? If so, it may be worth mentioning this.
Line 161: There needs to be more explanation and justification why the 0-5 cm increments (i.e., most recent sediment) were not used for this study, and that the 6-36 cm depth range represents the most stable land use period. Was the core dated? If so, then please explain and provide this information.
Line 165: were any statistical tests run to show that these samples from the Niida River catchment were not different, in the tested soil properties, from those from the Hayama Lake catchment? Also, as others may disagree in principle with using samples from outside of the watershed, it may be worth citing the work by Williamson et al. (2023), which shows that source samples from elsewhere can be used in some circumstances.
Williamson TN, Fitzpatrick FA, Kreiling RM (2023) Building a library of source samples for sediment fingerprinting – Potential and proof of concept. Journal of Environmental Management 333:117254. https://doi.org/10.1016/j.jenvman.2023.117254
Line 175: were there properties where concentrations fell below the detection limit? If so, what was done with those values? If not, ignore this comment.
Lines 177-179: This is confusing and needs clearer explanation.
Line 250: does virtual tracers mean virtual properties (i.e., elements, reflectance, etc.)?
Line 266: Why was a score above 70 chosen by Lizaga et al. (2020a), and is it a hard line? Additionally, there seems to be no mention of the issue of underdetermined models, which is avoided by using n-1 tracers (n=sources). Does this matter using the CM/FingerPro?
Line 274: The issue of normality comes up frequently in using MixSIAR, but it does not seem a consensus has been reached. From the cited paper (Laceby et al. 2021b), it seems that there was no significant difference in untransformed data. This might be something that should be included in the discussion.
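One simple way to look at this question, illustrative only and not a description of how MixSIAR actually handles its likelihood, is to compare Shapiro-Wilk normality results on raw versus log-transformed values of a tracer:

```python
import numpy as np
from scipy import stats

def shapiro_raw_vs_log(values, alpha=0.05):
    """Compare Shapiro-Wilk normality p-values for raw and log-transformed
    tracer values, as a rough screen for whether a log transformation helps."""
    values = np.asarray(values, dtype=float)
    p_raw = stats.shapiro(values).pvalue
    p_log = stats.shapiro(np.log(values)).pvalue if (values > 0).all() else float("nan")
    return {"p_raw": p_raw, "p_log": p_log,
            "raw_normal": p_raw > alpha, "log_normal": p_log > alpha}

# Example with a skewed placeholder tracer:
rng = np.random.default_rng(3)
print(shapiro_raw_vs_log(rng.lognormal(mean=1.0, sigma=0.6, size=30)))
```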
Line 330: there should be more detail in this section as to which properties had values nearer to zero (and would have been kept) and which properties were far from zero. Based on my understanding from the explanation of the CM in the methods section, there would be a range of CI values. This is of particular interest because as you point out, there are many ways to implement the range test, but only one way to calculate the CI. Also, which of the four properties had the highest CR score? The lowest?
Line 340: What does ‘moderate’ mean when referring to the effect of the use of the DFA? I might be inclined to remove that part of the sentence and just write “The effect of the DFA stepwise selection was to mainly modify the prediction…”
Line 342: I would add at the end of the sentence when the DFA was utilised.
Line 377: it gets a bit confusing as to what is the virtual mixture results vs. the sediment core results because of the use of ‘predicted source contributions for the sediment core samples’. It might be simpler to just refer to those as source contributions.
Line 379: this sentence is confusing. Does it mean that the contributions to the core samples were outside the range of the virtual mixture contributions?
Line 388: Please clarify what “the DFA stepwise selection tended to reduce the number of matching sediment sample predicted contributions” means. What is matching sediment sample predicted contributions?
Line 406: Was any dating done to determine the relative time period of sediment contributions? This would be interesting, and may explain changes in source contributions if there were any historic level flooding and/or land use changes.
Line 409: same intrinsic information as what? The logic here is difficult to follow.
Line 440: you mention the importance of grain size and in the methods note that you sieved to 63 µm, but did you also test to see if there was a difference in the D50 or SSA of the sediments vs. source samples?
Line 461: So the CI must be equal to 0 for the property to be used in modeling, but is that a choice made by the model developer or could that threshold be changed for other a priori conservative properties with a score close to 0, as you mention? For example, if more properties were needed to ensure that the model isn’t underdetermined, could a property with a score of 0.1 be included? This may be outside the scope of this paper, but a few sentences to this effect might be useful, perhaps as an extension to the statement on line 467.
Line 472: The correction factors were not necessarily considered useful, but understanding if there are significant particle size differences between source and sediment is important, particularly for certain geochemical properties, as you note in line 440.
Line 495: relevant in what regard? My assumption is that you are stating that researchers should measure relevance based on the outcomes of virtual mixtures run simultaneously with field data, but it needs clarification.
Line 566/567: this sentence needs clarification, ‘a greater or lesser number of sample predicted contributions fell outside the range …” What is meant by a greater or lesser number?
Technical corrections:
The English does require some improvement; below are some comments. For the reference list, please include all of the authors’ initials. I think some of the figures could be improved in terms of differentiating lines, etc. Some of the colours are too close to each other, and some symbols are not legible (e.g., sediment sample value and measurement uncertainty in Fig. 9).
Line 55: remove ‘might’.
Line 69: The last sentence needs some editing – “the third step of this approach consists of selecting optimal tracers…” or something similar.
Line 71: I would consider changing it to under different ‘land usages or covers’.
Line 108: if acronyms are introduced, they should be used throughout.
Line 134 and throughout: use “Hayama Lake”.
Line 138: move ‘respectively’ to after 1%
Line 206: add ‘and’ after the comma after literature.
Line 274: change to MixSIAR.
Line 295: indicates ‘that the mean’…
Line 322: Remove ‘only’. No test only identified Ti as conservative, but many of them did identify it as such.
Line 374: lowest?
Line 366: This should be Fig 7?
Figure 6: The text in the caption and the axis should match. Predicted, theoretical and observed are all used. So I am assuming it is showing the virtual model mixtures (theoretical) vs. virtual model output (predicted).
Figure 7: Same issue, either use theoretical or observed.
Line 401: remove ‘really’, perhaps change to “did not have a strong impact on the trends…”
Line 412: “For most of them”, what is them referring to?
Line 425: Remove ‘a more or less’. Not clear what it means here.
Line 439: remove the comma and ‘and’ after sheets.
Citation: https://doi.org/10.5194/egusphere-2023-1970-RC1
AC1: 'Reply on RC1', Thomas Chalaux-Clergue, 18 Oct 2023
We would like to thank the reviewer for his or her encouraging comments and suggestions, which we will carefully take into account when we revise the manuscript, should the editor give us the opportunity to do so.
Although we will address and reply to each individual comment when revising the manuscript, we would like to provide some general comments as a reply to some of the reviewers’ specific questions.
Regarding the use of “theoretical”, “observed” and “predicted” contribution terms, when generating virtual mixtures, a set of contributions is defined. These contributions can be referred to as "observed" or, as in our manuscript, "theoretical". We prefer to use the term "theoretical" as the contributions are defined by the user, whereas – in our opinion – the use of the term "observed" would be more appropriate to refer to real observations. The "predicted" contribution refers to model outputs and can be associated with either virtual mixtures or field samples. In our opinion, making this distinction is needed when comparing "theoretical" and "predicted" contributions for virtual mixtures. We will pay more attention to the definition of these terms when revising the manuscript.
As for the sampling design, we have likely not well contextualized the objectives associated with the sampling strategy and the associated analysis, as they will be specifically addressed in another paper. However, if we have the opportunity to revise the manuscript, we will take more care to better contextualize the associated objectives, the source classification, the core sampling increment and the studied granulometric fraction.
Citation: https://doi.org/10.5194/egusphere-2023-1970-AC1
RC2: 'Comment on egusphere-2023-1970', Anonymous Referee #2, 16 Oct 2023
General Comments:
Despite the large number of fingerprinting studies identifying and quantifying sources of sediment under different conditions and scenarios, the authors highlight the need to keep working in this field, as its use is still limited by the complexity of the approach and its inherent limitations. This manuscript presents a detailed study and comparison of some of the steps followed in this kind of study and could be an initial step towards making this approach suitable for use by researchers, although it remains far from being usable by managers and farmers.
Specific comments:
Please, explain why only the 2000-63 µm fraction is kept for evaluation in these studies (Introduction) and in this particular one (L 169). Have the authors made any kind of exploratory statistical analysis to evaluate possible differences between soil/sediment particle sizes before keeping only the 2000-63 µm fraction? Could you provide any kind of information about the samples’ particle size distribution (soil and sediment)?
I miss a discussion of how the sampling design (soil and sediment, at spatial and temporal scales) is addressed in fingerprinting studies as another source of uncertainty/variability. Please include general information (Introduction) but also a bit more detail regarding this study in particular. In section 2.2, please explain better why these depth increments were chosen, what “stable land use period” means, etc.
The term “theoretical” is frequently used but it is not clear to me what the authors mean in each case. Observed from virtual mixtures? Observed from the field samples? Please, clarify the meaning respectively.
Please delete in Fig. 1 “FDNPP: Fukushima Dai-ichi Nuclear power plant”. There is no mention to it anywhere.
L 203-204/L 424-475: “To be conservative, all the sample property values should lie within the source range”, but how is conservativeness over time addressed in this study?
L 250: What does virtual tracer mean?
L 377-379. Could you please rewrite this statement? It is unclear.
L 409-410. Please, clarify.
The Conclusions section is a brief summary of the manuscript. However, I could not find much about recommendations for practitioners (L 575-578) and how to make fingerprinting studies more usable. Please expand on this.
Please, improve the legibility of all figures.
Use the same format for the reference lists in the main text and in the supplementary material. Please change “Kanonika”.
English could be checked and improved to make it more formal avoiding colloquial expressions and trying to be more precise.
Citation: https://doi.org/10.5194/egusphere-2023-1970-RC2
AC2: 'Reply on RC2', Thomas Chalaux-Clergue, 18 Oct 2023
We would like to thank the reviewer for his or her encouraging comments and suggestions, which we will carefully take into account when we revise the manuscript, should the editor give us the opportunity to do so.
Although we will address and reply to each individual comment when revising the manuscript, we would like to provide some general comments as a reply to some of the reviewers’ specific questions.
Regarding the use of “theoretical”, “observed” and “predicted” contribution terms, when generating virtual mixtures, a set of contributions is defined. These contributions can be referred to as "observed" or, as in our manuscript, "theoretical". We prefer to use the term "theoretical" as the contributions are defined by the user, whereas – in our opinion – the use of the term "observed" would be more appropriate to refer to real observations. The "predicted" contribution refers to model outputs and can be associated with either virtual mixtures or field samples. In our opinion, making this distinction is needed when comparing "theoretical" and "predicted" contributions for virtual mixtures. We will pay more attention to the definition of these terms when revising the manuscript.
As for the sampling design, we have likely not well contextualized the objectives associated with the sampling strategy and the associated analysis, as they will be specifically addressed in another paper. However, if we have the opportunity to revise the manuscript, we will take more care to better contextualize the associated objectives, the source classification, the core sampling increment and the studied granulometric fraction.
Citation: https://doi.org/10.5194/egusphere-2023-1970-AC2
Authors: Thomas Chalaux-Clergue, Pedro V. G. Batista, Nuria Martinez-Carreras, J. Patrick Laceby, and Olivier Evrard