This work is distributed under the Creative Commons Attribution 4.0 License.
Using Internal Standards in Time-resolved X-ray Micro-computed Tomography to Quantify Grain-scale Developments in Solid State Mineral Reactions
Abstract. X-ray computed tomography has established itself as a crucial tool in the analysis of rock materials, providing the ability to visualise intricate 3D microstructures and capture quantitative information about internal phenomena such as structural damage, mineral reactions, and fluid-rock interactions. The efficacy of this tool, however, depends significantly on the precision of image segmentation, a process that has seen varied results across different methodologies, ranging from simple histogram thresholding to more complex machine learning and deep learning strategies. The irregularity in these segmentation outcomes raises concerns about the reproducibility of the results, a challenge that we aim to address in this work.
In our study, we employ the mass balance of a metamorphic reaction as an internal standard to verify segmentation accuracy and shed light on the advantages of deep learning approaches, particularly their capacity to efficiently process expansive datasets. Our methodology utilises deep learning to achieve accurate segmentation of time-resolved volumetric images of the gypsum dehydration reaction, a process that traditional segmentation techniques have struggled with due to poor contrast between reactants and products. We utilise a 2D U-net architecture for segmentation and introduce labelled training data obtained through machine learning (specifically, random forest classification) as an innovative solution to the limitations of training data obtained from imaging. The deep learning algorithm we developed has demonstrated remarkable resilience, consistently segmenting volume phases across all experiments. Furthermore, our trained neural network exhibits impressively short run times on a standard workstation equipped with a Graphics Processing Unit (GPU). To evaluate the precision of our workflow, we compared the theoretical and measured molar evolution of gypsum to bassanite during dehydration. The errors between the predicted and segmented volumes in all time-series experiments fell within the 2 % confidence intervals of the theoretical curves, affirming the accuracy of our methodology. We also compared the results obtained by the proposed method with standard segmentation methods and found a significant improvement in the precision and accuracy of the segmented volumes. This makes the segmented CT images well suited for extracting quantitative data, such as variations in mineral growth rate and pore size during the reaction.
In this work, we introduce a distinctive approach by using an internal standard to validate the accuracy of a segmentation model, demonstrating its potential as a robust and reliable method for image segmentation in this field. This ability to measure the volumetric evolution during a reaction with precision paves the way for advanced modelling and verification of the physical properties of rock materials, particularly those involved in tectono-metamorphic processes. Our work underscores the promise of deep learning approaches in elevating the quality and reproducibility of research in the geosciences.
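To make the internal-standard idea concrete, the sketch below computes the theoretical solid-phase volumes expected during the dehydration reaction CaSO4·2H2O → CaSO4·0.5H2O + 1.5 H2O from its 1:1 molar stoichiometry, so they can be compared against segmented voxel volumes at each time step. This is a minimal illustration, not the authors' code: the molar volumes are nominal literature values and should be replaced with the thermodynamic data used in the paper.

```python
import numpy as np

# Nominal molar volumes in cm^3/mol (illustrative literature values,
# not taken from the paper; substitute your own thermodynamic data).
VM_GYPSUM = 74.7     # CaSO4·2H2O
VM_BASSANITE = 53.2  # CaSO4·0.5H2O

def theoretical_solid_volumes(xi, v_gypsum_initial):
    """Gypsum and bassanite volumes at reaction progress xi (0..1).

    Assumes the 1:1 molar stoichiometry of the dehydration reaction:
    every mole of gypsum consumed produces one mole of bassanite.
    """
    n0 = v_gypsum_initial / VM_GYPSUM       # initial moles of gypsum
    v_gyp = n0 * (1.0 - xi) * VM_GYPSUM     # unreacted gypsum volume
    v_bas = n0 * xi * VM_BASSANITE          # newly formed bassanite volume
    return v_gyp, v_bas

# Segmented volumes (voxel counts times voxel volume) can then be
# checked against these curves: a segmentation passes the internal-
# standard test if its measured volumes track them within tolerance.
xi = np.linspace(0.0, 1.0, 11)
v_gyp, v_bas = theoretical_solid_volumes(xi, v_gypsum_initial=1000.0)
```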
-
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
-
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-1819', Anonymous Referee #1, 22 Sep 2023
I believe the paper offers a significant contribution to scientific progress within the scope of the journal, and it will be of wide interest to the geoscience community, particularly in the field of 3D mineralogy and petrology. This paper introduces a new workflow for digital image processing, and aims to revolutionize current methodologies of XCT image processing in the field of in-situ time-evolving synchrotron XCT datasets, which are often very large in size and time-consuming to analyse due to the challenges of low contrast, noise, and evolving mineral phases. I believe the proposed workflow is quite robust and valid, as the authors cross-check the accuracy of the proposed method against the theoretical molar evolution of gypsum to bassanite during the reaction, and their measured values fall within the 2 % confidence intervals. In addition, they provide a robust presentation and comparison with other, more traditional methods of image segmentation, with and without data augmentation or machine-learning labelled ground truth data, and also provide suggestions to make the method even less time-consuming. Overall, I believe this method would be applicable in many fields and will greatly improve digital image analyses in time-evolving synchrotron XCT datasets, or even standard XCT scans where mineral phases may overlap in intensity.
I would be interested in seeing how the accuracy of this workflow can be checked when there is no prior knowledge of a reaction, or when there is no theoretical curve to check it against. This might be the case for other geological processes, where, for example, different mechanisms play a role and the molar volumes of the mineral phases are not so well known. How do the authors propose to check their workflow in those instances? It would be good to include this explanation in the discussion.
I found that the overall presentation quality could be improved, as some concepts mentioned in the paper are too technical for a non-expert audience and need explanation.
For instance, many readers will not be familiar with terms such as “supervised deep learning”. What is a supervised deep learning method? How does it differ from an unsupervised one? The authors mention both concepts in the paper, yet they fail to explain what they are and how they differ. They also did not explain why they chose one rather than the other. It would be good to at least explain the difference between the two methods (since both are mentioned) and why the authors made that choice, so that readers can better understand what may or may not work in other contexts where this workflow may help in the analysis.
Furthermore, when possible, terminology belonging to machine learning should be avoided, as this journal covers a great variety of topics, and while some readers may be familiar with terms such as “(hyper)-parameters”, these may not always be clear to a non-expert reader. Why are they (hyper)-parameters and not just parameters? I would suggest avoiding such technical terminology when possible, or, if needed, explaining it. The authors explain what (hyper)-parameters they used, but it is not clear what (hyper)-parameters are.
Some concepts are introduced without explanation, or, if there is one, it is presented in a different section. I flagged this in the commented text where I could: for instance, Random Forest is not cross-referenced with the corresponding section in the Appendix. I think introducing cross-references to these sections next to the concept would help non-expert readers (example: random forest, Sect. 3.3, Appendix X).
The paper also contains some minor typos and small inaccuracies, such as acronyms that are not introduced when first used.
Some of the figures could be improved and made bigger (I am not sure if this is due to the formatting of the generated PDF). It would be good to have an overall figure (with the celestite grain) showing all the steps, including the post-processing clean-up.
Overall, I think the paper is a great scientific contribution to the community. Provided that minor revisions are made (specifically targeting the improvement of clarity for non-expert readers), I suggest that the paper is accepted for publication.
-
AC1: 'Reply on RC1', Roberto Emanuele Rizzo, 08 Dec 2023
We sincerely thank Reviewer #1 for their thorough review and positive feedback on our manuscript. We are pleased to know our work is considered a valuable contribution to the geosciences community. We appreciate the constructive comments, which we have largely accepted and incorporated into the manuscript. We have addressed the comments in the attached PDF file, which also includes responses to the comments made on the PDF version of the manuscript. The text is colour-coded, with the Reviewer's comments in black and our responses in blue.
Sincerely,
Roberto Rizzo
-
RC2: 'Comment on egusphere-2023-1819', Richard A. Ketcham, 23 Oct 2023
This is a very nice study, using knowledge of the physical system as a means of benchmarking machine learning approaches to segmentation. It will certainly be of interest to the community, and is close to being ready to publish as-is. There are only a few items that might need a little more work.
Overall this method seems to work quite well on this system. However, it relies on manual segmentation to create the training data. This works fine for the more massive phases, but for the ones that have one or more small dimensions relative to the data (celestite, and more importantly water-filled porosity) many of the voxels suffer partial-volume or blurring effects. The segmentation of the porosity in Fig. 3e is very chunky in comparison to how the porosity really is (thin grain boundary layers). Basically, everything with a hint of darkness is being called porosity, but many of those voxels represent mixtures between water and solid material. This makes the method a bit less repeatable, and is likely why the water always comes out a bit high in Fig. 7. This issue is recognized by the authors in lines 341-345, but not given context on how it is affecting the analysis presented. Also, the calibration of CT values does not necessarily need to rely on external calibration – you can probably just measure the grayscales off your images (and use the volume balance in your system to check that your results are about right). Since this would essentially be a post-processing step reinterpreting some segmented voxels, it seems a complementary add-on.
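As a concrete reading of the grayscale-reinterpretation suggestion above, a minimal sketch is given below. It assumes linear two-phase mixing between a pure-water and a pure-solid grayscale value, both of which would be measured from the images themselves (e.g. from histogram peaks of unambiguous pore and grain interiors); the function name and inputs are illustrative, not from the paper.

```python
import numpy as np

def partial_volume_water_fraction(gray, g_water, g_solid):
    """Estimate the water fraction of voxels from their grayscale value.

    Assumes linear two-phase mixing: a voxel at g_water is pure water,
    one at g_solid is pure solid, and intermediate values mix linearly.
    """
    frac = (g_solid - gray.astype(float)) / (g_solid - g_water)
    return np.clip(frac, 0.0, 1.0)

# Applied only to voxels already labelled as porosity, summing the
# fractions yields a sub-voxel water-volume estimate that can be
# checked against the volume balance of the reaction.
```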
Detailed comments:
[line 72] Replace “results” with “provides”
[lines 163-165] This sentence is garbled (“histogram thresholding… or… histogram thresholding”?). Rewrite.
[Table 1] It seems odd that the average DICE for Model E is higher than that for Model RF even though the former scores worse and much worse on two phases. How was the average calculated? (A sketch of two common Dice-averaging conventions follows these detailed comments.)
[line 314] What does “(imaged as porosity [in] the μCT data)” mean? Was the water missing/drained, or is the water just being called porosity because it’s pore-filling?
[line 331] Another issue with ML-segmentation is that the training may only work well for very similar conditions of material, geometry, imaging parameters, etc.
[line 344-345] Another approach might be re-interpretation of voxels using grayscale, so as to assign affected voxels partial values.
[line 377] “In future iterations of our method aims to expand…”: change to “we aim” or “Future iterations of our method will aim”
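One plausible answer to the [Table 1] question above is that the reported average is frequency-weighted rather than a plain mean over phases, in which case a model that does well on the volumetrically dominant phases can out-average one that is better on minor phases. The sketch below shows both conventions; it is an illustration, not the authors' actual computation.

```python
import numpy as np

def dice_per_phase(pred, truth, labels):
    """Dice coefficient for each phase label of a multi-class mask."""
    scores = {}
    for k in labels:
        p, t = (pred == k), (truth == k)
        denom = p.sum() + t.sum()
        scores[k] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores

def macro_average(scores):
    # Plain mean: every phase counts equally, however small.
    return sum(scores.values()) / len(scores)

def weighted_average(scores, truth, labels):
    # Frequency-weighted mean: volumetrically dominant phases dominate.
    w = np.array([(truth == k).sum() for k in labels], dtype=float)
    s = np.array([scores[k] for k in labels])
    return float((w * s).sum() / w.sum())
```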
Citation: https://doi.org/10.5194/egusphere-2023-1819-RC2
-
AC2: 'Reply on RC2', Roberto Emanuele Rizzo, 08 Dec 2023
We thank Prof. Richard Ketcham for his comments and suggestions on our manuscript. We are pleased to learn that he found this work to be a valuable contribution. In the attached PDF file, the responses to his comments are colour-coded in blue. We hope our responses adequately address the issues raised.
Sincerely,
Roberto Rizzo
-
RC3: 'Comment on egusphere-2023-1819', Luke Griffiths, 03 Nov 2023
Dear editor and authors,
Summary: This paper presents a workflow for multi-class segmentation of X-ray computed tomography images of rock during in situ testing. The authors aim to address the reproducibility issues commonly associated with this process through employing a deep learning approach, trained on the results of a feature-based random forest classification. The results of the U-net model are compared to a global thresholding method. They innovatively use mass balance to provide an external measure of the relative volume of constituents, allowing for a direct comparison with the segmentation results.
The paper is well-written and the background and motivation are clearly outlined. I have some comments on the methodology, for which some choices need to be explained in greater detail and potentially modified where appropriate to compare the models and evaluate their performance in a more robust way. I found the external verification of the multi-class segmentation to be a clear strong point of the paper, and I think this data could be further used as a priori information to guide the models during training and provide robust segmentation (but perhaps this is beyond the scope of this study).
I have provided comments below, and comments in the attached PDF on specific parts of the text.
I recommend publication after minor revisions, most importantly to address the questions on the segmentation methodology.
Comments:
- The distinction between pixel or feature-based methods and convolutional methods could perhaps be made clearer earlier on – the assumption is that convolutional methods work better because they are able to better capture spatial information but I was expecting this point to have more weight in the introduction.
- Was there any attempt to use histogram matching and/or calibration of greyscale to values of known materials (e.g. for pieces of steel that are not expected to change between scans)? This can work quite well to allow for direct comparison of time-lapse images and quantitative analysis.
- The ML approaches are compared to simple histogram (global?) thresholding – perhaps even adaptive/local thresholding or watershed segmentation would perform much better and might be a fairer comparison. (A minimal sketch of such baselines follows this list.)
- How did you label the training data? Do you label all pixels within an image or a subset of pixels for which you are confident in their labelling (e.g. only pixels well within a grain)?
- How was the validation set chosen? Equally across all images from each time step? I would be interested to see how the model performs if the validation set contained all images of a given time step (therefore not present in the training set). As it is argued that the DL method is more robust and consistent, doing the test-train split in this way could better illustrate this point.
- How does the RF model alone compare to the U-net model? For example, compared to the manually labelled images.
- How were the features chosen for the RF model? How much better could the RF model become if more features are added and/or features with larger windows than 3x3? Could it rival the U-net model?
- Perhaps I misunderstand, but it is a bit confusing to use “ground truth” and validation data interchangeably (is this indeed the case?). For example, are the “ground truths” used for comparison in Table 1 the same for each method? I think, ideally, a ground truth should be the best possible segmentation (e.g., one that was labelled manually or using whichever is deemed the best result from all these models) and it should be the same reference for all models. It can be misleading to evaluate the models on how well they perform on their validation data, because the quality of the validation data varies depending on the method used to label it (i.e., thresholding or RF). At the very least, this should be made more clear by dropping the “ground truth” terminology. Ideally, the models should be re-evaluated against the same “best-case” segmentation.
- Are you able to give a formal comparison of the methods based on the ground truth from the calculated phase volume changes? That seems to be like a good way to evaluate their performance.
- More of an observation as it is perhaps beyond the scope of this work: it would be very interesting to see if the measured volumetric evolution could be used to constrain the models used for segmentation. This is what I was anticipating based on the title.
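For context on the adaptive/local-thresholding and watershed baselines suggested in the list above, here is a minimal two-class sketch using scikit-image. It assumes a single 2D grayscale slice with dark pores and bright solids; the marker thresholds are illustrative, and the paper's actual problem is multi-class, so this is only a starting point for a fairer comparison.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu, threshold_local
from skimage.segmentation import watershed

def local_threshold_baseline(image, block_size=51):
    """Adaptive baseline: the threshold varies across the image."""
    t_local = threshold_local(image, block_size, method="gaussian")
    return image > t_local  # True = solid, False = pore

def watershed_baseline(image):
    """Marker-based watershed: conservative Otsu-derived markers are
    flooded along the gradient magnitude of the image."""
    t = threshold_otsu(image)
    markers = np.zeros(image.shape, dtype=np.int32)
    markers[image < 0.8 * t] = 1  # confidently dark (pore)
    markers[image > 1.2 * t] = 2  # confidently bright (solid)
    gradient = ndi.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    return watershed(gradient, markers)
```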
-
RC4: 'Reply on RC3', Luke Griffiths, 03 Nov 2023
"U-net model are compared to a global thresholding method" - should read "U-net models trained on RF-generated labels are compared to U-net models trained on labels made using thresholding"
Citation: https://doi.org/10.5194/egusphere-2023-1819-RC4
-
AC4: 'Reply on RC4', Roberto Emanuele Rizzo, 08 Dec 2023
Thanks for clarifying this. :-)
Citation: https://doi.org/10.5194/egusphere-2023-1819-AC4
-
AC3: 'Reply on RC3', Roberto Emanuele Rizzo, 08 Dec 2023
We are sincerely grateful to Dr. Luke Griffiths for his thorough and detailed review of our manuscript. We have addressed the comments in the PDF file attached here, which also includes our responses to the comments made on the PDF version of the manuscript. Our responses are colour-coded in blue.
Sincerely,
Roberto Rizzo
Data sets
Deep learning model: Roberto Emanuele Rizzo, https://doi.org/10.7488/ds/7493
Volumetric data for sample VA17: Florian Fusseis, https://doi.org/10.16907/8ca0995b-d09b-46a7-945d-a996a70bf70b
Volumetric data for sample VA19: Florian Fusseis, https://doi.org/10.16907/a97b5230-7a16-4fdf-92f6-1ed800e45e37
Model code and software
Scripts and data for recreating the figures: Roberto Emanuele Rizzo, https://doi.org/10.7488/ds/7493
Viewed

| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 404 | 195 | 33 | 632 | 21 | 15 |
Roberto Emanuele Rizzo
Damien Freitas
James Gilgannon
Sohan Seth
Ian B. Butler
Gina Elisabeth McGill
Florian Fusseis