This work is distributed under the Creative Commons Attribution 4.0 License.
Short Communication: age2exhume – A Matlab script to calculate steady-state vertical exhumation rates from thermochronologic ages in regional datasets and application to the Himalaya
Abstract. Interpreting cooling ages from multiple thermochronometric systems and/or from steep elevation transects with the help of a thermal model can provide unique insights into the spatial and temporal patterns of rock exhumation. This information can, in turn, provide clues to the driving mechanisms of landscape evolution. Although several well-established thermal models allow for a detailed exploration of how cooling (and sometimes exhumation) rates evolved in a limited area or along a transect, information from large, regional datasets has been largely underutilized. Here, we present age2exhume, a thermal model in the form of a Matlab script, which can be used to rapidly provide a synoptic overview of exhumation rates from large, regional thermochronologic datasets. The model incorporates surface temperature based on a defined lapse rate and a local relief correction that is dependent on the thermochronometric system of interest. Other inputs include sample cooling age, uncertainty, thermochronometric system, and an initial (unperturbed) geothermal gradient. The model is simplified in that it assumes steady, vertical rock-uplift and unchanging topography when calculating exhumation rates. For this reason, it does not replace more powerful and versatile thermal-kinematic models, but it has the advantage of simple implementation and rapidly calculated results. In our example dataset, we show exhumation rates calculated from 1785 cooling ages from the Himalaya associated with five different thermochronometric systems. Despite the synoptic nature of the results, we show how they reflect known segmentation patterns and changing exhumation rates in areas that have undergone structural reorganization. Moreover, the rapidly calculated results enable an exploration of the sensitivity of the results to various input parameters, and an illustration of the importance of explicit modelling of thermal fields when calculating exhumation rates from thermochronologic data.
Notice on discussion status: the requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
- Preprint (4264 KB)
- Supplement (132 KB)
- Final revised paper
Interactive discussion
Status: closed
- RC1: 'Comment on egusphere-2022-888', Matthew Fox, 16 Sep 2022
The submitted manuscript by Peter van der Beek and Taylor Schildgen provides more detail on code that was first published in Schildgen et al. (Nature, 2018). The method is already published and does not have any major flaws. Furthermore, it is based on earlier work by Mark Brandon (1997). Therefore, the paper could be published as is.
However, I would like to highlight some issues with this approach in general and think that mentioning these ideas would strengthen the paper.
The authors use the unperturbed initial geothermal gradient as input for calculating geothermal gradients, and the modern geothermal gradient is not really discussed. Furthermore, there is no way to link samples in space. One of the advantages of codes designed to exploit age-elevation relationships (Pecube, QTQt, GLIDE) is that the results should be less sensitive to the geothermal gradient and there is a means to estimate the geothermal gradient. Basing the interpretation on an unperturbed geotherm does not provide that advantage and I feel that we are losing useful information.
It is also unclear how well known this unperturbed geotherm can ever be, or even whether it makes sense. For example, if a sample has a known ZFT age, a geotherm that is consistent with an initial value and exhumation rate can be determined using the code. If this same sample also has an AHe age, why would it make sense to use the initial geothermal gradient before any exhumation as opposed to the geothermal gradient inferred from the older ZFT age? This geothermal gradient is surely an improvement over the unperturbed gradient.
This concept raises the issue that a single sample may predict different exhumation rates for the modern period but also completely different geothermal gradients. This means that if we are lucky enough to have an estimate of the modern geothermal gradient close by, we would have lots of internally inconsistent pieces of evidence on the exhumation rate. We would also have no way forward with this steady-state approach.
The paper is very critical of GLIDE (Fox et al., Esurf, 2014) and does not really highlight our response to the earlier criticism of Schildgen et al. (Nature, 2018). For example, on line 125 please consider changing the language from “Moreover, it has been shown that the code translates abrupt spatial variations in thermochronological ages, such as across faults, into temporal increases in exhumation rates (Schildgen et al., 2018).” to “Moreover, it has been argued that the code translates abrupt spatial variations in thermochronological ages, such as across faults, into temporal increases in exhumation rates (Schildgen et al., 2018).” This is because we have argued that this is not the case in Willett et al. (Esurf, 2020). Also, Fox and Carter (Geosciences, 2020) argued that Schildgen et al. (2018) did not show that most sampled regions on Earth may have insufficient data coverage for unbiased prediction of exhumation-rate histories using GLIDE. Instead, the synthetic data produced for the analyses carried out by Schildgen et al. (2018) significantly changed the temporal and spatial resolution of the data with respect to the real data. In other words, the spatial and temporal resolution for the real ages from the Western Alps is sufficient because there are old and young ages on either side of the fault. In contrast, the spatial and temporal resolution for synthetic ages from the Western Alps is insufficient because of the dramatically different age distributions. This is a complicated issue and it is not appropriate to simplify it to such a degree here. Finally, on line 129, the authors argue that GLIDE is as slow as Pecube. However, this is almost certainly not the case if the areas are the same size and if Pecube is run in inverse mode. In fact, this is one of the reasons we developed GLIDE. GLIDE does not use the same half-space solution as the Willett and Brandon code. It uses a numerical model with a flux boundary condition. Changing this to a fixed boundary condition requires uncommenting one or two lines.
It is unclear how δh is actually calculated. Where does δh appear in the flow chart in Figure 1B? This is crucial because the closure depth can change by over 100% during the iterative process. How is this accounted for during the whole process? The code requires that users extract δh from a different piece of software before the code can be used. It is unclear how long this takes, so it seems unfair to argue that the approach presented here is particularly fast without accounting for the time needed to run that other software.
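For orientation, one plausible way to build such a relief correction (an illustrative reconstruction, not the documented age2exhume procedure; the function name and the wavelength-to-sigma rule below are assumptions) is to difference the sample elevation against the mean elevation smoothed over a wavelength tied to the closure depth:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def relief_correction(dem, row, col, wavelength_km, cell_km):
    """Hypothetical delta-h: sample elevation minus the mean elevation
    smoothed over a wavelength tied to the closure depth of the system.
    Illustrative only -- not the documented age2exhume procedure."""
    sigma_cells = (wavelength_km / cell_km) / (2.0 * np.pi)  # Gaussian sigma from cutoff wavelength
    smoothed = gaussian_filter(dem, sigma=sigma_cells)
    return dem[row, col] - smoothed[row, col]

# Toy DEM in km: ridge-and-valley topography with 10 km spacing
x = np.linspace(0.0, 50.0, 500)
dem = 3.0 + 1.5 * np.sin(2.0 * np.pi * x / 10.0) * np.ones((500, 1))
print(f"delta h ~ {relief_correction(dem, 250, 250, wavelength_km=20.0, cell_km=0.1):.2f} km")
```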
I don’t understand why the upper boundary condition of the thermal model has a temperature evaluated at the sample elevation. Surely the thermal model should have an upper temperature boundary condition evaluated at h_ave. Much more information is required to actually understand this part of the code. For example, if you have two samples from exactly the same geographic position but separated vertically by 2 km, what is the thickness of the thermal model? If the surface temperature is used for the two different thermal models, the geothermal gradients will be different for the whole thickness of the crust. Why would the closure elevation (with respect to the centre of the earth) depend on the sample elevation? Given that most data are collected with the aim of producing age-elevation relationships, this needs to be discussed and considered. In addition, it is not clear why a constant thermal model thickness is used for the whole of the Himalaya. Surely if L represents a crustal thickness, the model should get much thicker as the topographic elevations increase to account for isostasy.
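For readers weighing these boundary-condition questions, the steady-state geotherm at issue is the classic 1D conductive-advective profile between fixed surface and basal temperatures, T(z) = Ts + (TL - Ts)(1 - exp(-vz/kappa))/(1 - exp(-vL/kappa)). A minimal sketch under assumed parameter values (not the code's defaults) shows how strongly the near-surface gradient depends on the exhumation rate:

```python
import numpy as np

def steady_geotherm(z_km, v, L=30.0, Ts=0.0, G0=25.0, kappa=25.0):
    """Steady 1D conductive-advective geotherm with fixed temperatures at the
    surface (Ts, degC) and at depth L (T_L = Ts + G0*L, set by the unperturbed
    gradient G0). kappa in km^2/Myr; v in mm/yr (= km/Myr). Illustrative
    values, not the defaults of age2exhume."""
    TL = Ts + G0 * L
    pe = v * L / kappa  # Peclet number
    return Ts + (TL - Ts) * (1.0 - np.exp(-v * z_km / kappa)) / (1.0 - np.exp(-pe))

for v in (0.1, 0.5, 2.0):  # exhumation rates in mm/yr
    grad = (steady_geotherm(0.1, v) - steady_geotherm(0.0, v)) / 0.1
    print(f"v = {v:3.1f} mm/yr -> near-surface gradient ~ {grad:4.1f} degC/km")
```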
There is a lot of discussion on thermal boundary conditions. I don’t think detrital thermochronology tells us anything about how appropriate a fixed temperature at 30 km depth is over tens of millions of years. Since the earliest studies in the Alps by Wagner, it has been shown that the focus of exhumation has been variable. Vernon et al. (EPSL, 2008) also showed that it was variable using iso-age surfaces. Even the results presented in Figure 4 show that exhumation rates have changed at specific locations. A constant lag time more likely tells us that there is an area that is eroding at a common rate through time, but that the location of that area might be variable. This seems reasonable given that the maximum erosion rates likely depend on long-term tectonics and regional climate.
The thermal half-space solution was also used by Moore and England (2001). They elegantly showed that by accounting for transient advection of heat, data that had been interpreted as recording accelerating erosion rates were actually consistent with a constant exhumation rate. This study is not discussed at all, but its result highlights the need to account for transient advection of heat. Without accounting for this, there is the possibility of a systematic bias towards inferring accelerating exhumation rates as geotherms steepen during exhumation. This does not only happen when using a half-space solution or a flux boundary condition; it is also predicted with a fixed boundary condition. We also discussed this in Fox and Carter (Geosciences, 2020).
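To see the transient effect Moore and England (2001) emphasized, a small finite-difference experiment (an illustrative sketch with assumed parameters, not part of age2exhume) shows the near-surface gradient continuing to steepen for tens of Myr after exhumation at a constant rate begins, which is the source of the potential bias described above:

```python
import numpy as np

# Explicit finite differences for dT/dt = kappa*T_zz + v*T_z (z positive down,
# rocks advected upward at constant v). Illustrative parameter values only.
kappa, v, L, G0 = 25.0, 1.0, 30.0, 25.0   # km^2/Myr, km/Myr (= mm/yr), km, degC/km
nz = 151
z = np.linspace(0.0, L, nz)
dz = z[1] - z[0]
dt = 0.4 * dz**2 / kappa                   # within the explicit stability limit
T = G0 * z                                 # start from the unperturbed geotherm
t, report = 0.0, 20.0
while t < 60.0:                            # 60 Myr of steady exhumation
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
    adv = (T[2:] - T[:-2]) / (2.0 * dz)
    T[1:-1] += dt * (kappa * lap + v * adv)
    T[0], T[-1] = 0.0, G0 * L              # fixed surface and basal temperatures
    t += dt
    if t >= report:                        # the near-surface gradient keeps steepening
        print(f"t ~ {t:4.1f} Myr: dT/dz(surface) ~ {(T[1] - T[0]) / dz:.1f} degC/km")
        report += 20.0
```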
Figure 4. I find the presentation of the results very hard to read. It is almost impossible to visualize whether rates are increasing or decreasing. It also requires people to be familiar with the closure temperatures of the different systems. The fact that a single sample can be associated with 5 different modern exhumation rates is clearly a problem and really defeats the purpose of collecting ages using different systems. Please make a map of the final geothermal gradients as well. How variable are they at a single location and is this a useful result?
Line 51: Please provide more details about what the original age2edot actually did. It is unclear whether it actually solved for ages given exhumation rates.
Figures 5 and 6. Please also show the actual values of the variables that are changing. It would be nice to know that a value changes from X to Y as opposed to just the percentage change.
Citation: https://doi.org/10.5194/egusphere-2022-888-RC1
- AC1: 'Reply on RC1', Peter van der Beek, 10 Oct 2022
- RC2: 'Comment on egusphere-2022-888', David Whipp, 10 Oct 2022
Summary
van der Beek and Schildgen present an overview of a numerical model written in Matlab for the purpose of rapidly calculating estimates of rates of exhumation from regional thermochronology datasets. Their new software uses a steady-state analytical solution to the heat transfer equation with fixed-temperature boundary conditions in combination with analytical solutions for estimating thermochronometer ages, which allows for efficient calculation of effective closure temperatures to find exhumation rates that reproduce the input age(s). The model aims to fill a void in the available options for interpreting thermochronometer age data, providing estimates of exhumation rates for large thermochronometer age datasets (in contrast to models designed for calculating only thermal histories), but with less complexity and computational overhead than other popular software. They demonstrate both the model's use for exploring exhumation rates in large datasets and its sensitivity to changes in the input parameters using a large age dataset from the Himalaya, showing that major observations from more detailed, local studies are also captured in their model. Overall, the manuscript is clear and well written, and it will certainly appeal to readers of Geochronology. However, there are a few places where the text could be improved in revision, which are detailed below.
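As a concrete picture of the workflow this summary describes, the following is a schematic root-finding sketch, not the authors' Matlab implementation: all kinetic and thermal parameter values, and the simple v*G0 cooling-rate approximation, are assumptions for illustration. It guesses an exhumation rate, computes the steady geotherm, estimates a Dodson closure temperature, converts it to a closure depth, predicts an age, and adjusts the rate until the predicted age matches the observation:

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314e-3  # gas constant, kJ/(mol K)

def dodson_tc(cool_rate, Ea=138.0, A=55.0, D0_a2=4.4e19):
    """Dodson (1973) effective closure temperature (K) for a cooling rate in
    K/Myr, iterated to a fixed point. Kinetic values loosely appropriate for
    apatite (U-Th)/He are assumed for illustration (Ea in kJ/mol, D0/a^2 in 1/Myr)."""
    Tc = 350.0
    for _ in range(50):
        Tc = Ea / (R * np.log(A * R * Tc**2 * D0_a2 / (Ea * cool_rate)))
    return Tc

def predicted_age(v, L=30.0, G0=25.0, kappa=25.0, Ts=273.0):
    """Age (Myr) predicted for steady exhumation at v (km/Myr): find the depth
    zc where the steady advective geotherm reaches the Dodson closure
    temperature, then age = zc / v. Cooling rate is approximated as v*G0."""
    TL = Ts + G0 * L
    geotherm = lambda zz: Ts + (TL - Ts) * (1 - np.exp(-v * zz / kappa)) / (1 - np.exp(-v * L / kappa))
    Tc = dodson_tc(max(v * G0, 1.0))
    zc = brentq(lambda zz: geotherm(zz) - Tc, 1e-3, L)  # closure depth, km
    return zc / v

obs_age = 3.0  # Myr, hypothetical AHe cooling age
v_fit = brentq(lambda vv: predicted_age(vv) - obs_age, 0.01, 10.0)
print(f"exhumation rate ~ {v_fit:.2f} km/Myr")
```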
Main comments
1. There are several places in the text where a choice in the design of age2exhume is suggested to be better than equivalent choices in other models, such as for the boundary conditions. While the authors do justify their reasoning for making such suggestions, model design choices are often based on what is most valid for the physical system being studied. Choosing to use a half-space thermal model with a fixed-gradient surface boundary condition will ensure that irrespective of the input exhumation rate, the temperature gradient will remain fixed and honor, for example, borehole temperature measurements / surface heat flow observations. While this is a challenge when working with regional thermochron datasets, it is also an observation that can be used to define the range of allowable solutions that fit the observed age dataset. Because thermochronometer ages are non-unique, it can be otherwise difficult to exclude models that produce the correct ages, but violate other observations. Thus, it would be good to include some text in the discussion that notes some of the limitations of the choice of design in age2exhume for completeness.
2. Another consideration is the use of Dodson's method to estimate effective closure temperatures. While it is simple and efficient for such calculations, there are also behaviors observed in some thermochronometer systems that Dodson's approach does not capture, possibly resulting in difficulties interpreting some age data. For example, it is expected that the effective closure temperature for zircon (U-Th)/He ages will decrease with increasing cooling rate for low-eU zircons (e.g., Guenthner et al., 2013; Whipp, Kellett et al., 2022), the opposite of the cooling-rate dependence Dodson's formulation predicts (see the sketch after this list). This limitation is not discussed, and similar to the comment above, would be a good thing to mention in the discussion section somewhere.
3. Based on the two comments above, I would suggest adding a section to the Discussion and Conclusions, perhaps titled "Caveats (or assumptions or limitations) and recommended use cases". Here the authors could combine the points related to the limitations of the model into a single section, and also make suggestions for the best applications of the model by future studies. Some of the text from the current section 4.2 could be moved here (lines 317-330), as well as some additions to the text based on the comments above. Perhaps the authors could also comment on the lack of heat production in the geotherm calculation and how that might affect the estimates of exhumation rates. This is not strictly necessary, but may make it easier for users looking for a model to apply to their data to know what the limitations are and be able to easily evaluate whether age2exhume is suitable for their needs. I suspect age2exhume will be a valuable modelling contribution, and more users may choose to utilize the code if they clearly see it would be a good fit for interpreting their data.

4. Finally, I appreciate the authors making the software source code available, but also feel it is worth noting that the choice to provide the software as a Matlab script is somewhat unfortunate. Matlab is a commercial software product, and if the software were available in another language (such as Python or Julia), it would be accessible to a larger user base at no cost. Similarly, it would be nice to have instructions for calculating δh using freely available GIS software such as QGIS. I am not requesting the authors rewrite their code, but wanted to emphasize there is a limitation in providing Matlab code.
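To make comment 2 concrete, here is a short sketch of Dodson's closure-temperature calculation (the kinetic values are order-of-magnitude assumptions, not parameters from the manuscript) showing the monotonic increase of Tc with cooling rate that low-eU zircon data can contradict:

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol K)

def dodson_tc(cool_rate, Ea, A, D0_a2):
    """Dodson (1973) closure temperature (K) for a cooling rate in K/Myr."""
    Tc = 450.0
    for _ in range(50):  # fixed-point iteration converges quickly
        Tc = Ea / (R * np.log(A * R * Tc**2 * D0_a2 / (Ea * cool_rate)))
    return Tc

# Order-of-magnitude zircon (U-Th)/He-like kinetics assumed for illustration
# (cf. Reiners et al., 2004): Ea ~ 169 kJ/mol, D0/a^2 ~ 4e17 /Myr, A = 55 (sphere).
for rate in (1.0, 10.0, 100.0):
    Tc = dodson_tc(rate, Ea=169.0, A=55.0, D0_a2=4.0e17)
    print(f"cooling {rate:6.1f} K/Myr -> Tc ~ {Tc - 273.15:.0f} degC")
# Dodson's Tc rises with cooling rate; damage-dependent diffusivity in low-eU
# zircon can produce the opposite trend, which this formulation cannot capture.
```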
Specific remarks (L = line number)
L15: I know it may not be easy, but is there some kind of clarification that could be used to distinguish a "limited area" from a regional dataset? Is there a certain spatial scale or dataset size that could be used to emphasize when your model should be considered?
L16: I suppose "largely underutilized" may be somewhat subjective, but perhaps the point here is that it can be challenging to interpret these larger datasets. People have done it (e.g., Herman et al., 2013; Thiede and Ehlers, 2013), but there are not so many tools that are easily applied for such a task (as you say in the introduction). Would it be better to emphasize the challenge and lack of tools here too?
L85: While I can see the authors' point here, particularly for large age datasets, it is also true that failing to consider the constraint that surface heat flow could provide on the range of allowable models that fit the data is also problematic. Heat flow is an observation that can be used to constrain the model parameter space, and it may be practically easier to exclude it given the paucity of measurements, but that is not necessarily a better solution.
L89-90: I have not used the code cited here, but based on the plots in the manuscript I cannot see an obvious reason why more rapid exhumation rates would be problematic for code stability. Could you please clarify what the issue is here and possibly why it arises?
L101-102: This is again an issue about choice of model design and boundary conditions. While the point raised here about a half-space thermal model perhaps not reproducing constant lag times for regions with a constant rate of exhumation is valid, it is also true that it may be difficult to define a reference temperature in the crust or lithosphere that is fixed over the timescale of orogenic growth and exhumation. For instance, Moho temperatures can change significantly during orogenesis, which is an effect that requires a user to choose an initial geothermal gradient in order to produce a Moho temperature they feel is applicable to the study region. This alone may result in the need to have different initial geothermal gradients for different thermochronometer systems and effective closure temperatures.
L106: This is a nit-picky thing, but HeFTy is not exclusively an inverse model, but one that can be run as a forward or inverse model.
L116: "neighbourhood algorithm (Sambridge, 1999)" rather than "natural-neighbor algorithm".
L145: Perhaps it could be clarified that the closure temperature estimate is a value the user should provide, not a calculated value. The text says this, but it may help to note the user should select this as an input value (right?).
L148: Would it be better to say that "an estimate of Ts is calculated..." rather than "Ts is estimated..."?
L168: Is there any justification for the threshold value of the exhumation rate change here?
L174: Somewhere in the text above it may be helpful to clarify precisely how the initial geothermal gradient is used, to avoid any possible confusion about it being treated as an initial condition. As a reader familiar with modelling, this is clear to me, but it is possible some less familiar readers could be confused. Thus, it may help to state it is used only to estimate zc at the start of the simulation and to define the basal boundary condition. Just a suggestion, but one that might help readers better understand how the model works.
L195-197: This seems like a very useful point and possible way in which new users would really apply the code. Would it be possible to somehow emphasize this point more in the text?
L216-217: Again, this is somewhat nit-picky, but the model does not make assumptions, the programmer does. In this case, I would suggest rephrasing to say "...time in the model." rather than "...time made by the model".
L243-244: Is the reference here citable?
L283-284: This effect is something that has been shown previously in, for example, Mancktelow and Grasemann (1997) and Stüwe et al. (1994). Perhaps it would be worth noting the consistency with earlier work?
L303-304: I would guess that some readers might not immediately understand the point here about the Peclet number and how changes in L affect the exhumation rate estimates. Could you add a sentence or two to clarify why this happens?
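One way to unpack this remark: in a fixed-temperature steady-state geotherm, the near-surface gradient scales with the Peclet number Pe = vL/kappa, so the choice of L directly changes closure depths and hence inferred rates. A minimal numeric sketch (same illustrative geotherm and assumed parameter values as above):

```python
import numpy as np

def surface_gradient(v, L, G0=25.0, kappa=25.0):
    """Near-surface gradient (degC/km) of the steady advective geotherm with
    fixed temperatures at the surface and at depth L (T_L = G0*L):
    dT/dz(0) = G0 * Pe / (1 - exp(-Pe)), where Pe = v*L/kappa."""
    pe = v * L / kappa
    return G0 * pe / (1.0 - np.exp(-pe))

for L in (20.0, 30.0, 40.0):  # thermal model thickness, km
    print(f"L = {L:.0f} km: Pe = {1.0 * L / 25.0:.1f}, "
          f"dT/dz(0) ~ {surface_gradient(1.0, L):.1f} degC/km")
# At the same exhumation rate, a thicker model yields a hotter shallow crust,
# a shallower closure depth, and thus a lower inferred exhumation rate.
```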
Figures
Figures 2 and 3: Would it be possible to produce versions of these figures with a colored contour fill and black contour lines? It may make them easier to read, as the yellow color on a white background is somewhat hard to see. Also, the contour lines in Figure 2 get quite dense along the left side of each panel. Would it be possible to remove some to make it easier to see the remaining lines? Finally, it may be helpful to have a reference gridline for delta h = 0 on the plots.
Figure 4: Would it be possible to include an orogen-scale inset map somewhere in this figure showing the extents of the panels?
Citation: https://doi.org/10.5194/egusphere-2022-888-RC2
- AC2: 'Reply on RC2', Peter van der Beek, 27 Oct 2022
Data sets
Thermochronology dataset for the Himalaya, Schildgen, T. F. and van der Beek, P. A., https://doi.org/10.5281/zenodo.7053115
Model code and software
age2exhume Matlab scripts, van der Beek, P. A. and Schildgen, T. F., https://doi.org/10.5281/zenodo.7053218
Viewed

| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 600 | 251 | 20 | 871 | 64 | 6 | 7 |
Peter van der Beek
Taylor F. Schildgen