This work is distributed under the Creative Commons Attribution 4.0 License.
Anomalous fading correction in luminescence dating – a mathematical reappraisal
Abstract. Anomalous fading is a power law decay of trapped charge over time that is commonly observed in wide-bandgap semiconductors, leading to age underestimation in luminescence dating if unaccounted for. In this paper, we reappraise the mathematical foundations of luminescence signal correction and introduce two new closed-form analytical expressions for the two most commonly used age correction models, namely that of Huntley and Lamothe [Can. J. Earth Sci. 38, 1093–1106, 2001], and of Kars et al. [Radiat. Meas., 43, 786–790, 2008]. These expressions are amenable to straightforward uncertainty propagation, matching results from Monte-Carlo simulations at a fraction of the calculation cost. Additionally, we explore unorthodox combinations of signal fading and growth pathways, by coupling fading models to signal growth obeying general order kinetics, as well as the one-trap one-recombination center model.
Competing interests: Georgina King is a member of the GChron editorial board.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.
Status: open (extended)
- RC1: 'Comment on egusphere-2025-4186', Anonymous Referee #1, 20 Sep 2025
- CC1: 'Reply on RC1', Georgina E. King, 23 Sep 2025
We thank reviewer 1 for their supportive review.
Citation: https://doi.org/10.5194/egusphere-2025-4186-CC1
- RC2: 'Comment on egusphere-2025-4186', Anonymous Referee #2, 30 Jan 2026
This project is an important step forward in developing closed-form expressions for values and uncertainty propagation for common fading corrections. I imagine this represents a significant amount of work to parse these equations into their present forms. Additionally, I enjoyed having the historical progression of fading expressions laid out clearly. My comments below are less focused on these expressions and more related to the editorial tone of the first half of the manuscript, which I found distracting. I also disagree with the authors' assertion that Visocekas meant to describe tunneling as a power-law phenomenon. Overall, though, this manuscript should be of great interest to the luminescence dating community.
Line edits:
l.27-31: The tone is unnecessarily pejorative. There is no need to characterize the broader community as confused, ignorant, and progress-avoidant with respect to fading phenomena. Nor to discredit efforts to provide transparent fading correction packages, such as those in the R 'Luminescence' package, for example, which implement several published correction routines.
l.38-46: The authors are shadowboxing a bit here. No one in the luminescence dating community is hampered by Wintle failing to develop a mathematical expression for anomalous fading in 1973. And Huntley and Lamothe's statement is true: their g-value correction, as formulated within Huntley and Lamothe (2001), is only appropriate for the linear portion of the dose response curve. In Lamothe et al. (2003), for example, they extend the framework to deal with samples nearing dose saturation, but reiterate that the expressions laid out in the appendix of Huntley and Lamothe (2001) apply only when luminescence intensity is proportional to dose, that is, the linear portion of the dose response curve. At higher doses, intensity is no longer linearly proportional to dose and therefore this correction would be inappropriate. This was echoed in Huntley and Lian (2006): "Huntley and Lamothe (2001) showed how to correct ages for tunneling for samples which are on the low-dose linear region of the dose response, but we do not yet know how to correct ages for older samples which are on the non-linear region of the dose response curve."
"which nevertheless became a legitimate excuse for when corrected ages fall short of their independent constraints." This seems to ignore the subsequent paper, Lamothe et al. (2003), which extended the g-value framework to the laboratory dose-response measurements, their so-called "dose-rate correction equation." On a related note, I am surprised to not see this Lamothe et al. (2003) correction engaged with more thoroughly in this study, as this was one of the approaches advocated by the original others when dealing with high-dose regions. And subsequent comparisons between nearest-neighbor models and g-value models have compared Kars et al. (2008) to Lamothe et al. (2003). For example, in Kars and Wallinga (2009).
l.73-77: The rescaling of kappa is reported in the R Luminescence package documentation for the calc_FadingCorr function, though I agree with the authors that this expression is difficult to find.
l.82-82: What about the semi-analytical solution described in Jain et al. (2012) (doi: 10.1088/0953-8984/24/38/385402)?
l.85-94: Here also I find the tone here to be extraneous. Impugning past workers ("taken as unquestionable truths" "to the best of its authors' understanding") is neither helpful nor appropriate.
l.110-117: I disagree with the authors' assertion that the "proliferation of these faux units in the literature ("% [loss] per decade")...leaves little doubt that "logarithmic decay"...was in fact a misnomer." Visocekas (1985)'s Fig. 2 plainly shows decay that is logarithmic, not power-law.
Maybe one could argue that there is ambiguity in the phrase "% loss per decade." For example, the phrase could be interpreted to mean "% (of original quantity) loss per decade" or "% (of remaining quantity) loss per decade." The authors view the latter concept as more useful, which is fine and the merits of this view should be developed. But based on the description, mathematical expressions and figures of Visocekas (1985), he intended to describe decay as logarithmic and he meant--unambiguously, I think--the former concept.
More critically, redefining kappa as the slope on a power-law plot, rather than the slope on a logarithmic (% remaining vs. log(t/tc)) plot, and then describing this expression as kappa "sensu Huntley and Lamothe, 2001" has the potential to deepen confusion.
l.127-129: "This 'feature' of the model bothered Visocekas (1985)..." I am not convinced that it did bother him. From his p. 524: "The case of large storage duration t which, from equation (3), would lead to negative values for the TL remaining, is the most easy to deal with...At a sufficient long time the holes [within which all nearby traps have been emptied] will start to intersect. When this occurs the centers are no longer independent and the (1/t) law does not apply. The fading rate approaches zero as the spheres fill the entire space and no trapped charge remains."
Based on passages like this, I interpret his view to be that logarithmic decay is a physically meaningful description, but only relevant when recombination centers are independent. It is a fair criticism that this leaves the behavior at long storage times undefined.
l.132: "Eq. (5) is the physical model that Visocekas probably stived for" The physical model described in his papers was of logarithmic decay. If the authors prefer another model, they should argue its benefits. The implication that the authors succeeded where Visocekas failed is gratuitous.
l.140: There is a factor of 100 missing from the definition of g-value.
l.183: Please expand on the meaning of 1.8. Ex., from Jain et al. (2012).
l.205-209: "such a definition is broad enough" I do not follow this statement. Surely D_e / Ddot_lab and t_lab = 30 days are very different numbers in this context? Please clarify.
Eqs 10, 11; 16, 17: These equations are very useful contributions!
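[Editorial note on the comment at l.140: the factor of 100 enters through the conventional g-value definition of Huntley and Lamothe (2001), in which g is the percentage of signal lost per decade of storage time. A minimal statement of that convention, writing κ for the decay rate per natural-log unit of time (the symbol κ is assumed here for illustration):

\[
\frac{L(t)}{L(t_c)} \;=\; 1 \;-\; \frac{g}{100}\,\log_{10}\!\left(\frac{t}{t_c}\right),
\qquad\text{so that}\qquad
g \;=\; 100\,\kappa\,\ln 10 .
\]

Dropping the factor of 100 yields the fractional, rather than percentage, loss per decade.]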
Citation: https://doi.org/10.5194/egusphere-2025-4186-RC2
- CC2: 'Comment on egusphere-2025-4186', Sebastian Kreutzer, 10 Feb 2026
Dear Benny,
Dear Georgina,
Thank you for this interesting manuscript. I think you discuss some important aspects of the rather convoluted realm of fading correction.
I appreciate that you have provided a comprehensive overview and mathematical clarity. I agree that the subject can be daunting and that information is distributed across several articles.
As I am not engaged in a formal review, I will cherry-pick some paragraphs from the document that I found less clear or misplaced.
- The introduction comes across a little rough; to me it reads rather as an accusation of severe negligence in past work instead of a warranted neutral approach. Once the reader has made it past the introduction, the manuscript is much more fun to read; the discussion certainly stands out here.
- On multiple occasions you refer to “grey” literature without ever writing what you have in mind. Perhaps you are referring to particular Excel sheets/programs you came across, or some kind of instructions you consider at odds. As for those “grey” literature/tools, I can say that I have seen many of them in various stages. All of them, however, only tried to implement the peer-reviewed models, and only one I saw had a (minor) flaw. However, I would not consider the circulation of measurement protocols and spreadsheets “black-box” solutions in general. On the contrary, they are usually quite thoroughly inspected by the people who use them. Either way, there are no “theoretical gaps” that led to the spreadsheets and software. There is simply the implementation of proposed models. Nothing more, nothing less.
- You correctly state that the rescaling of g-values with different reference times is somewhat non-obvious in the original article; however, no “reverse-engineering” is required. The code appeared in various of those “grey” literature spreadsheets I came across, and it has also been in the R package ‘Luminescence’ for many years, where the code is fully accessible and transparent.
- Another statement (lines 143 to 144) flags the reluctance to report the t_c value. While I agree that this could be done better, in many cases it simply does not matter because the g-values are reported normalized to 2 days. A situation where somebody would assign ‘ka’ instead of ‘s’ to t_c appears made up, with little practical relevance.
- On a more science-related note, with reference to Fig. 1b, I wonder how big the problem really is. Are samples with fading rates >5 %/decade really something we would suggest using to build robust chronologies?
- When I look at Fig. 4c, it appears that the current Huntley/Kars approach does not perform so badly, and Fig. 5 shows that the differences will likely not change the general outcome and geoscientific interpretation. Better accuracy is always welcome, though. My impression is that you must have come to the same conclusion, given lines 517 to 520 in the discussion.
- What I find more puzzling than the question of which model was used is what you show in Fig. 4a. The real problem there does not seem to be the fit and the subsequent correction, but the scatter of the data for the g-value measurement. To me this is the elephant in the room that should be, at least briefly, addressed and discussed, but this aspect does not receive the appropriate attention in the manuscript (afterthought note: it does indeed seem to bother only me, but not the reviewers).
- There seems to exist an odd understanding of how fading measurements can be carried out, and although your manuscript does not spare the readers its criticism, you leave this out and instead even echo this procedure in Fig. 4. What is the rationale for having short-term fading measurements over only a few hours with a few data points, instead of, for instance, at least five days and more data points? I am aware of the usual explanation that this would block the measurement equipment for too long. But this is just a cheap excuse without merit. On some readers it is simply less convenient to perform than on others, but proper measurement design enables you to measure much longer times without eating up reader time. Certainly, this is not on you, but I wonder whether we put a lot of attention on the post-processing of data instead of on a proper measurement design. Note: this should not give the impression that "all" fading measurements are flawed per se; I just think (and this includes me) that we can do better. See also https://doi.org/10.5194/gchron-2020-3. Although we apparently never managed to get it formally published, the measurement results are nonetheless valid.
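[Editorial note on the g-value rescaling discussed above: the transformation between reference times can be sketched as follows. This is a minimal illustration derived from the linear-in-log(t) fading model, not the actual ‘Luminescence’ implementation; `rescale_g` is a hypothetical helper name.]

```python
import math

def rescale_g(g, tc_old, tc_new):
    """Rescale a fading rate g (% per decade) measured relative to
    reference time tc_old so that it refers to tc_new instead.

    Assumes the linear-in-log(t) fading model
        L(t) = L(t_c) * (1 - kappa * ln(t / t_c)),  g = 100 * kappa * ln(10),
    from which  g_new = g / (1 - (g / 100) * log10(tc_new / tc_old)).
    tc_old and tc_new must share the same time unit.
    """
    return g / (1.0 - (g / 100.0) * math.log10(tc_new / tc_old))

# Example: a g-value of 5 %/decade measured with tc = 2 h,
# renormalized to the conventional 2-day (48 h) reference time.
g_2days = rescale_g(5.0, tc_old=2.0, tc_new=48.0)  # slightly above 5 %/decade
```

The transformation is exactly invertible: rescaling back to the original reference time recovers the input g-value.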
In summary, I think you could make a good manuscript excellent and gain broad reach and appreciation with a few changes that likely won’t take you much time. Once the manuscript is fully published, we will certainly implement your suggestions in ‘Luminescence’. Having said that, thanks once more for the effort and the nice work.
Sebastian Kreutzer, Heidelberg, February 10th, 2026.
Citation: https://doi.org/10.5194/egusphere-2025-4186-CC2
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 648 | 129 | 28 | 805 | 33 | 30 | 33 |
In this manuscript, the authors report on "Anomalous fading correction in luminescence dating – a mathematical reappraisal". In my opinion, this manuscript is very interesting, as it addresses age correction in relation to the anomalous fading effect. The complexity of the expressions is not strictly mathematical but rather based on extended algebra, which makes them both accessible and applicable. I am confident that future readers will enjoy the manuscript and benefit from its content. Therefore, I recommend its publication in its present form.