the Creative Commons Attribution 4.0 License.
Subauroral Crosstalk in POES/Metop TED Channels
Abstract. Particle measurements from the Polar Operational Environmental Satellites (POES) and their successor, the Meteorological Operational (Metop) satellite program, are widely used for various scientific applications. While most studies focus on the Medium Energy Proton and Electron Detector (MEPED), its low-energy (eV and keV) counterpart, the Total Energy Detector (TED), has received comparatively little attention. However, the recent extension of ionization and climate models to higher altitudes has increased interest in low-energy particle measurements as inputs for atmospheric ionization models.
This study analyzes TED particle data (along with selected MEPED channels) from 2001 to 2018 and demonstrates that, in particular, the TED 0° proton channels – and, to a lesser extent, other TED channels as well as the MEPED proton channels P1, P2, and P3 – are contaminated by energetic electrons at L < 6 (with the exception of TED electron band 4). In some cases, the contaminated fluxes exceed typical auroral flux levels. The affected regions were cross-validated using auroral UV emissions and occurrences of the GNSS-derived S4 index to rule out the possibility that the observed fluxes correspond to real particle precipitation.
Additionally, we established a Kp- and channel-dependent latitude boundary that may serve as a simple cut-off criterion for the contaminated regions. Furthermore, we propose a more general flux correction approach based on background count measurements.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-1256', Anonymous Referee #1, 08 Jul 2025
The authors have presented a work investigating the "cross-talk" (commonly referred to as "contamination" elsewhere in the literature) of the POES TED and MEPED instruments as part of the POES SEM-2 instrument suite. The authors provide qualitative evidence to support the conclusion that the TED instruments suffer from significant contamination in the L < 6 region of the radiation belts.
On the whole I do not have too many major issues with this paper. My biggest gripe concerns the inclusion of the MEPED instruments in the authors' investigations. As far as I can tell, none of the authors' conclusions regarding the MEPED instrument are new or novel. Indeed, most of them can be reached by a careful examination of Yando et al. [2011] and follow-up works such as Whittaker et al. [2014] and Peck et al. [2015]. This is not to diminish the TED-related work, which I think has the potential to be a useful addition to the literature; however, I think the MEPED instrument additions just distract from this and are unnecessary.
I will also note that the authors should take care to carefully read through the paper for spelling and grammar issues, of which there are many. Not enough to decrease the readability of the paper, but enough to be distracting.
The following are a list of relatively minor issues with the paper, in addition to those noted above:
1. Lns 80-81 - The MetOp program was not a successor to the POES constellation; POES was a NOAA initiative, MetOp was EUMETSAT -- they're complementary (indeed, MetOp-A was launched before NOAA-19).
2. Lns 83-84 - While it is true that, averaged over time, the POES satellites offer near-complete local time coverage, I think it is important to highlight the deficiencies in this; namely that there is a blind-spot around 12 MLT. Although not really too important for this paper, I think it is necessary to avoid misinformation regarding the coverage.
3. Lns 96-97 - This claim regarding the 0-deg telescope measuring trapped particles should be cited, for instance Fig 1. of Rodger et al [2010] (10.1029/2008JA014023).
4. The authors' attempt to create differential flux channels from the E1-3 integral channels is not particularly useful. The complex response of the electron telescopes to both electrons and protons makes this qualitative at best, particularly for the "E2-E3" channel. The authors note this later on, but, related to my point above, I do not think that this is particularly novel.
5. Related to the above, it is not clear from the instrumentation section if the authors are using decontaminated MEPED electron flux data or not. Fig 5. of Yando et al. [2011] shows that the E1 and E2 channels are strongly contaminated by roughly >200-300 keV protons, and the E3 by > 400 keV protons. Comparison with the P7 proton channel, which measures >35 MeV protons, is useful for removing solar proton events and the SAA, but not more general proton contamination. It should be noted that, depending on the version of data used, the older .CDF file data was typically roughly corrected for proton contamination, but using pre-Yando methods (.BIN files, from which the CDF data was derived, were not corrected). The newer .nc fluxes are not corrected for proton contamination: specifically, on page 32 of the MEPED Telescope ATBD, it is stated:
"Other factors not considered here that may significantly affect
performance are the intercalibration of the satellites, the degradation of the
sensors, and cross species contamination. These additional factors are
discussed further in section 6.1 but accounting for these issues is beyond the
scope of the current processing level. Users should consider how these
limitations in the data accuracy might impact their use or interpretation of the
data."

If the authors _have_ corrected the electron data, for instance using Ethan Peck's algorithm, they should mention this. Otherwise, any results derived from MEPED are suspect.
6. The authors mention Figure 3 before Figure 2 -- these should most likely be swapped.
7. Figure 4 - in the top left panel, the authors have plotted MEPED 90-deg E3 -- surely this should be the 0-deg channel?
8. Figure 4 - I would request that the authors choose a more colourblind-friendly colour scheme for their plots. Varying shades of blue and red can be very difficult for people with different types of colourblindness to distinguish.
9. Lns 241, 245, etc. - The authors quite often state that values are "significantly" different -- for instance, that the subauroral peak in MEPED 0-deg P1 and P2 is significantly smaller than in the TED protons. Unless the authors have a statistical basis for these claims, they shouldn't use the term "significantly".
10. Lns 261-262 - The authors should be careful attributing any particular measurements from POES as strictly trapped. As can be seen in Figure 1 of Crack et al. [2025] (10.1029/2024JA033158), the vast majority of the time the POES 90 degree detectors are measuring at the very least the drift-loss cone, and often include the bounce-loss cone as well.
11. Lns 268-269 - while it's true that geomagnetic storms can result in decreases in flux, it's not necessarily accurate to say definitively that this always occurs; as Reeves et al. [2003] noted, only roughly a quarter of storms resulted in decreases to radiation belt relativistic electron populations, while roughly half resulted in increases in fluxes.
12. Ln 277 - Saying that any electrons have "higher energy than ... E3" doesn't really make sense -- E3 is an integral channel, and measures all electrons in the >300 keV range. Similarly, Ethan's E4/P6 channel is not the same energy range as E3 -- typically the lower limit is considered to be 500-800 keV (it's a relatively smooth gradient, so it's up to interpretation exactly where it starts).
13. Section 4.4 -- This entire section is essentially just a rehashing of Yando et al. [2011], and is misleading in some parts as well. For instance, the authors make the argument that subtracting, for instance, E2 from E1 gives you a 30-100 keV channel, but this is not strictly true, as E1 and E2 respond differently to contaminating protons. Without removing this contamination (an imprecise art at best), E2-E1 does not give you anything particularly useful. I would recommend removing this section entirely, as it does not add anything novel to the paper.
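The referee's objection in point 13 can be illustrated with a back-of-the-envelope sketch. All numbers and the scalar "geometric factors" below are hypothetical, chosen only to show why differencing two integral channels with unequal proton responses does not isolate a differential electron flux:

```python
# Hypothetical illustration: differencing two integral channels only yields a
# clean differential electron flux if both channels respond identically to
# contaminating protons. All numbers below are made up for illustration.

def observed_counts(j_e, j_p, g_e, g_p):
    """Counts from true electron flux j_e and proton flux j_p, using
    simplified scalar response factors g_e and g_p (hypothetical)."""
    return g_e * j_e + g_p * j_p

# True fluxes (arbitrary units)
j_e_30_100 = 1000.0   # electrons in the 30-100 keV band
j_e_gt_100 = 400.0    # electrons above 100 keV
j_p = 500.0           # contaminating protons

# Both channels see the same electrons above their thresholds, but respond
# differently to protons (0.30 vs 0.10, illustrative values only).
e1 = observed_counts(j_e_30_100 + j_e_gt_100, j_p, g_e=1.0, g_p=0.30)
e2 = observed_counts(j_e_gt_100,              j_p, g_e=1.0, g_p=0.10)

naive_diff = e1 - e2                 # intended: 30-100 keV electrons
proton_leak = (0.30 - 0.10) * j_p    # the part that is actually protons

print(naive_diff)                # 1100.0, not the true 1000.0
print(naive_diff - proton_leak)  # 1000.0, but only if the responses are known
```

The leak term depends on the (poorly constrained) difference of the proton responses, which is the referee's point: without an accurate proton correction, the differenced channel carries an unknown proton contribution.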
To conclude, I think that the authors work regarding the TED portion of the POES SEM-2 instrument suite is useful, and a welcome addition to the literature. I do not think their MEPED work is particularly novel, however, and would recommend rewriting the paper to remove it. Following this, and the addressing of the above points, I think this work should be publishable.
Citation: https://doi.org/10.5194/egusphere-2025-1256-RC1
RC2: 'Comment on egusphere-2025-1256', Anonymous Referee #2, 23 Jul 2025
The paper investigates the contamination of particle measurements from the Total Energy Detector (TED) on POES and Metop satellites by energetic electrons, particularly at subauroral latitudes. The study spans data from 2001 to 2018 and proposes correction methods to address these contamination issues. The authors have looked at the POES/Metop TED channels and the contamination within them, as well as comparing to other inferred data on electron and proton precipitation. Previous work has taken closer looks at the contamination in the MEPED channels, with some work looking at the TED channels. The paper could use some work to become clearer and to better showcase the results. This lessens the impact of this paper, though it still provides useful information to be included in the literature.
Overall comments:
I’m not sure why the authors use the term “crosstalk”. Contamination seems to be a clearer word to describe what is happening. Crosstalk can potentially be confusing to the reader as it may mean that the crosstalk is intentional and intended – which I don’t think is the case for POES/MetOp.
Specifics:
Line 78 – while not very important, it would be nice to see more of an intro for this section.
Line 80 – MetOp was not a successor to POES – POES was from NOAA and MetOp is from EUMETSAT. Though they are very complementary, I believe as planned.
Line 82 – The description of the orbital precession seems a bit misleading. What time period can all local times be covered by a single satellite? How does that compare to different types of events which may cause contamination? Also – was any intercalibration done on the different satellites? Can you use the precession from one spacecraft only to look at the contamination across all the sensors – or can you use data from only one satellite to determine if a sensor on that satellite is contaminated?
Paragraph starting at line 89 – this needs references for the different look directions and the following paragraph as well could use references. The relevant ones are mostly already included in other locations, but the reader would be helped by having them here – or even a figure, or reference pointing to a figure, showing the geometry.
Line 101 – I’m a bit confused. The whole point of the paper seems to be to provide new corrections, but then the authors state that the differential TED bands will not be corrected. I’m certain that I am just missing something here, so hopefully the authors can easily clarify.
Paragraph starting at line 113 – Another reference showing that the authors are using a more up-to-date version of IGRF than 1972 would be great. A figure showing the differences in the assumptions described in this and the following paragraphs would help to make it easier for the reader to follow. Along with this, a discussion about why getting the coordinate system right is important would be beneficial to add. Does this impact the pitch angles assumed to be within the loss cone and thus the likely fraction of the field of view inside or outside of the loss cone? Does this impact what cut-offs and rigidities are determined for SEP events? If we're off by just a bit, how does that change the interpretation of the measurements?
Line 130 – Is Kp the right index to use for this? More motivation for why using Kp instead of Sym-H, solar wind data, or another proxy would be highly beneficial here.
Section 2.4 – Along with the discussions in this section about the derived flux and some of the discussion of its validation, it would be beneficial to include a discussion about the height that the derived fluxes are assumed to be at. The satellite is flying at 850 km, but the derived flux signal may be from 100 km or above. The derived flux is likely from an integrated region over the field of view – thus how does that then relate to the near-point measurement of the satellite as it integrates its counts in a longitudinal sweep over the integration time in the detector? The SSUSI and EISCAT images include integration within both a longitudinal and latitudinal area, whereas the particle detector is closer to a narrow cross section along a longitudinal path (~120 km in 16 seconds at about 800 km altitude).
Line 158 – The sentence stating “Concerning this paper we may just conclude that…” It would be good to know if you did conclude this or not.
Section 2.5 – Thus far into the paper, there is not a discussion about S4. I would add something about comparing to S4 and SSUSI in the last paragraph of your introduction (around Line 70) as you do the Kp index. I would also discuss in here how the S4 index would be used for calibration or identification that there is proton precipitation (thus a more complicated period to determine if there was also electron contamination). As this is also an inferred response, a discussion about how to compare, or why you would compare the two datasets would be beneficial.
Figure 1 – this figure would benefit from having the continents shown. It looks like what is being presented is the slot region and the filling of the slot region as geomagnetic activity becomes more active. It might help to compare how this looks to similar energies observed in the inner magnetosphere, e.g. from the HOPE, RBSPICE, MagEIS, and REPT instruments from the Van Allen Probes depending on the energy range you are focused on.
It would also help to have 0deg in the middle of the plot so that the SAA was not split along the edges. I would also double check the mapping as it seems different than previous studies (but difficult to tell based on how it is currently plotted). See for example Figure 1 in https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2023SW003664
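The replotting the referee suggests – recentering the longitude axis so the SAA is not split at the map edges – can be sketched for a longitude-binned array. The bin width and the placeholder data below are illustrative assumptions, not the paper's actual gridding:

```python
import numpy as np

# Hypothetical sketch: shift a map binned in 0..360 deg longitude so that
# 0 deg sits in the middle of the plot and a feature spanning the date
# line of the original axis (like the SAA near ~300-30 deg) is not split.
n_lon = 72                                # 5-degree bins (illustrative)
lon_edges = np.arange(0, 360, 5)          # bin left edges, 0..355
flux_map = np.arange(n_lon, dtype=float)  # stand-in for binned fluxes

# Roll by half the array so the longitude axis runs -180..175.
shifted = np.roll(flux_map, n_lon // 2)
shifted_lon = np.where(lon_edges >= 180, lon_edges - 360, lon_edges)
shifted_lon = np.sort(shifted_lon)        # now -180, -175, ..., 175

print(shifted_lon[0], shifted_lon[-1])    # -180 175
```

After the roll, the bin that originally sat at 0° longitude is at the center of the array, so plotting `shifted` against `shifted_lon` keeps the SAA contiguous.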
Line 190 – this is likely due to the SAA vs the region of the local bounce loss cone and there are references showing this that should be added to this sentence/section.
Line 201 – I would point the authors to studies for electrons and protons of similar energies observed in the geomagnetic equator related to the inner and outer radiation belts as well as the slot region and how those distributions change (or not) with geomagnetic activity. (e.g. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2022SW003257 or https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2016GL071324) For some of the energies, the local time is also likely important so averaging over local time in your geographic bins may impact how the data is viewed.
Section 4.1 it might be useful for the authors to pull out the times when the satellites are flying through the SAA for the L-shell plots (using a geographic definition instead of just P6 filtering) to separate out those periods vs other periods where there may be contamination. It seems that the authors may have done some of this, but it’s unclear to me if the authors removed the data from the SAA in Figure 4. If the SAA is removed from those figures, knowing how close the enhanced regions are to the SAA would provide an understanding of if the satellite was clipping that region or not and thus if the filtering was just inefficient.
Figure 4 – How much of these differences is from the individual satellites flying through the radiation belts at different times? Are some of the differences due to some satellites sampling the peak in the belts due to a geomagnetic storm and others not quite capturing that time frame? How much time is included in each of these plots? Are you taking both the northern and southern hemispheres together? Are some satellites having a bias between the conditions in the different hemispheres? E.g. Is there more time at <0.7Kp in the northern vs southern hemisphere or over the SAA for the different satellites? I'm assuming this is the average/mean flux value, what are the quartiles and median?
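The per-bin statistics the referee asks for (median and quartiles alongside the mean) are straightforward to compute; the sketch below uses fake log-normally distributed fluxes in hypothetical L-shell bins, purely to illustrate why the mean alone can mislead for skewed flux distributions:

```python
import numpy as np

# Hypothetical sketch: per-L-bin median and quartiles instead of (or
# alongside) the mean flux. The fluxes here are random placeholders.
rng = np.random.default_rng(0)
fluxes_per_bin = {L: rng.lognormal(mean=5.0, sigma=1.5, size=2000)
                  for L in (2.0, 4.0, 6.0)}

for L, flux in fluxes_per_bin.items():
    q25, med, q75 = np.percentile(flux, [25, 50, 75])
    # For skewed (e.g. log-normal) flux distributions the mean sits well
    # above the median, which is why reporting only the mean can mislead.
    print(f"L={L}: mean={flux.mean():.1f}, median={med:.1f}, "
          f"IQR=[{q25:.1f}, {q75:.1f}]")
```

Plotting the median with an interquartile band would directly answer whether the satellite-to-satellite differences in Figure 4 are driven by a few extreme passes or by systematic offsets.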
Line 233 (and Line 244, and 249) – by (non-)Vertical TED proton (electrons) fluxes do the authors mean the integral channels or trapped distribution?
241 – Given that the MEPED and TED channels are looking at different energies, would we expect the response to be the same? What is expected when comparing the different sensors?
244, 249 – Please define Non-Vertical Fluxes. If this is the differential fluxes that weren’t calibrated or had corrections – how does this impact the results here?
Section 4.1 – How do these results compare to observations of trapped or lost populations from other missions/results.
Line 254 – The authors state that the MEPED electron channels are not affected “as well”, but it seems that in the previous paragraph the TED measurements are affected, but possibly not to the thresholds of other electron loss events, but the response is changed. Some clearer language here would be useful.
Paragraph starting line 259 – Would you expect TED and MEPED electrons to respond similarly? They are different energies.
Line 264 – I’m not sure which data set this sentence is discussing.
Line 274 – Radiation belt losses are due to many phenomena beyond hiss and EMIC waves and are dependent on energy of particle they are interacting with, geomagnetic activity, local plasma conditions, the stretch of the tail, plasmapause and magnetopause location as well as dependent on L and MLT, among many other things. Since this study considers a statistical look, I would compare to statistical Van Allen Probe studies and not case studies.
Section 4.3 – The discussion of shielding and what is observed by the TED instrument should go up in the data section as it is relevant to the entire process and any confidence in the data being returned. The paragraph starting on line 287 also makes it seem that a first step would be to work on the intercalibration of the sensors instead of combining them in some of the previous figures.
4.4 – It seems that previous studies have looked extensively at the contamination of the MEPED sensors, it would be beneficial at the start of this discussion if there could be a discussion about those previous results and if this studies results agree or disagree before going into the specifics from this study.
Figure 5. – are the TED proton band also limited to nighttime? I’m assuming S4 is an integrated signal/proxy that was also not filtered by Kp. It would likely be useful to then turn the TED proton band into a similar integrated measurement for a more direct comparison.
I’m not sure what is meant by the wave-like structure – this looks like it might just be the impact from the location of the SAA? It might help to plot this in geomagnetic coordinates instead of geographic.
Paragraph starting in line 339 – This result makes it seem like a better color bar might help with the above result, as the “wave structure” seems to greatly disappear. I would suggest finding a more linear color bar for the middle plot, such as the cividis colormap in Python: https://matplotlib.org/stable/users/explain/colors/colormaps.html
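The referee's colormap suggestion amounts to a one-argument change in matplotlib; here is a minimal sketch using random placeholder data (the file name and grid shape are arbitrary assumptions):

```python
import matplotlib
matplotlib.use("Agg")            # headless backend for scripting
import matplotlib.pyplot as plt
import numpy as np

# Minimal sketch: a perceptually uniform, colorblind-friendly colormap
# (cividis) for a 2D flux map. The data are random placeholders, not
# POES/Metop fluxes.
data = np.random.default_rng(1).random((40, 90))

fig, ax = plt.subplots()
im = ax.pcolormesh(data, cmap="cividis")   # cividis ships with matplotlib
fig.colorbar(im, ax=ax, label="flux (arbitrary units)")
fig.savefig("flux_map_cividis.png", dpi=150)
plt.close(fig)
```

Because cividis is perceptually uniform, equal steps in flux map to roughly equal steps in perceived lightness, so apparent "structure" is less likely to be an artifact of the colormap itself.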
Line 346 – I’m not sure if this would be a “detector artifact” or contamination from the SAA.
Figure 6 and Section 4.5.2 – the labels for the top three panels could be improved for clarity. I’m assuming those are the TED and MEPED. Are they from all the satellites? How does the lack of intercalibration impact this result? If this is all the instruments together, is this an impact from one instrument or one channel/data product? It seems that this is also averaged across longitude. I would also make all the x-axes the same units. How comparable are the expected/inferred energy ranges between SSUSI and TED/MEPED? Are these sensors more sensitive to different energies, so when combining them are all energies likely able to be combined in this fashion?
Section 5. More discussion could be used here. I’m not sure I follow how the conclusion is formed. I believe the authors may be correct that there is cross contamination between the channels, but a deeper discussion instead of pointing to sections would be helpful. What results from those sections are leading to this conclusion?
Citation: https://doi.org/10.5194/egusphere-2025-1256-RC2