This work is distributed under the Creative Commons Attribution 4.0 License.
Evaluating InSAR-derived rates of surface-elevation change along the central U.S. Gulf Coast
Abstract. Interferometric Synthetic Aperture Radar (InSAR) is widely used to monitor surface-elevation change in subsiding coastal regions, but inconsistencies between studies hinder understanding of the processes driving vertical land motion (VLM). Here we compare two recent InSAR datasets from the central U.S. Gulf Coast, which yield similar mean rates (−2.8 ± 2.8 and −3.3 ± 1.8 mm yr⁻¹) but show negligible spatial correlation (R² = 0.05), except in medium to highly developed urban areas (R² > 0.5). Using 41 Global Navigation Satellite System records from adjacent Pleistocene uplands with minimal shallow subsidence and sediment accretion, we find a median VLM of −1.2 mm yr⁻¹, largely driven by glacial isostatic adjustment, a rate higher than previously believed. The InSAR data exhibit larger uncertainties and are presently unable to capture this rate. Given the difficulties InSAR faces in vegetated landscapes, we recommend that vertical velocities below 5 mm yr⁻¹ be interpreted with the utmost caution.
Status: open (extended)
CC1: 'Comment on egusphere-2026-824: Evaluating InSAR derived rates of surface elevation change along the central US Gulf Coast.', Falk Amelung, 11 Mar 2026
AC1: 'Reply on CC1', Guandong Li, 17 Mar 2026
We appreciate the comments by Falk Amelung, which echo several of the concerns that we raised in our manuscript. We agree that comparisons between InSAR solutions derived from different processing approaches should be carried out with the utmost caution, particularly in challenging settings such as densely vegetated environments. This is the main take-away of our paper.
Products such as OPERA and TRE typically apply conservative coherence thresholds and filtering, which prioritize high-confidence pixels but often result in limited spatial coverage in low-coherence areas. In contrast, the approaches used for the two datasets evaluated in our study aim to recover signals in these environments, potentially extending coverage into areas where conventional methods tend to discard pixels. The algorithm used in W24 will, we hope, be released as a software package this summer, which will improve the transparency of this approach and allow others to reproduce the results.
A direct comparison with OPERA is not straightforward because the current OPERA products are provided along the Line-Of-Sight direction rather than in the vertical direction. Additionally, OPERA measures relative change between pixels that is not tied to the same vertical reference frame as used in our study. The TRE vertical dataset may be more comparable; however, its spatial coverage in the vegetated coastal zone of our study area is even more limited.
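To make the LOS-versus-vertical point concrete, here is a minimal sketch of how a vertical rate maps into the radar line of sight. The function, sign convention, and angles below are illustrative assumptions, not taken from OPERA or from the datasets discussed; actual processors define their LOS geometry (and sign) differently.

```python
import math

def vertical_to_los(v_up, v_east, v_north, incidence_deg, heading_deg):
    """Project a 3-D velocity (mm/yr) onto the radar line of sight.

    Sign convention (one common choice; processors differ): positive LOS
    means motion toward the satellite. incidence_deg is the local incidence
    angle; heading_deg is the satellite heading (clockwise from north).
    All conventions here are assumptions for illustration only.
    """
    inc = math.radians(incidence_deg)
    head = math.radians(heading_deg)
    # Unit vector pointing from the ground toward the satellite (e, n, u)
    e = -math.sin(inc) * math.cos(head)
    n = math.sin(inc) * math.sin(head)
    u = math.cos(inc)
    return v_up * u + v_east * e + v_north * n

# Pure subsidence of -3 mm/yr seen at a ~37 deg incidence angle:
v_los = vertical_to_los(-3.0, 0.0, 0.0, 37.0, -13.0)
# |v_los| < |v_up|: only cos(incidence) of the vertical motion is
# visible along the line of sight, one reason LOS and vertical maps
# cannot be compared number-for-number.
```

Horizontal motion enters the LOS with opposite sign on ascending versus descending tracks, which is why a common vertical reference frame (or a shared GNSS calibration) is needed before products can be compared.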
Instead, we use GNSS observations as an independent benchmark. GNSS provides well-established vertical land motion estimates, with long-term velocity uncertainties typically below 1 mm yr⁻¹ at most permanent GNSS stations. This is confirmed by our finding that subsidence due to glacial isostatic adjustment of just over 1 mm yr⁻¹ can be detected by GNSS but not by InSAR. Thus, we use GNSS observations to evaluate the accuracy of the InSAR results rather than comparing different InSAR solutions, which tend to have uncertainties larger than 1 mm yr⁻¹.
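A median benchmark of the kind described above can be sketched in a few lines: the median and a robust spread (the median absolute deviation) limit the influence of the occasional station affected by local effects such as monument motion. The station velocities below are invented for illustration, not the manuscript's data.

```python
import statistics

def median_vlm(velocities_mm_yr):
    """Median vertical rate and median absolute deviation (MAD)
    for a set of GNSS station velocities (mm/yr)."""
    med = statistics.median(velocities_mm_yr)
    mad = statistics.median(abs(v - med) for v in velocities_mm_yr)
    return med, mad

# Hypothetical station velocities (mm/yr), including one local outlier:
rates = [-1.4, -1.1, -0.9, -1.3, -1.2, -5.0, -1.0]
med, mad = median_vlm(rates)
# The -5.0 outlier barely moves the median, unlike the mean,
# which is why a median is a natural regional VLM statistic.
```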
We agree that systematic intercomparisons among different InSAR processing frameworks (e.g., OPERA, TRE, plus newer approaches) would be valuable for the community. However, such an analysis is beyond the scope of the present study. We will revise the manuscript to clarify these points.
Citation: https://doi.org/10.5194/egusphere-2026-824-AC1
CC2: 'Reply on AC1', Falk Amelung, 17 Mar 2026
Thank you for your explanation. I don't follow why the comparison with the OPERA products is difficult. You would of course use your radar line-of-sight displacements for the comparison, not the vertical displacements. If the W24 data and the OPERA data agree at "difficult" InSAR locations like Grand Isle, Port Fourchon, and Venice, we know that the W24 InSAR processing is reliable. If they don't agree, that does not mean the W24 data are wrong; the OPERA products can also have issues. But it would provide some indication of how much confidence to place in the final results. The integration with the GNSS is a different topic which introduces additional uncertainties.
Citation: https://doi.org/10.5194/egusphere-2026-824-CC2
AC2: 'Reply on CC2', Guandong Li, 31 Mar 2026
We agree with Falk Amelung that comparing different InSAR products is important, but a full evaluation of the underlying processing algorithms involves numerous additional factors and is beyond the scope of the present study. We hope that future studies, perhaps conducted by independent InSAR experts, can carry out such in-depth comparisons.
More specifically, even a limited comparison between OPERA and W24 is not straightforward. For example, W24 and O24 presented only vertical deformation maps, while the OPERA maps are in LOS. Furthermore, the LOS definitions differ between radar processors, and OPERA would also need to be recalibrated using the same reference GNSS sites as W24. A comparison between OPERA and O24 would likely require recalibration as well.
Here, we present a targeted comparison between TRE, O24, and W24 focused on the specific region highlighted in your comment (near New Orleans). We visually examine the consistency of spatial patterns where TRE, O24, and W24 points overlap. This simple analysis suggests slightly better alignment between TRE and W24 in the overlapping region, allowing us to address your comment constructively without introducing the broader set of issues involved in a full algorithmic comparison. In addition, the different time window for TRE may also partially explain the observed discrepancies.
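A targeted point-overlap comparison of the kind described here can be sketched as pairing each point in one product with its nearest neighbour in another and correlating the paired rates. The function and the tiny synthetic point sets below are hypothetical illustrations, not the TRE/O24/W24 data.

```python
import math

def pair_and_correlate(points_a, points_b, max_dist):
    """Pair each (x, y, rate) point in A with its nearest neighbour in B
    within max_dist, then return the Pearson correlation of the paired
    rates. Brute-force nearest-neighbour search; fine for small sets."""
    pairs = []
    for xa, ya, ra in points_a:
        best = min(points_b, key=lambda p: (p[0] - xa) ** 2 + (p[1] - ya) ** 2)
        if math.hypot(best[0] - xa, best[1] - ya) <= max_dist:
            pairs.append((ra, best[2]))
    if len(pairs) < 2:
        raise ValueError("too few overlapping points")
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs)
    va = sum((a - ma) ** 2 for a, _ in pairs)
    vb = sum((b - mb) ** 2 for _, b in pairs)
    return cov / math.sqrt(va * vb)

# Two tiny synthetic point sets (x, y, rate in mm/yr), illustration only:
a = [(0.0, 0.0, -1.0), (1.0, 0.0, -2.0), (2.0, 0.0, -3.0)]
b = [(0.1, 0.0, -1.1), (1.1, 0.0, -2.0), (2.1, 0.0, -2.9)]
r = pair_and_correlate(a, b, max_dist=0.5)
# r is near 1: the two synthetic products agree well where they overlap.
```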
RC1: 'Comment on egusphere-2026-824', Anonymous Referee #1, 25 Mar 2026
General evaluation
By analyzing the differences between two InSAR velocity fields in the north-central Gulf of Mexico, this manuscript highlights the reproducibility challenges arising from methodological choices in data processing, including filtering and instrumental, environmental, and geophysical corrections. Such discrepancies have been noted in previous studies. The authors advocate for the development of a harmonized processing framework and for cross-validation exercises within the InSAR community, similar to existing initiatives in other Earth science areas such as climate science (e.g., CMIP for climate models) or geodesy (e.g., the IGS for GNSS), where such efforts have led to the definition of minimum agreed-upon modelling standards. An open question, however, is whether the InSAR community is currently sufficiently mature and organized to mobilize the resources required to establish such a framework.
In the absence of such coordinated international efforts, meaningful inter-comparisons will remain challenging, given the inherent complexity of InSAR processing and the multitude of possible methodological differences. The authors make a commendable effort to identify which discrepancies can be considered negligible in order to enable a meaningful comparison. While I am not fully convinced that all such differences are adequately accounted for, their attempt is valuable and their experience worth sharing.
In this context, I have several concerns regarding the GNSS data used to assess the two InSAR velocity fields. First, it is unclear whether the GNSS observations used for validation are fully independent from those used to calibrate the respective InSAR products. More specifically, are the 41 GNSS stations used in the comparison distinct from those employed to align the InSAR products to their reference frames?
Second, regarding the use of NGL GNSS velocities, it would be useful to assess how these compare with independent solutions, particularly those produced by groups using different processing strategies, software, or orbit and clock products (e.g., not relying on JPL products).
Another point concerns temporal variability and the use of two different time spans (2007-2020 and 2017-2020). The authors conclude that the impact of this difference can be neglected. However, Figure S7 appears to show a systematic bias, with the shorter time span exhibiting larger subsidence rates than the longer one. Unlike the more localized (individual) discrepancies, this overall shift seems significant. It could even be more pronounced if time periods of equal duration were compared (e.g., 2007-2010 versus 2017-2020) and if the underlying geological process is accelerating. I encourage the authors to further investigate and discuss this aspect.
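The reviewer's scenario can be illustrated with a toy calculation: fit linear rates to two equal-length windows of a synthetic, mildly accelerating subsidence history. The quadratic displacement model and all numbers below are invented for illustration and are not derived from the manuscript's data.

```python
def fit_rate(times, values):
    """Ordinary least-squares slope (rate) of values versus times."""
    n = len(times)
    tm = sum(times) / n
    vm = sum(values) / n
    num = sum((t - tm) * (v - vm) for t, v in zip(times, values))
    den = sum((t - tm) ** 2 for t in times)
    return num / den

# Synthetic accelerating subsidence: h(t) = -1.0*t - 0.05*t**2 (mm),
# with t in years since 2007. Purely illustrative numbers.
t = [i / 10 for i in range(131)]              # 2007-2020, 0.1-yr sampling
h = [-1.0 * ti - 0.05 * ti ** 2 for ti in t]

early = [(ti, hi) for ti, hi in zip(t, h) if ti <= 3.0]           # 2007-2010
late = [(ti, hi) for ti, hi in zip(t, h) if 10.0 <= ti <= 13.0]   # 2017-2020
rate_early = fit_rate(*zip(*early))
rate_late = fit_rate(*zip(*late))
# Under acceleration, the late window shows faster subsidence than the
# early one, so equal-duration windows from different epochs can differ
# even more than the overlapping spans compared in the manuscript.
```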
Recommendations for improvement (specific points)
To strengthen the manuscript and improve readability, I suggest addressing the following points (minor).
Table 1 appears both in the main manuscript and in the Supplemental Material.
Figure S1: please consider adding GNSS station identifiers to the map (see data file in Supp.). Given the relatively limited number of stations (41), the map should remain readable while providing useful detail for reproducibility. A similar comment applies to Figure 5, which includes even fewer stations.
Figure S3: confusing; please revisit panels and captions. Please verify the reference frame of the daily GNSS data from NGL. I found indications that they may be aligned to IGS20 rather than IGS14. While differences between these frames are likely negligible compared to other sources of uncertainty, this should be clarified.
Section 3.2: the mean record length is reported, but the median record length would be a more robust and informative statistic.
Concluding remark
Overall, the manuscript is well written, clear, and logically structured, making it easy to follow. The topic and the material are stimulating, and the authors draw on extensive experience and expertise in assessing the respective strengths and limitations of InSAR and GNSS. Their effort to compare products that are inherently difficult to reconcile, due to the complexity of InSAR processing, is commendable. The manuscript provides interesting insights, particularly regarding the land-cover distinction and the specific GIA context, both of which merit further investigation. The statistical analyses are applied rigorously and appear sound overall. I believe the manuscript can be further strengthened by addressing the points raised above, as well as the more detailed comments provided.
Citation: https://doi.org/10.5194/egusphere-2026-824-RC1
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 266 | 160 | 19 | 445 | 62 | 9 | 21 |
CC1: 'Comment on egusphere-2026-824: Evaluating InSAR derived rates of surface elevation change along the central US Gulf Coast.', Falk Amelung, 11 Mar 2026
I am short on time and can't read the entire paper, but since this is an important topic I thought I would add my two cents. Please ignore this if it is addressed in the paper.
I don't think it makes much sense to compare data products from two analysis approaches that are not (yet?) well accepted by the community. Below are two screenshots from two accepted approaches: (1) the OPERA displacement data portal at the ASF (displacement.asf.alaska.edu) and (2) the TRE-processed data (2016-2023) available from NOAA (we have ingested them into our data portal for visualization). These maps don't show much data near the coast. This is expected, as InSAR does not work in this area, with a few exceptions. That the two studies compared in the paper have data in no-data areas is suspicious. I hope there are explanations for this in the original papers -- my apologies that I did not read them in detail. Another possible explanation for the discrepancies is that the authors were too generous with the threshold values used to separate valid from invalid pixels. In contrast, both TRE and OPERA are conservative with threshold selection.
A useful approach would be (1) a comparison between the two reference datasets, and (2) pointing out differences between both O24 and W24 with respect to the reference datasets. If O24 or W24 have credible explanations, this is important to know and will lead to the adoption of their approaches by the community. If there are no satisfactory explanations, the honorable thing for the authors to do would be to qualify their results with a post-publication public letter, or to retract the papers.
I recently reviewed two papers which presented InSAR time series using new analysis approaches that are not (yet) well established. To be publishable, I requested a comparison with well-established data products and explanations for the differences -- if there are any.
In the present case another complication is that the data are referenced to GNSS data. I hope/trust that this is not an independent source of discrepancies between the datasets.