This work is distributed under the Creative Commons Attribution 4.0 License.
ITS_LIVE global glacier velocity data in near real time
Abstract. Glaciers and ice sheets cover some 15 million square kilometres of the Earth’s surface, shaping continental landscapes and modifying climate on a global scale. Recent decades of atmospheric and oceanic warming have induced rapid glacier loss worldwide, causing sea level rise, flooding, and changes to Earth’s overall energy balance and water resources. Accounting for the total impact of glacier change requires observations on a global scale, and planning for future change will require improved understanding of the physical controls that govern glacier change. One key factor that dictates glacier and ice sheet loss is change in rates of ice flow, the physics of which remain poorly constrained. Our physical understanding of ice flow can be advanced with high-resolution monitoring of glacier flow in near real time. Automated tracking of glacier flow from space became possible with the launch of Landsat 4 in 1982. Since then, an increasing number of optical and radar satellite sensors have provided a full decade of year-round, global data coverage. This wealth of data introduces new challenges: processing such large and varied data streams efficiently, in a standardized manner, and with low latency. Here we present the NASA MEaSUREs Inter-mission Time Series of Land Ice Velocity and Elevation (ITS_LIVE) global glacier velocity dataset, which is freely available to the public and is currently on major release version 2.0. ITS_LIVE has computed surface velocities from every available image (excluding those with high cloud cover) from Landsat 4 through 9 and Sentinel-1 and Sentinel-2, creating a global glacier velocity record of more than 36 million image pairs dating back to 1982. The ITS_LIVE processing chain automatically performs feature tracking on more than 20,000 image pairs per day, within minutes of image availability, and will soon include data from the Sentinel-1C and NISAR satellites. This paper describes the ITS_LIVE processing chain and provides guidance for working with the cloud-optimized velocity data it produces. All ITS_LIVE velocity data can be accessed freely, without login credentials or any other barriers, through https://its-live.jpl.nasa.gov/.
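As a quick orientation for data users, here is a minimal access sketch, not the project's official client: the Zarr path shown is hypothetical, and the variable and coordinate names (v, mid_date) are assumptions based on the v2 datacube conventions; authoritative cube URLs are catalogued through https://its-live.jpl.nasa.gov/.

```python
import xarray as xr

# Hypothetical cube path; real cube URLs are listed in the ITS_LIVE catalog.
url = ("s3://its-live-data/datacubes/v2/"
       "ITS_LIVE_vel_EPSG3413_G0120_X-3250000_Y250000.zarr")

# Open the Zarr store lazily with anonymous S3 access (requires s3fs);
# data are transferred only when values are actually computed.
dc = xr.open_dataset(url, engine="zarr", storage_options={"anon": True})

# Sample the velocity-magnitude time series at the nearest grid point
# (x and y are in the cube's projected coordinate system, in metres).
speed = dc["v"].sel(x=-3_249_000, y=249_000, method="nearest")
print(speed.sortby("mid_date").to_series().head())
```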
Status: closed
RC1: 'Comment on egusphere-2025-392', Ellyn Enderlin, 28 Mar 2025
Summary: The manuscript describes modifications made to the ITS_LIVE global glacier velocity datasets to increase uptake by the scientific community. Although some data are presented to give an overview of the sorts of insights that can be gleaned from such a vast and comprehensive dataset, the focus of the manuscript is really on how the dataset was created (and is continuously updated) and the creation of higher-level data products. I am happy to see that the dataset will be expanded into the future in near real-time, using NISAR data when available, and that there are numerous metrics included as higher-level data products.
The manuscript is easy to read and very straightforward. I have a few minor comments below that should be addressed, but I have no major recommendations.
Minor Comments:
- lines 35-37: I recommend rephrasing this sentence because it currently anthropomorphizes glaciers a bit too much, albeit unintentionally.
- line 84: I found this to be the most confusing statement in the processing description: “Each new image may pair with up to 35 previous images and create 35 new velocity granules”. Later in the main text and in the appendix you describe velocity pairing in more detail and the focus is already on time separation, not the number of images. Why do you limit the search based on the number of images here? Since you look for up to 35 images, does that really define a maximum time frame (at least for the same path-row)? Please clarify.
- lines 150-151: I do not see how it is possible to generate velocities on a uniform grid without any resampling or interpolation when you are bringing together such different datasets. A detailed explanation of geogrid is outside the scope of this paper but it would be helpful to clarify this statement about geocoding of the autoRIFT outputs so that the process is more transparent for people who use this data product.
- lines 163-171: A small table describing each dataset would be really helpful. The time period of data, band name and wavelength or frequency as appropriate, and spatial resolution are all really helpful parameters to know.
- line 190: What sources are used for reference velocities? Also, you mention that a DEM can be used in autoRIFT when describing HyP3 autoRIFT. Do you use a DEM? If so, from what source(s)? Does the DEM also provide geographic constraints on the search? Please explain.
- line 202: I’d move this sentence to the start of the next paragraph since that paragraph focuses on the differences between optical feature tracking and speckle tracking.
- lines 217-218: Landsat 8 is mentioned twice and one instance has to be a typo.
- line 246: Typo “compositing”
Citation: https://doi.org/10.5194/egusphere-2025-392-RC1
AC1: 'Reply on RC1', Alex Gardner, 01 May 2025
Dear Dr. Enderlin,
Thank you for your time, kind words, and thoughtful review. All your suggestions were great. We’ve addressed them all as follows:
Minor Comments:
• lines 35-37: I recommend rephrasing this sentence because it currently anthropomorphizes glaciers a bit too much, albeit unintentionally.
We agree and have revised the starting sentences of the introduction to be more focused.
• line 84: I found this to be the most confusing statement in the processing description: “Each new image may pair with up to 35 previous images and create 35 new velocity granules”. Later in the main text and in the appendix you describe velocity pairing in more detail and the focus is already on time separation, not the number of images. Why do you limit the search based on the number of images here? Since you look for up to 35 images, does that really define a maximum time frame (at least for the same path-row)? Please clarify.
Yes, this is confusing. The “35 previous images” is simply a function of the allowed time separation between images when forming an image pair. We’ve removed this sentence as it is redundant with later text and is out of place in the “monitoring” section.
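A minimal sketch of the pairing rule described here, assuming an arbitrary time window (the project's actual limit differs and varies by mission):

```python
from datetime import datetime, timedelta

# Illustrative window only, not the project's setting: a newly ingested
# scene pairs with every earlier scene of the same path/row inside the
# window, so "up to 35 previous images" emerges from the window length
# rather than from a hard-coded count.
MAX_SEPARATION = timedelta(days=546)

def candidate_pairs(new_time: datetime, archive_times: list[datetime]):
    """Pair a newly ingested scene with every earlier scene in the window."""
    return [(t, new_time) for t in sorted(archive_times)
            if timedelta(0) < new_time - t <= MAX_SEPARATION]
```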
• lines 150-151: I do not see how it is possible to generate velocities on a uniform grid without any resampling or interpolation when you are bringing together such different datasets. A detailed explanation of geogrid is outside the scope of this paper but it would be helpful to clarify this statement about geocoding of the autoRIFT outputs so that the process is more transparent for people who use this data product.
We agree that our description is not easy to intuit. To address this we’ve replaced the URL to the Geogrid repo with the citation to the paper in which the algorithm is described in detail. We’ve also added an additional sentence that hopefully adds clarity: “This is achieved by centring search chips on a predefined grid, then mapping these locations to native image coordinates, accounting for rotations and distortions between mappings”.
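A hedged sketch of the quoted idea, assuming a GDAL-style affine geotransform for the native image (illustrative only, not the Geogrid implementation):

```python
import numpy as np

def map_to_pixel(x_map, y_map, geotransform):
    """Invert a GDAL-style affine geotransform (x0, dx, rot_x, y0, rot_y, dy)."""
    x0, dx, rot_x, y0, rot_y, dy = geotransform
    a = np.array([[dx, rot_x],
                  [rot_y, dy]])
    col, row = np.linalg.solve(a, np.array([x_map - x0, y_map - y0]))
    return col, row  # fractional pixel position of the chip centre

# Chip centres stay fixed on the output map grid; only their positions are
# mapped into each image, so the imagery itself is never resampled.
```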
• lines 163-171: A small table describing each dataset would be really helpful. The time period of data, band name and wavelength or frequency as appropriate, and spatial resolution are all really helpful parameters to know.
Great idea. We’ve added a new table with this information.
• line 190: What sources are used for reference velocities? Also, you mention that a DEM can be used in autoRIFT when describing HyP3 autoRIFT. Do you use a DEM? If so, from what source(s)? Does the DEM also provide geographic constraints on the search? Please explain.
We’ve added information on the reference velocity: “Our reference velocity is derived from a synthesis of Version 1 MEaSUREs ITS_LIVE Regional Glacier and Ice Sheet Surface Velocities (Gardner et al. 2022), MEaSUREs Version 1 of the Multi-year Greenland Ice Sheet Velocity Mosaic (Joughin et al. 2016), and Version 1 MEaSUREs Phase-Based Antarctica Ice Velocity Map (Mouginot et al., 2019).”
And on the DEM used: “We use the Global Copernicus GLO-30 Digital Elevation Model in our SAR processing.”
• line 202: I’d move this sentence to the start of the next paragraph since that paragraph focuses on the differences between optical feature tracking and speckle tracking.
Good catch, thanks.
• lines 217-218: Landsat 8 is mentioned twice and one instance has to be a typo.
Fixed
• line 246: Typo “compositing”
Fixed
Sincerely,
Alex and co-authors.
Citation: https://doi.org/10.5194/egusphere-2025-392-AC1
RC2: 'Comment on egusphere-2025-392', Anonymous Referee #2, 21 Apr 2025
This manuscript documents the ITS_LIVE global glacier velocity dataset and outlines the processing workflow in detail. The authors use nearly all freely accessible satellite remote sensing data since the 1980s to calculate and track global ice velocity and plan to incorporate NISAR data in the future. The manuscript clearly explains the methodology, current processing framework, and the long-term vision for the dataset. Given the importance of glacier velocity in cryospheric science, this product will be a valuable resource for the community — both for large-scale assessments and as a contextual dataset for localized studies. I look forward to future developments, particularly with the inclusion of NISAR data.
The manuscript is generally well-structured and clearly written. I have only a few minor comments related to clarifications, primarily around abbreviations and some technical aspects of the SAR processing. Addressing these will help improve clarity for a broader readership.
Comments to the Authors:
- The manuscript notes that ITS_LIVE uses both optical and SAR data, but it would be helpful to explain more clearly the advantages of incorporating SAR. Is it for higher temporal/spatial coverage, better performance in cloudy regions, or increased measurement accuracy? A short statement on this would help clarify the role and complementarity of SAR within the dataset.
- Section 3.1.2 (Sentinel-1 processing) is relatively brief compared to the optical processing discussion. For example, while the use of a 21×21 Wallis operator is noted, there may be additional reasons beyond local variability in radar backscatter caused by topography. Is this choice optimized for a particular spatial resolution (e.g., 120 m in this case) or signal characteristic? Additionally, clarifying the resolution differences between the input datasets (optical vs. SAR) and how they are reconciled would improve reader understanding.
- The term “SLC” appears in two different contexts: “Scan Line Corrector failure (SLC-off)” on line 176 and “SLC (Level 1.1)” on line 203. Since the latter often refers to Single Look Complex data in SAR terminology, this could confuse readers. Please consider clarifying the intended meaning in each case.
- Please ensure that all abbreviations are defined on first use, including: MEaSUREs, NISAR, AWS SNS, AWS SQS, and USGS STAC. While many readers may recognize them, others may not. For example, the NISAR acronym is explained in line 425, but it is first mentioned in line 106 — consider moving the definition earlier.
Citation: https://doi.org/10.5194/egusphere-2025-392-RC2
AC2: 'Reply on RC2', Alex Gardner, 01 May 2025
Dear Reviewer,
Thank you for taking the time to review our manuscript and for the kind words. We’ve addressed all your comments in the revised manuscript, which we detail here:
Comments to the Authors:
- The manuscript notes that ITS_LIVE uses both optical and SAR data, but it would be helpful to explain more clearly the advantages of incorporating SAR. Is it for higher temporal/spatial coverage, better performance in cloudy regions, or increased measurement accuracy? A short statement on this would help clarify the role and complementarity of SAR within the dataset.
Good point. We’ve added the following sentence to the start of Section 3.1.2:
"The ITS_LIVE project also includes velocity products derived from Synthetic Aperture Radar (SAR) imagery. SAR imagery has qualities that are valuable for imaging of polar glaciers and ice sheets as retrievals are not obscured by cloud or limited by solar illumination. These capabilities are highly complementary to optical retrievals."
- Section 3.1.2 (Sentinel-1 processing) is relatively brief compared to the optical processing discussion. For example, while the use of a 21×21 Wallis operator is noted, there may be additional reasons beyond local variability in radar backscatter caused by topography. Is this choice optimized for a particular spatial resolution (e.g., 120 m in this case) or signal characteristic? Additionally, clarifying the resolution differences between the input datasets (optical vs. SAR) and how they are reconciled would improve reader understanding.
We’ve added a sentence that points the reader to our previous publication on the SAR processing: “See Lei et al. (2022) for a more detailed description of the Sentinel-1 processing.” We had not made it clear that a more detailed description of the processing has already been published. We’ve also added a new table (Table 1) that lists the characteristics of the source imagery so that it is now easier for the reader to identify differences in input imagery. As for the high-pass filter (21×21 Wallis operator)… unlike optical, SAR-derived velocities are relatively insensitive to the choice of high-pass filter owing to the speckled nature of SAR imagery. The only reason for its application is to protect against possible biases introduced by gradients in brightness due to topography and illumination.
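A minimal sketch of a Wallis-style normalisation over a 21×21 window, as discussed above (an illustrative simplification, not the autoRIFT implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(image: np.ndarray, win: int = 21, eps: float = 1e-6) -> np.ndarray:
    """Normalise each pixel by its local mean and standard deviation."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=win)
    sq_mean = uniform_filter(img**2, size=win)
    std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    # Suppresses broad brightness gradients (topographic shading,
    # illumination) while preserving the local texture used for matching.
    return (img - mean) / (std + eps)
```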
- The term “SLC” appears in two different contexts: “Scan Line Corrector failure (SLC-off)” on line 176 and “SLC (Level 1.1)” on line 203. Since the latter often refers to Single Look Complex data in SAR terminology, this could confuse readers. Please consider clarifying the intended meaning in each case.
Good point. This isn’t ideal. We’ve removed the acronym “SLC-off” as it was defined but not used again. We’ve also added a “Single Look Complex” definition for SLC when it is first used in the description of the radar processing.
- Please ensure that all abbreviations are defined on first use, including: MEaSUREs, NISAR, AWS SNS, AWS SQS, and USGS STAC. While many readers may recognize them, others may not. For example, the NISAR acronym is explained in line 425, but it is first mentioned in line 106 — consider moving the definition earlier.
It looks like we were missing a lot of acronym definitions… we’ve reviewed the paper in full and made sure that the first occurrence of each acronym is preceded by its definition.
Sincerely,
Alex and co-authors
Citation: https://doi.org/10.5194/egusphere-2025-392-AC2