Array processing in cryoseismology
Abstract. Seismicity at glaciers, ice sheets, and ice shelves provides observational constraints on a number of glaciological processes. Detecting and locating this seismicity, specifically icequakes, is a necessary first step in studying processes such as basal slip, crevassing, and ice-fabric imaging. Most glacier deployments to date use conventional seismic networks, comprising seismometers distributed over the entire area of interest. However, smaller-aperture seismic arrays can also be used; these are typically sensitive to seismicity distal from the array footprint and require fewer instruments. Here, we investigate the potential of arrays and array-processing methods to detect and locate seismicity in the cryosphere, benchmarking performance against conventional seismic-network-based methods. We also provide an array-processing recipe for cryosphere applications. Results from an array and a network deployed at Rutford Ice Stream, Antarctica, show that arrays and networks both have strengths and weaknesses. Arrays can detect icequakes at greater distances, whereas networks outperform arrays for more comprehensive studies of a process within the network extent, owing to greater hypocentral constraint and a smaller magnitude of completeness. We also gain new insights into seismic behaviour at Rutford Ice Stream. The array detects basal icequakes in what was previously interpreted to be an aseismic region of the bed, as well as new icequake observations at the ice-stream shear margins, where it would be challenging to deploy instruments. Finally, we make some practical recommendations for future array deployments at glaciers.
Thomas Samuel Hudson et al.
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2023-657', Andreas Köhler, 01 May 2023
- RC2: 'Comment on egusphere-2023-657', Anonymous Referee #2, 11 May 2023
In this study the authors apply seismic array processing to data recorded on an ice stream in Antarctica. The goal is to investigate the benefit of array processing (plane-wave beamforming) compared with network processing (migration and stacking) for detecting and locating icequakes. I appreciate this study very much, since I believe that array processing has huge potential for analysing complex and signal-rich cryoseismological data sets. To my knowledge, a study comparing array and distributed-network deployments on glaciers has not been done previously, and it is therefore very timely. It can benefit researchers planning future field campaigns on glaciers where the number of sensors is limited for logistical reasons, helping them find the optimal configuration for their research objectives. This is in contrast to recent large-N deployments on glaciers, where multiple processing approaches can be tested after the measurements.
(1) One concern I have is not so much about the methods and results, but about how this work is introduced and set in a broader context. The title alone suggests either a review article or the very first application of array processing for cryosphere monitoring. This is misleading, since several studies have already used arrays in this context for a while, and these works are not referred to in this manuscript. Array processing has been used on Alpine glaciers, in Greenland, in Svalbard, and in Antarctica. Please check the list of references below; not all may be relevant, and there may be even more. Some of these studies used permanent arrays (Antarctica; Svalbard); others used temporary array deployments on or close to glaciers for the purpose of studying cryogenic seismic signals (icequakes, calving events, tremors, ...). Some do classic plane-wave array processing; others apply matched-field processing / general beamforming. The distinction between array processing and network processing in these studies may not always be clear; however, I do believe these studies are related to your topic. Given these previous works and the sole focus on Antarctica in your study, I strongly suggest modifying the title to better reflect the content and contribution of this paper, and also updating the text (mainly the introduction) and references accordingly.
Examples of papers using array processing, beamforming, and FK analysis in cryoseismology:
(2) In Chapter 3.1.1 the array processing pipeline is introduced. Automatic array processing including phase detection by beamforming, phase classification by F-K analysis, and association using back-azimuth constraints, similar to your method, is routinely done at several national seismic data centers and at the IDC. This should be mentioned. See for example Chapter 9.9 in this one: https://doi.org/10.2312/GFZ.NMSOP_r1_ch9
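For readers less familiar with the pipeline being discussed, the core operation — plane-wave delay-and-sum beamforming with a slowness grid search — can be sketched in a few lines of NumPy. This is a toy illustration with made-up array coordinates, sample rate, and slowness grid, not the authors' actual processing chain:

```python
import numpy as np

def beam_power(traces, coords, slowness, dt):
    """Delay-and-sum beam power for one horizontal slowness vector.

    traces:   (n_sta, n_samp) waveforms
    coords:   (n_sta, 2) station easting/northing (km)
    slowness: (2,) horizontal slowness vector (s/km)
    dt:       sample interval (s)
    """
    n_sta, n_samp = traces.shape
    beam = np.zeros(n_samp)
    for tr, xy in zip(traces, coords):
        # plane-wave delay at this station, in samples
        shift = int(round(np.dot(xy, slowness) / dt))
        beam += np.roll(tr, -shift)
    beam /= n_sta
    return np.mean(beam ** 2)

# toy example: a plane wave crossing a 4-station array
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # km
dt = 0.002
true_s = np.array([0.2, 0.1])  # s/km (assumed, for illustration)
n = 500
wavelet = np.exp(-0.5 * ((np.arange(n) - 250) * dt / 0.01) ** 2)
traces = np.array([np.roll(wavelet, int(round(np.dot(xy, true_s) / dt)))
                   for xy in coords])

# grid search over slowness; the true slowness maximises beam power
grid = [(sx, sy) for sx in np.linspace(-0.4, 0.4, 17)
                 for sy in np.linspace(-0.4, 0.4, 17)]
best = max(grid, key=lambda s: beam_power(traces, coords, np.array(s), dt))
print(best)  # close to (0.2, 0.1)
```

Detection then amounts to running this over sliding windows of continuous data and triggering on peaks in the beam power; the best-fitting slowness vector gives the back-azimuth used for association.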
(3) You use the beam power time series P_Z for detecting P waves and P_H for S waves. As far as I understand, the slowness is not used to confirm that a P or S wave has actually been detected, which is usually done in automatic array processing. Have you considered this? Theoretically, S waves can also be triggered by P_Z, and P waves by P_H, depending on the incidence angle of the phase arrivals. Could it, for example, happen that a horizontally arriving P wave leads to a peak in P_H, which you would miss because you are looking only for peaks in P_Z followed by P_H? I acknowledge that your procedure is more sensitive for detecting deep icequakes, which is maybe what you intended. There is also, of course, the ray bending due to the firn layer. Some comments on that would be appreciated.
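The slowness check suggested here can be illustrated simply: a body wave's horizontal slowness cannot exceed the reciprocal of its medium velocity, so the measured slowness magnitude constrains the phase type. The velocities below are assumed nominal ice values, not figures from the manuscript, and the rule ignores firn-layer ray bending:

```python
# Nominal P- and S-wave velocities in ice (assumed values, km/s)
V_P, V_S = 3.8, 1.9

def classify_phase(s_h):
    """Classify a detection from its measured horizontal slowness
    magnitude s_h (s/km). Since s_h = sin(incidence)/v cannot exceed
    1/v, a slowness above 1/V_P rules out a P wave.
    """
    if s_h <= 1.0 / V_P:
        return "P (or a steeply incident S)"
    if s_h <= 1.0 / V_S:
        return "S"
    return "surface wave / other"

print(classify_phase(0.10))  # steep arrival: consistent with P
print(classify_phase(0.40))  # too slow for a P wave: must be S
```

A check of this kind could be applied to each P_Z or P_H trigger to reject detections whose slowness is inconsistent with the assumed phase type.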
(4) I am not sure about the motivation for Chapter 3.1.4. It is not clear where you do time-domain beamforming at this point. You described that you choose to do array processing in the frequency domain, i.e., F-K analysis in continuous data to measure the slowness vector, but nothing is mentioned about time-domain stacking in the detection and location procedure. Please clarify.
(5) I find it difficult to understand why your 3D location procedure fails so clearly, i.e., why all events end up on a vertical line below the array. From the description of the methodology, I had the impression that the synthetic take-off angle and distance PDF should enable reasonable results. Could you add some sort of synthetic resolution test for the 3D method to investigate this? Your approach is, by the way, similar to how teleseismic events can be located using a single array: using 1D velocity models of the Earth, we can predict the horizontal slowness (ray parameter) of a P or S wave at a given distance, and with the back-azimuth we then have the location. Even if your firn model is not perfect, I would expect more spread-out locations.
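The single-array location principle invoked here can be made concrete with a homogeneous-ice toy model (straight rays, no firn layer; the velocity and source depth are illustrative assumptions, not values from the study):

```python
import numpy as np

def locate_from_array(baz_deg, s_h, v=3.8, depth=2.0):
    """Single-array epicentre estimate in a homogeneous medium.

    The horizontal slowness s_h (s/km) fixes the incidence angle via
    s_h = sin(i)/v; an assumed source depth (km) then gives the
    epicentral distance along a straight ray, and the back-azimuth
    (degrees) fixes the direction.
    Returns (east, north) offsets from the array in km.
    """
    i = np.arcsin(np.clip(s_h * v, -1.0, 1.0))  # incidence angle from vertical
    r = depth * np.tan(i)                        # straight-ray epicentral distance
    baz = np.radians(baz_deg)
    return r * np.sin(baz), r * np.cos(baz)

# a 45-degree ray from 2 km depth emerges 2 km north of the array
east, north = locate_from_array(0.0, np.sin(np.pi / 4) / 3.8)
print(east, north)
```

Because distance varies smoothly with measured slowness in such a model, a spread of measured slownesses should map to a spread of distances — which is why events collapsing onto a vertical line below the array is surprising.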
(6) Figure 6: Seeing these waveform examples makes me wonder if there could be many mis-associations in your detection list. Visually it is difficult to identify and relate the indicated P and S arrivals. I acknowledge that the authors have experience in identifying icequakes in that area. However, some evidence that these are not coincidental detections and associations would be appreciated. How confident are you in the array-detected events?
Chapter 1: I would already emphasize here that the set of 16 sensors is used for classic network processing later, add its dimensions in the text, and note that the set of 10 sensors used for array processing is located in the centre of the network. This would better prepare the reader for the comparison later. Side note: the 16 sensors together could also be considered an array for processing longer-wavelength signals, for example regional and teleseismic arrivals.
Line 4: Add calving as another source of cryoseismicity.
Line 100: You do not explicitly write how you measure the take-off angle. It is clear that it must come from the observed horizontal slowness and an assumed P/S-wave velocity in ice, but it would be good to mention this for the reader.
Line 105: How do you obtain the path-averaged velocities?
Caption figure 3: I think this piece of text has to be deleted: “n of an event alysis (FTAN)”
Line 172: The comparison of network and array processing on the array is very interesting. But you could already mention here that the target signals are of course very different. For plane wave array processing you need events at some distance from the array, whereas the migration stacking is expected to work best for events inside the network or at close distance.
Line 181: Here it would be very helpful to distinguish between the two network-processing approaches (all sensors vs. array only). I would assume that network processing outperforming array processing for small magnitudes is mainly due to icequakes detected inside the dense array. This is expected, since array processing needs plane waves, as mentioned above. This discussion is missing and should be added.
Chapter 4.1.1: You give a good explanation for the peaks in the magnitude distribution. If my guess is correct that the missing small events in the array-processing results come mainly from events inside the array, then these peaks could also originate from very close events arriving with non-planar wavefronts. To evaluate this, it would be very helpful to have the sensor locations plotted in Fig. 4c, and also to show a zoom of array-detected events inside the network.
Line 213: double “with”
Chapter 4.2: As you write, the comparison of the methods is difficult because the same QC criteria are not applied. One candidate for QC-ing the array results would be the coherency or semblance of each detected phase. You could increase the threshold to see if the better-constrained events resemble the network locations.
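The suggested coherency QC could be implemented with the standard semblance measure — stack energy over total trace energy for move-out-corrected waveforms. A plain-NumPy sketch (not the authors' pipeline):

```python
import numpy as np

def semblance(aligned):
    """Semblance of move-out-corrected traces, shape (n_sta, n_samp).

    Returns stack energy divided by total trace energy: 1 for
    perfectly coherent traces, roughly 1/n_sta for uncorrelated noise.
    """
    n_sta = aligned.shape[0]
    stack_energy = np.sum(np.sum(aligned, axis=0) ** 2)
    trace_energy = n_sta * np.sum(aligned ** 2)
    return stack_energy / trace_energy

rng = np.random.default_rng(1)
sig = rng.standard_normal(1000)
print(semblance(np.tile(sig, (8, 1))))            # coherent: ~1.0
print(semblance(rng.standard_normal((8, 1000))))  # incoherent: ~1/8
```

Detections whose phase semblance falls below a chosen threshold could then be discarded before comparing array and network locations.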
Fig 5: It looks like you show both the cumulative and non-cumulative distributions, but the axis labels just say “cumulative”.
Line 265: “… icequakes from” ?