the Creative Commons Attribution 4.0 License.
Application of quality-controlled sea level height observation at the central East China Sea: Assessment of sea level rise
Abstract. This study presents a state-of-the-art quality control (QC) process for the sea level height (SLH) time series observed at the Ieodo Ocean Research Station (I-ORS) in the central East China Sea, a unique in-situ measurement in the open sea spanning more than two decades at a 10-minute interval. The newly developed QC procedure, the Temporally And Locally Optimized Detection (TALOD) method, differs from typical procedures in two notable ways: 1) a spatiotemporally optimized local range check based on the high-resolution tidal prediction model TPXO9, and 2) consideration of the occurrence rate of a stuck value over a specific period. In addition, TALOD adopts an extreme event flag (EEF) system to preserve SLH characteristics during extreme weather. A comparison with a typical QC process, satellite altimetry, and reanalysis products demonstrates that the TALOD method provides a reliable SLH time series with few misclassifications. A budget analysis indicates that the sea level rise at I-ORS is primarily caused by the barystatic effect and that the trend differences among observations, satellite data, and physical processes are related to vertical land motion; GNSS measurements confirm ground subsidence of -0.89±0.47 mm/yr at I-ORS. As representative of the East China Sea, this qualified SLH time series enables dynamics research spanning from nonlinear waves of a few hours to decadal trends, along with simultaneously observed environmental variables from the station's air-sea monitoring system. Although the TALOD QC method is designed for SLH observations in the open ocean, it can be applied generally to SLH data from tide gauge stations in coastal regions.
Status: closed
-
RC1: 'Comment on egusphere-2024-3380', Anonymous Referee #1, 05 Dec 2024
Review of “Application of quality-controlled sea level height observation at the central East China Sea: Assessment of sea level rise”
The authors propose a new method, called TALOD, to perform quality control on the sea level height (SLH) measured by the radar located at I-ORS. TALOD detects problems with metadata, out-of-range data, spikes, and stuck data. This new method is compared to the existing IOC method. The quality-controlled data is compared to HYCOM, GLORYS, and ERA5. Thus, TALOD proves to work correctly. The good data is further used to compute sea level rise (SLR), which leads to the conclusion that SLR in this location is due to vertical land movement (VLM).
This manuscript is clear and well written. I appreciate that the authors are adapting the TALOD method for the sea level height observations in open ocean. Also, the type of bad data is even further categorized, which brings much information on how SLH behaves.
Detailed comments:
-Please do an analysis on the tides.
-Please specify the constituents considered in this study.
Fig. 1: characters should be larger.
L250-253: There really should not be so many stuck errors in a modern sensor, is there an explanation for that?
L270: Yes, recurrent spikes make the automatic detection “think” they are good values. Good job, there, solving the issue by computing a local bias.
L420 - HYCOM shows a trend in SLH of -23.86 mm/yr, which is quite high. Please consider whether it is a good model to compare to.
Sec 3.3 Great analysis of the contribution to sea level rise on this site.
Citation: https://doi.org/10.5194/egusphere-2024-3380-RC1 -
AC1: 'Reply on RC1', Jae-Ho Lee, 22 May 2025
- Please do an analysis on the tides.
: Harmonic analysis was conducted on the SLH observations during the well-observed period from March to June 2021. The M2 tide exhibits the largest amplitude of 0.62 m, with a signal-to-noise ratio (SNR) exceeding 10³. This tide is followed by S2 (0.32 m), K1 (0.20 m), N2 (0.16 m), and O1 (0.15 m). The mean amplitude of these primary constituents was 0.28 m, with an average SNR of approximately 3,000, notably higher than that of the remaining 31 constituents with amplitudes under 0.1 m (mean amplitude: 0.01 m, mean SNR: 6.01). We have reflected this tidal analysis result in section 2.1.2.
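For readers unfamiliar with the procedure, such a harmonic analysis reduces to a least-squares fit of sinusoids at known constituent frequencies. The following is an illustrative single-constituent sketch on synthetic data, not the authors' code; the M2 period and the 0.62 m amplitude are taken from the reply above, and the function name and settings are hypothetical.

```python
import numpy as np

# Illustrative sketch: fit one constituent, M2 (period ~12.4206 h),
# by least squares on a synthetic 10-minute SLH series.
M2_PERIOD_H = 12.4206

def fit_constituent_amplitude(t_hours, slh, period_h):
    """Fit a + b*cos(wt) + c*sin(wt); the amplitude is sqrt(b**2 + c**2)."""
    w = 2.0 * np.pi / period_h
    design = np.column_stack([np.ones_like(t_hours),
                              np.cos(w * t_hours),
                              np.sin(w * t_hours)])
    coef, *_ = np.linalg.lstsq(design, slh, rcond=None)
    return float(np.hypot(coef[1], coef[2]))

# Synthetic record: 90 days at a 10-minute interval, M2 amplitude 0.62 m.
t = np.arange(0, 90 * 24, 10 / 60.0)
rng = np.random.default_rng(0)
slh = 0.62 * np.cos(2 * np.pi / M2_PERIOD_H * t - 1.0)
slh += 0.05 * rng.standard_normal(t.size)
amp = fit_constituent_amplitude(t, slh, M2_PERIOD_H)  # close to 0.62 m
```

In practice one would fit all constituents simultaneously (e.g. the 15 listed below) by stacking the corresponding sine and cosine columns into one design matrix.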
-Please specify the constituents considered in this study.
: We have specified the 15 tidal components for this study (M2, S2, N2, K2, 2N2, K1, O1, P1, Q1, Mf, Mm, M4, MN4, MS4, and S1) in the revised manuscript.
-Fig. 1: characters should be larger.
: Acknowledged. We have revised the font size in Figure 1 of the revised manuscript.
- L250-253: There really should not be so many stuck errors in a modern sensor, is there an explanation for that?
: Initially, we attempted to resolve this issue by contacting MIROS, the manufacturer of the rangefinder. They requested access to all related raw data to investigate the cause of these unrealistic errors, but we were unable to provide it due to contractual restrictions. The technicians at the Korea Hydrographic and Oceanographic Agency (KHOA), which manages the ocean research station, suggested that an unstable power supply might cause the errors. In addition, we cannot rule out the possibility that biological floats contribute, at least in part, to this phenomenon, as an increase in stuck errors was observed during warm seasons: spring (16,536), summer (7,985), autumn (3,067), and winter (5,795). This seasonality may indicate an impact of surface-drifting plankton (or other material) on the rangefinder's reflection rate, presumably resulting in these recurrent stuck errors. Assessing this hypothesis would require an in-depth study of the relationship between floating materials and sea level observations, taking into account the station's electrical power level.
- L270: Yes, recurrent spikes make the automatic detection “think” they are good values. Good job, there, solving the issue by computing a local bias.
: Thank you for your comment. These recurrent spikes and stuck values can be falsely classified as real, good data by a classical quality control process. The TALOD approach effectively addresses this issue by calculating non-tidal residuals in order to distinguish real observations from outliers. This improvement reduces false detections and enhances the reliability of observed sea levels.
- L420 - HYCOM shows a trend in SLH of -23.86 mm/yr, which is quite high. Please consider whether it is a good model to compare to.
: HYCOM is a reanalysis dataset that assimilates various observational data, including temperature, salinity, and sea level height, through the NCODA system. This reanalysis has been widely used for both initial and boundary conditions of numerical simulations, as well as for assessing model performance across oceanography and climate studies. When calculating a linear trend, the HYCOM reanalysis yielded an unrealistically strong negative trend of -23.86 mm/yr; this impractical trend is primarily due to the pronounced sea level fall in HYCOM's non-assimilated simulation since 2017. Therefore, in this study, we divided the 2003-2022 period into an assimilation-applied phase (HYCOM-R) and a non-assimilated phase (HYCOM-S) to discuss the relevant issues in HYCOM's performance (L321-L326). Our sea level analysis suggests that the recent HYCOM product is unsuitable for studying circulation and associated water properties, at least in the East China Sea, although in-depth research is needed to assess the comprehensive skill of HYCOM against other reanalysis datasets (e.g., GLORYS12, BRAN2020, and ORAS5).
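The trend figures discussed here can be reproduced with a simple least-squares slope converted to mm/yr. This is a minimal sketch on synthetic data, not the authors' processing chain; the -23.86 mm/yr value is used only to build the test series.

```python
import numpy as np

# Illustrative sketch: linear SLH trend in mm/yr from a daily-mean series.
def linear_trend_mm_per_yr(days, slh_m):
    """Least-squares slope of SLH (m) against time (days), in mm/yr."""
    slope_m_per_day = np.polyfit(days, slh_m, 1)[0]
    return slope_m_per_day * 365.25 * 1000.0

# Synthetic series falling at -23.86 mm/yr over roughly six years.
days = np.arange(6 * 365)
slh = -23.86e-3 / 365.25 * days + 0.7
trend = linear_trend_mm_per_yr(days, slh)  # recovers about -23.86
```

For real records with gaps, the same fit would be applied only to the quality-flagged good data, ideally with an uncertainty estimate on the slope.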
- Sec 3.3 Great analysis of the contribution to sea level rise on this site.
: We appreciate the compliment. Sea level rise is a critical topic linked to climate change. This study aimed to comprehensively analyse not only the sea level trend observed from fixed platforms but also the main contributions to sea level rise, including barystatic and steric effects, as well as vertical land motion. We hope that these findings will contribute to a deeper understanding of regional sea level changes and their implications for climate research.
Citation: https://doi.org/10.5194/egusphere-2024-3380-AC1
-
RC2: 'Comment on egusphere-2024-3380', Anonymous Referee #2, 05 Feb 2025
This study proposes a method of quality control for sea level time series called TALOD, which uses a variety of checks to remove bad data. The study indicates that it performs well against 'IOC' QC methods and uses the resulting good data (averaged to daily means) along with various altimetry and model datasets to infer something about long-term sea level rise and its forcing factors. The various checks seem to be slight variants of IOC methods. However, a fundamental step of QC seems to have been overlooked in TALOD, which is harmonic analysis. This step removes the dominant tidal variability and makes suspect data more easily identifiable from true observations. In addition, it appears that some good data, associated with extreme events, have been removed, which may bias any resulting trends that are inferred. The authors use 20 years of data to infer trends, which is a rather short time series from which to deduce a robust long-term trend. Assumptions are made about vertical land motion, which seem tenuous.
Line 75 – Open ocean tides are generally easier to analyse than those at the coast, where shallow water effects can distort the tide.
Line 152 S 2.2.1 Meta Check – This terminology is confusing. The term 'metadata' seems to be used in a non-traditional way here. What the authors are actually describing are cross-checks between instrumental maintenance records and sea level time series, so I would give this check an alternative name.
Lines 156-163 are confusing. It is stated that there is no maintenance record for the station, but it is then claimed that the sensor was relocated twice and swapped out on another occasion. How did the authors deduce this in the absence of maintenance records?
Line 168 – S 2.2.2 Stuck Check. It is unclear whether this check is performed manually or is automated. In any event, given that step 1 is a manual check, why would these 'stuck data' not be identified during step 1? Figure 5d shows that they are quite obvious.
Line 177 – S 2.2.3 Range Check. I don’t understand why the authors would go to the trouble of using predictions from a global tidal model to identify the tidal range within a given month, to then remove an offset to move the model closer to observations and then smooth the model tide to compare it to observations. Surely it would be far simpler to perform Classical Harmonic Analysis of the tide gauge observations to generate a non-tidal residual time series in which any suspect datapoints will be immediately obvious, because they are not masked by the dominant tidal variability? This is the principle on which conventional QC of sea level time series is built and by omitting this step, the authors are making life much more difficult for themselves and may indeed overlook some suspect data.
Figure 5(b) and (c) It isn’t clear to me that the range or spike check has worked as some of the yellow boxes appear visually to be within range. If they are truly out-of-range, that will be apparent in the non-tidal residual time series, which should be presented in Figure 5 instead of the total water level.
Figure 6 I’m not clear on the purpose of the EEF flag. Are the authors trying to remove real variability that is due to typhoons? Some of those datapoints that are flagged look reasonable but whether or not this is truly the case can only be demonstrated in a non-tidal residual time series.
Line 235 the authors state that they have compared their process with the IOC standard methodology, but they do not provide a reference for the IOC QC regimen that has been used nor do they describe the software that was used to do so. Given that harmonic analysis is a fundamental component of the IOC QC methodology, I can’t see that such a comparison is valid.
Line 260 – Are the authors simply flagging real extreme events as bad?
Line 273 mention automatic QC and it isn’t clear to me whether the TALOD QC method is manual or automatic, nor whether the IOC QC protocol that has been used is an automatic or delayed mode process. I’m not sure that the 2 systems are comparable.
Figure 8 – the results reported to be from the IOC methodology do not look correct to me. I would recommend that the authors consult the IOC manual 'Quality control of in situ sea level observations: a review and progress towards automated quality control, Volume 1' (UNESCO Digital Library). A recent publication which might also help is 'Delayed-mode reprocessing of in situ sea level data for the Copernicus Marine Service' (Ocean Science).
Line 378-392 – The observed VLM at a tide gauge site from the GNSS receiver is a better indicator than differencing altimetry etc., but in any event the observed VLM from GNSS appears to act in the opposite sense to the one the authors have derived.
Citation: https://doi.org/10.5194/egusphere-2024-3380-RC2 -
AC2: 'Reply on RC2', Jae-Ho Lee, 28 May 2025
1. Line 75 – Open ocean tides are generally easier to analyze than those at the coast, where shallow water effects can distort the tide.
Ans: We may have unintentionally given the impression that our main challenge was tidal complexity. Our primary focus is on how to classify error-like outliers in the observed sea level data, rather than on tidal complexity. Unlike coastal tide gauges, which generally provide continuous and reliable measurements, the I-ORS data obtained through the rangefinder contain a considerable number of unrealistic values, including overshooting-like errors, spikes, and a new form of stuck values. These numerous error-like values cause frequent disruptions in the time series, making it difficult to extract consistent tidal components through harmonic analysis. This study aims to develop a quality control framework suited to the error characteristics of rangefinder data in the open ocean, thereby preserving as much qualified data as possible.
2. Line 152 S 2.2.1 Meta Check – This terminology is confusing. The term ‘metadata’ seems to be used in a non-traditional way here. It seems that what the authors are actually describing are cross checks between instrumental maintenance records and sea level time series. I would therefore give this check an alternative name
Ans: Agreed. The term “meta-check” can be confusing to readers; hence, we renamed it “manual check” in the revised manuscript.
3. Lines 156-163 are confusing, There is stated to be no maintenance record for the station, but then it is claimed that the sensor was relocated twice and swapped out on another occasion. How did the authors deduce this in the absence of maintenance records?
Ans: You are right. There were no formal records detailing the relocations, configuration changes, or cleaning of this rangefinder sensor during most of the operating period, except for the last few years. We collected maintenance information through personal discussions with technicians from KHOA and the commissioned company responsible for managing this station. They reported two critical instances of sensor maintenance: first, a change in the data recording method on 12 December 2007, and later a sensor replacement due to malfunction in 2016. These changes were confirmed against the recorded data.
4. Line 168 – S 2.2.2 Stuck Check. It is unclear whether this check is performed manually or is automated. In any event, given that step 1 is a manual check, why would these ‘stuck data’ checks not be identified during step 1? Figure 5 d shows that they are quite obvious.
Ans: The stuck check in TALOD is an automatic process designed specifically to detect a new form of stuck values. The manual check in step 1 was designed to ensure the accuracy of residual component estimation by removing non-realistic, error-like patterns that last over a day; it manually flags only the periods that need to be removed for the next steps in the QC procedure. When attempting to detect the stuck errors in Figure 5d with a manual or typical QC process, either the entire dataset from May 5 to 6 is flagged as erroneous, or this unique type of error goes undetected and tends to be classified as good data. Meanwhile, the newly designed stuck check, an automatic process in the TALOD QC, allows us to retain most of the observed data by successfully flagging these unique stuck values. These results can be found in Figure S1 of the supplementary material.
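The occurrence-rate idea described in the abstract and here can be sketched as follows. This is a hypothetical illustration, not the TALOD implementation; the window length and rate threshold are illustrative values.

```python
import numpy as np
from collections import Counter

# Sketch of an occurrence-rate stuck check: within a sliding window, if one
# repeated value accounts for more than `rate` of the valid (non-NaN)
# samples, flag every occurrence of that value in the window.
def stuck_flags(values, window=30, rate=0.6):
    values = np.asarray(values, dtype=float)
    flags = np.zeros(values.size, dtype=bool)
    for start in range(values.size - window + 1):
        chunk = values[start:start + window]
        valid = chunk[~np.isnan(chunk)]
        if valid.size == 0:
            continue
        value, count = Counter(valid.tolist()).most_common(1)[0]
        if count / valid.size > rate:
            flags[start:start + window] |= (chunk == value)
    return flags

# Synthetic demo: smooth sea level with a 30-sample run stuck at 0.088 m.
series = 0.6 * np.sin(np.linspace(0, 4 * np.pi, 60))
series[20:50] = 0.088
flags = stuck_flags(series)  # flags exactly the stuck run
```

Because only the repeated value itself is flagged, surrounding good observations in the same window are retained, which is the behaviour contrasted with the all-or-nothing manual flagging above.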
5. Line 177 – S 2.2.3 Range Check. I don’t understand why the authors would go to the trouble of using predictions from a global tidal model to identify the tidal range within a given month, to then remove an offset to move the model closer to observations and then smooth the model tide to compare it to observations. Surely it would be far simpler to perform Classical Harmonic Analysis of the tide gauge observations to generate a non-tidal residual time series in which any suspect datapoints will be immediately obvious, because they are not masked by the dominant tidal variability? This is the principle on which conventional QC of sea level time series is built and by omitting this step, the authors are making life much more difficult for themselves and may indeed overlook some suspect data.
Ans: This study aims to develop a quality control procedure that is both applicable to data obtained from the I-ORS rangefinder, which observes sea level with a substantial amount of error-like data, and generalizable to observations from a typical tide gauge in the coastal region. With observations of limited length, coverage, or quality, such as those in this study, researchers often struggle to obtain accurate tidal components through typical harmonic analysis. Practically speaking, we initially adopted the least-squares method and harmonic analysis to process our observations at an early stage; this conventional approach, however, did not yield adequate residuals for a local range check (see the attached supplementary Figures S1 and S2, as well as the response below for a detailed discussion). To address this limitation, we took advantage of a well-validated tide prediction model. Some may point out that relying on the output of an external tidal model, along with a script for extracting tides from it, restricts the standalone operation of this approach. However, the outputs and processing scripts of various (or localized) tidal models are publicly provided and easily accessible, for instance, from https://www.tpxo.net/global of Oregon State University. We believe that using tidal models is a practical alternative to the conventional method in situations like ours, where power supply issues accompany large stuck and spike errors. It is worth noting that this study uses a double-smoothing technique to improve the performance of the spike check that follows the range check. This approach allows us to narrow the threshold to 0.5 m and considerably increases the spike-detection rate by reducing misclassifications caused by frequent overshooting.
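The core of such a model-based local range check can be sketched in a few lines. This is an illustration under stated assumptions, not the TALOD code: the tide below is synthetic (standing in for a TPXO9 prediction), the offset correction uses a simple median, and only the 0.5 m threshold comes from the reply above.

```python
import numpy as np

# Sketch of a local range check: subtract the model tide, remove the local
# offset between model and observation, and flag large residuals.
def local_range_flags(obs, model_tide, threshold=0.5):
    residual = obs - model_tide
    residual = residual - np.nanmedian(residual)  # local offset correction
    return np.abs(residual) > threshold

# Synthetic demo: 30 days at a 10-minute interval with one overshoot error.
t = np.arange(0, 30 * 24, 10 / 60.0)
tide = 0.62 * np.cos(2 * np.pi / 12.4206 * t)  # stand-in for model tide
obs = tide + 0.12                               # datum offset vs. the model
obs[1000] = 5.0                                 # overshooting-like error
flags = local_range_flags(obs, tide)            # only the overshoot is flagged
```

The offset step is what lets an externally predicted tide substitute for a station-specific harmonic fit: any constant datum difference between model and gauge drops out before thresholding.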
6. Figure 5(b) and (c) It isn’t clear to me that the range or spike check has worked as some of the yellow boxes appear visually to be within range. If they are truly out-of-range, that will be apparent in the non-tidal residual time series, which should be presented in Figure 5 instead of the total water level.
Ans: The range check in TALOD is performed on residuals after removing tides; the spike check is likewise performed on the square of the change rate of those residuals. This discrepancy appears to have caused confusion about whether the quality control process was conducted correctly. We checked the non-tidal residual time series and confirmed that the range and spike checks functioned properly. In response to your comment, we have added the residual time series to the supplementary material to aid readers' understanding (see the attached Figure S1).
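A spike check on the squared change rate of the residuals, as described here, can be sketched as follows. This is a hypothetical illustration, not the TALOD implementation; the robust threshold (median plus MAD, scaled by an illustrative factor) is an assumption.

```python
import numpy as np

# Sketch of a spike check: square the sample-to-sample change rate of the
# non-tidal residual and flag points that stand out from a robust spread.
def spike_flags(residual, factor=5.0):
    rate_sq = np.zeros_like(residual)
    rate_sq[1:] = np.diff(residual) ** 2
    med = np.median(rate_sq)
    mad = np.median(np.abs(rate_sq - med))
    return rate_sq > factor * (med + mad)

# Synthetic demo: a smooth residual series with one 1.2 m spike at index 80.
residual = 0.05 * np.sin(np.linspace(0, 6 * np.pi, 200))
residual[80] += 1.2
flags = spike_flags(residual)  # flags the jump into and out of the spike
```

Squaring the change rate makes an isolated spike show up twice (on entry and exit), which is why the check is run on residuals where the dominant tidal slope has already been removed.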
7. Figure 6 I’m not clear on the purpose of the EEF flag. Are the authors trying to remove real variability that is due to typhoons? Some of those datapoints that are flagged look reasonable but whether or not this is truly the case can only be demonstrated in a non-tidal residual time series.
Ans: The purpose of the EEF flag is not to remove data but to inform users that these observations, even those marked with preceding QC flags for spikes and out-of-range values, may still be good data, thereby promoting usability. We manually assigned an extreme event flag to data observed during extreme weather periods; users can then choose the data appropriate for their scientific objectives. In response to the reviewer's comment, we calculated the non-tidal residuals and confirmed that the outliers were flagged by the range and spike checks. Additionally, we have included the residual time series for Figure 6 in the supplementary material (Figure S3).
8. Line 235 the authors state that they have compared their process with the IOC standard methodology, but they do not provide a reference for the IOC QC regimen that has been used nor do they describe the software that was used to do so. Given that harmonic analysis is a fundamental component of the IOC QC methodology, I can’t see that such a comparison is valid.
Ans: We appreciate this valuable comment. The Korea Hydrographic and Oceanographic Agency (KHOA) has conducted quality control on the observational data in accordance with the IOC manuals [1,2] and the NOAA handbook [3]. This study utilized the processed data provided by KHOA, which we refer to as IOC QC. KHOA's methodology is a near-real-time process that follows a real-time automatic QC process. They used only four major tidal components, estimated from long-term sea level data, to calculate tidal residuals, and then flagged residual components exceeding thresholds. We have added the relevant details in Section 3.1 and Table 4. To avoid confusion, we replaced 'IOC' with 'KHOA' in the revised version and included the corresponding references.
[1] IOC, GTSPP Real-Time Quality Control Manual, 1990
[2] IOC, Manual of Quality Control Procedures for Validation of Oceanographic Data, 1993
[3] NOAA, NDBC Handbook of Automated Data Quality Control Checks and Procedures, 2009
9. Line 260 – Are the authors simply flagging real extreme events as bad?
Ans: No. As mentioned above, data observed during extreme events may exhibit large and abrupt variability, but be realistic in-situ observations. Therefore, we manually assigned the Extreme Event Flag (EEF) to provide users with an option to utilize the data based on their scientific objectives.
10. Line 273 mention automatic QC and it isn’t clear to me whether the TALOD QC method is manual or automatic, nor whether the IOC QC protocol that has been used is an automatic or delayed mode process. I’m not sure that the 2 systems are comparable.
Ans: The TALOD method automates procedures except for the “Meta Check (manual check in the revised version)” and the “Extreme Event Flag” as the last step. The KHOA QC process encompasses both manual and automatic processes. In this study, we focused on comparing and analyzing the results obtained by the automatic processes of both QC methods.
11. Figure 8 – the results reported to be from the IOC methodology do not look correct to me. I would recommend that the authors consult the IOC manual of QC Quality control of in situ sea level observations: a review and progress towards automated quality control, volume 1 - UNESCO Digital Library. A recent publication which might also help is here OS - Delayed-mode reprocessing of in situ sea level data for the Copernicus Marine Service
Ans: As recommended by the reviewer, we adopted SELENE's quality control and tide-surge module to examine the results, with a particular focus on the first (Fig. 8a, 8b) and third (Fig. 8e, 8f) cases (see the attached PDF file). The first case was a stuck error in which the values NaN, 0.088, and 0.090 alternated repeatedly; unlike TALOD, SELENE did not detect such errors. The third case corresponds to the range and spike checks: when two or more overshooting values occurred consecutively, SELENE tended to misclassify them or fail to detect them. We confirmed that SELENE performs well for most datasets, but specific cases such as our observations seem to require additional handling. The comparison results of the three QC methods, including SELENE, are provided in supplementary Figure S1.
12. Line 378-392 – The observed VLM at a tide gauge site from the GNSS receiver is a better indicator than differencing altimetry etc, but in any event the observed VLM from GNSS appear to act in the opposite sense to the one the authors have derived.
Ans: All three VLM estimates show negative trends: -2.51 ± 0.62 mm/yr obtained by differencing satellite altimetry and observations, -2.17 ± 0.89 mm/yr from summing the VLM of the physical processes, and -0.89 ± 0.47 mm/yr from the GNSS sensor. These trends consistently reflect subsidence of the I-ORS foundation and do not act in the opposite sense.
Status: closed
-
RC1: 'Comment on egusphere-2024-3380', Anonymous Referee #1, 05 Dec 2024
Review of “Application of quality-controlled sea level height observation at the central East China Sea: Assessment of sea level rise”
The authors propose a new method, called TALOD, to perform quality control on the sea level height (SLH) measured by the radar located in I-ORS. TALOD detects problems with metadata, out of range data, spikes and stuck data. This new method is compared to the existing IOC method. The quality-controlled data is compared to HYCOM, GLORYS and ERAS5. Thus, TALOD proves to work correctly. The good data is further used to compute sea level rise (SLR), which leads to the conclusion that SLR in this location is due to vertical land movement (VLM).
This manuscript is clear and well written. I appreciate that the authors are adapting the TALOD method for the sea level height observations in open ocean. Also, the type of bad data is even further categorized, which brings much information on how SLH behaves.
Detailed comments:
-Please do an analysis on the tides.
-Please specify the constituents considered in this study.
Fig. 1: characters should be larger.
L250-253: There really should not be so many stuck errors in a modern sensor, is there an explanation for that?
L270: Yes, recurrent spikes make the automatic detection “think” they are good values. Good job, there, solving the issue by computing a local bias.
L420 -HYCOM shows a trend in SLH of -23.86mm/yr, which is quite high. Please consider whether it is a good model to compare to.
Sec 3.3 Great analysis of the contribution to sea level rise on this site.
Citation: https://doi.org/10.5194/egusphere-2024-3380-RC1 -
AC1: 'Reply on RC1', Jae-Ho Lee, 22 May 2025
- Please do an analysis on the tides.
: Harmonic analysis was conducted on the SLH observations during the well-observed period from March to June 2021. The M2 tide exhibits the largest amplitude of 0.62 m, with a signal-to-noise ratio (SNR) exceeding 10³. This tide is followed by S2 (0.32 m), K1 (0.20 m), N2 (0.16 m), and O1 (0.15 m). The mean amplitude of these primary constituents was 0.28 m, with an average SNR of approximately 3,000, notably higher than that of the remaining 31 constituents with amplitudes under 0.1 m (mean amplitude: 0.01 m, mean SNR: 6.01). We have reflected this tidal analysis result in section 2.1.2.
-Please specify the constituents considered in this study.
: We have specified the 15 tidal components for this study ¾M2, S2, N2, K2, 2N2, K1, O1, P1, Q1, Mf, Mm, M4, MN4, MS4, and S1¾ in the revised manuscript.
-Fig. 1: characters should be larger.
: Acknowledged. We have revised the font size in Figure 1 of the revised manuscript.
- L250-253: There really should not be so many stuck errors in a modern sensor, is there an explanation for that?
: Initially, we attempted to solve this issue by contacting the MIROS company, the manufacturer of the range finder. Then, they requested access to all related raw data to look into the cause of this unrealistic error. Unfortunately, we are not able to provide this access due to contractual restrictions. The technicians working at the Korea Hydrographic and Oceanographic Agency (KHOA), in charge of managing the ocean research station, suggested that the unstable power supply might cause the errors. In addition, we cannot rule out the possibility that biological floats contribute, at least in part, to this phenomenon, as their increase of stuck errors was observed during warm seasons, specifically spring (16,536), summer (7,985), autumn (3,067), and winter (5,795). This seasonality may indicate an impact of surface-drifting plankton (or material) on the rangefinder's reflection rate, presumably resulting in these recurrent stuck errors. To assess this hypothesis, we require an in-depth study on the relationship between floating materials and their impact on sea level observations with taking into account the electrical power level of this station.
- L270: Yes, recurrent spikes make the automatic detection “think” they are good values. Good job, there, solving the issue by computing a local bias.
: Thank you for your comment. These recurrent spikes and stuck values can be falsely classified as real, good data by a classical quality control process. The TALOD approach effectively addresses this issue by calculating non-tidal residuals in order to distinguish real observations from outliers. This improvement reduces false detections and enhances the reliability of observed sea levels.
- L420 -HYCOM shows a trend in SLH of -23.86mm/yr, which is quite high. Please consider whether it is a good model to compare to.
: HYCOM is a reanalysis dataset that assimilates various observational data, including temperature, salinity, and sea level height, into the NCODA system. This reanalysis data has been widely used for both initial and boundary conditions of numerical simulations, as well as for assessing model performance in the broad fields of oceanography and climate studies. When calculating a linear trend, the HYCOM reanalysis rendered an unrealistically high negative trend of -23.86 mm/yr; this impractical trend is primarily due to the robust sea level falling in the HYCOM’s non-assimilation simulation during the recent period since 2017. Therefore, in this study, we divided the 2003–2022 period into an assimilation-applied phase (HYCOM-R) and a non-assimilated phase (HYCOM-S) to discuss relevant issues in HYCOM’s performance (L321-L326). Our sea level analysis may reflect that the recent data of the HYCOM product is unsuitable for studying circulation and associated water properties, at least, in the East China Sea, although in-depth research is needed to assess the comprehensive skills of HYCOM compared to other reanalysis datasets (i.e., GLORYS12, BRAN2020, and ORAS5).
- Sec 3.3 Great analysis of the contribution to sea level rise on this site.
: We appreciate the compliment. Sea level rise is a critical topic linked to climate change. This study aimed to comprehensively analyse not only the sea level trend observed from fixed platforms but also the main contributions to sea level rise, including barostatic and steric effects, as well as vertical land motion. We hope that these findings will contribute to a deeper understanding of regional sea level changes and their implications for climate research.
Citation: https://doi.org/10.5194/egusphere-2024-3380-AC1
AC1: 'Reply on RC1', Jae-Ho Lee, 22 May 2025
RC2: 'Comment on egusphere-2024-3380', Anonymous Referee #2, 05 Feb 2025
This study proposes a method of quality control for sea level time series called TALOD, which uses a variety of checks to remove bad data. The study indicates that it performs well against ‘IOC’ QC methods and uses the resulting good data (averaged to daily means) along with various altimetry and model datasets to infer something about long term sea level rise and its forcing factors. The various checks seem to be slight variants of IOC methods. However, a fundamental step of QC seems to have been overlooked in TALOD, which is harmonic analysis. This step removes the dominant tidal variability and makes suspect data more easily identifiable from true observations. In addition, it appears that some good data, associated with extreme events, have been removed, which may bias any resulting trends that are inferred. The authors use 20 years of data to infer trends, which is a rather short time series to deduce a robust long-term trend. Assumptions are made about vertical land motion, which seem tenuous.
Line 75 – Open ocean tides are generally easier to analyse than those at the coast, where shallow water effects can distort the tide.
Line 152 S 2.2.1 Meta Check – This terminology is confusing. The term ‘metadata’ seems to be used in a non-traditional way here. It seems that what the authors are actually describing are cross checks between instrumental maintenance records and sea level time series. I would therefore give this check an alternative name
Lines 156-163 are confusing. There is stated to be no maintenance record for the station, but then it is claimed that the sensor was relocated twice and swapped out on another occasion. How did the authors deduce this in the absence of maintenance records?
Line 168 – S 2.2.2 Stuck Check. It is unclear whether this check is performed manually or is automated. In any event, given that step 1 is a manual check, why would these ‘stuck data’ checks not be identified during step 1? Figure 5 d shows that they are quite obvious.
Line 177 – S 2.2.3 Range Check. I don’t understand why the authors would go to the trouble of using predictions from a global tidal model to identify the tidal range within a given month, to then remove an offset to move the model closer to observations and then smooth the model tide to compare it to observations. Surely it would be far simpler to perform Classical Harmonic Analysis of the tide gauge observations to generate a non-tidal residual time series in which any suspect datapoints will be immediately obvious, because they are not masked by the dominant tidal variability? This is the principle on which conventional QC of sea level time series is built and by omitting this step, the authors are making life much more difficult for themselves and may indeed overlook some suspect data.
Figure 5(b) and (c) It isn’t clear to me that the range or spike check has worked as some of the yellow boxes appear visually to be within range. If they are truly out-of-range, that will be apparent in the non-tidal residual time series, which should be presented in Figure 5 instead of the total water level.
Figure 6 I’m not clear on the purpose of the EEF flag. Are the authors trying to remove real variability that is due to typhoons? Some of those datapoints that are flagged look reasonable but whether or not this is truly the case can only be demonstrated in a non-tidal residual time series.
Line 235 the authors state that they have compared their process with the IOC standard methodology, but they do not provide a reference for the IOC QC regimen that has been used nor do they describe the software that was used to do so. Given that harmonic analysis is a fundamental component of the IOC QC methodology, I can’t see that such a comparison is valid.
Line 260 – Are the authors simply flagging real extreme events as bad?
Line 273 mentions automatic QC and it isn’t clear to me whether the TALOD QC method is manual or automatic, nor whether the IOC QC protocol that has been used is an automatic or delayed mode process. I’m not sure that the 2 systems are comparable.
Figure 8 – the results reported to be from the IOC methodology do not look correct to me. I would recommend that the authors consult the IOC manual ‘Quality control of in situ sea level observations: a review and progress towards automated quality control, Volume 1’ (UNESCO Digital Library). A recent publication which might also help is ‘Delayed-mode reprocessing of in situ sea level data for the Copernicus Marine Service’ (Ocean Science).
Line 378-392 – The observed VLM at a tide gauge site from the GNSS receiver is a better indicator than differencing altimetry etc, but in any event the observed VLM from GNSS appear to act in the opposite sense to the one the authors have derived.
Citation: https://doi.org/10.5194/egusphere-2024-3380-RC2
AC2: 'Reply on RC2', Jae-Ho Lee, 28 May 2025
1. Line 75 – Open ocean tides are generally easier to analyze than those at the coast, where shallow water effects can distort the tide.
Ans: We may have unintentionally given the impression that our main challenge was related to tidal complexity. Our primary focus is on how to classify error-like outliers in observed sea level data, rather than on the tidal complexity. The I-ORS data obtained through the rangefinder contains a considerable amount of unrealistic values, including overshooting-like errors, spikes, and a new form of stuck values, unlike coastal tide gauges that generally provide continuous and reliable measurements. These many error-like values cause frequent disruptions in the time series, making it difficult to extract consistent tidal components through harmonic analysis. This study aims to develop a quality control framework suitable for the error characteristics of range finder data in the open ocean, thereby preserving as much qualified data as possible.
2. Line 152 S 2.2.1 Meta Check – This terminology is confusing. The term ‘metadata’ seems to be used in a non-traditional way here. It seems that what the authors are actually describing are cross checks between instrumental maintenance records and sea level time series. I would therefore give this check an alternative name
Ans: Agreed. The term “meta-check” can be confusing to readers; hence, we renamed it “manual check” in the revised manuscript.
3. Lines 156-163 are confusing. There is stated to be no maintenance record for the station, but then it is claimed that the sensor was relocated twice and swapped out on another occasion. How did the authors deduce this in the absence of maintenance records?
Ans: You are right. There were no formal records detailing the relocations, configuration changes, or cleansing for this rangefinder sensor during most periods of operation, except for the recent few years. We collected maintenance information through personal discussions with technicians from KHOA and the commissioned company, responsible for managing this station. They reported two critical instances of sensor maintenance: first, a change in the data recording method on 12 December 2007, and later a sensor replacement due to its malfunction in 2016. These changes were confirmed with the recorded data.
4. Line 168 – S 2.2.2 Stuck Check. It is unclear whether this check is performed manually or is automated. In any event, given that step 1 is a manual check, why would these ‘stuck data’ checks not be identified during step 1? Figure 5 d shows that they are quite obvious.
Ans: The stuck check in TALOD is an automatic process to particularly to detect a new form of stuck values. The manual check in step 1 was designed to ensure the accuracy of residual component estimation by removing non-realistic, error-like patterns that last over a day. This process manually flags only the periods that need to be removed for the next steps in the QC procedure. When attempting to detect stuck errors in Figure 5d by adopting manual or typical qc process, either the entire dataset from May 5 to 6 is flagged as errors or fails to detect this unique type of error, thus tending to classify them as good data. Meanwhile, the newly designed stuck check, an automatic process in TALOD QC, allows us to retain most observed data successfully by flagging these unique stuck values. These results can be found in Figure S1 included in the supplementary material.
5. Line 177 – S 2.2.3 Range Check. I don’t understand why the authors would go to the trouble of using predictions from a global tidal model to identify the tidal range within a given month, to then remove an offset to move the model closer to observations and then smooth the model tide to compare it to observations. Surely it would be far simpler to perform Classical Harmonic Analysis of the tide gauge observations to generate a non-tidal residual time series in which any suspect datapoints will be immediately obvious, because they are not masked by the dominant tidal variability? This is the principle on which conventional QC of sea level time series is built and by omitting this step, the authors are making life much more difficult for themselves and may indeed overlook some suspect data.
Ans: This study aims to develop a quality control procedure that is both applicable to data obtained from the I-ORS’ rangefinder, which observes sea level with a substantial amount of error-like data, and generalizable to observations from a typical tidal gauge in the coastal region. Due to the limited length, coverage, or poor quality of observations such as those in this study, many researchers struggle to obtain accurate tidal components through a typical harmonic analysis. Practically speaking, we initially adopted the least squares method and harmonic analysis to process our observations at an early stage; this conventional approach, however, did not yield adequate residuals for a local range check (see the attached supplementary Figures S1 and S2, as well as the response provided below for a detailed discussion). To address this limitation, we took advantage of a well-validated tide prediction model. Some may point out the drawback of restricting the standalone operation of this approach, as it relies on the output of an external tidal model, as well as a script for extracting tides from the model. However, the outputs and processing scripts of various (or localized) tidal models are publicly provided and easily accessible, for instance, from https://www.tpxo.net/global of Oregon State University. We believe that using tidal models may be a practical alternative to the conventional method, as in our specific situation, where power supply issues accompany large stuck and spike errors. It is noteworthy to point out that this study uses a double smoothing technique to improve the performance of the spike check that follows the range check. This approach allows us to set thresholds more narrow to 0.5 meters and considerably increases the spike-detection rate by reducing misclassifications caused by frequent overshooting.
6. Figure 5(b) and (c) It isn’t clear to me that the range or spike check has worked as some of the yellow boxes appear visually to be within range. If they are truly out-of-range, that will be apparent in the non-tidal residual time series, which should be presented in Figure 5 instead of the total water level.
Ans: The range check in TALOD is performed on residuals after removing tides; the spike check is also performed based on the square of the change rate of those residuals. This discrepancy appears to invoke confusion about whether the quality control process is being conducted correctly. We checked the non-tidal residual time series and confirmed that the range and spike checks were functioning properly. In response to your comment, we have added the residual one into the supplementary material to aid readers’ understanding (see the attached figure S1).
7. Figure 6 I’m not clear on the purpose of the EEF flag. Are the authors trying to remove real variability that is due to typhoons? Some of those datapoints that are flagged look reasonable but whether or not this is truly the case can only be demonstrated in a non-tidal residual time series.
Ans: The purpose of the EEF flag is not to remove data but to provide users with information, thereby promoting its usability by informing them that these observations, even those marked with precedent qc flags for spikes and out-of-range observations, may still be good data. We manually assigned an extreme event flag to data observed during extreme weather periods. Then, the users have the right to choose the data for their scientific objectives. We calculated the non-tidal residuals and confirmed that the outliers were flagged by the range and spike checks in response to the reviewer’s comment. Additionally, we have included the residual time series for Figure 6 in the supplementary material (Figure S3).
8. Line 235 the authors state that they have compared their process with the IOC standard methodology, but they do not provide a reference for the IOC QC regimen that has been used nor do they describe the software that was used to do so. Given that harmonic analysis is a fundamental component of the IOC QC methodology, I can’t see that such a comparison is valid.
Ans: Appreciate this valuable comment. The Korea Hydrographic and Oceanographic Agency (KHOA) has conducted quality control on the observational data in accordance with the IOC manuals1,2 and the NOAA handbook3. This study utilized the processed data provided by KHOA, which we refer to as IOC QC. The KHOA's methodology is for a near-real-time process after a real-time automatic QC process. They used only four major tidal components, which were estimated from long-term sea level data, to calculate tide residuals, then flag residual components that exceed thresholds. And we have added the relevant details in Section 3.1 and Table 4. To avoid confusion, we replaced ‘IOC’ with ‘KHOA’ in the revised version and included the corresponding references.
[1] IOC, GTSPP Real-Time Quality Control Manual, 1990
[2] IOC, Manual of Quality Control Procedures for Validation of Oceanographic Data, 1993
[3] NOAA, NDBC Handbook of Automated Data Quality Control Checks and Procedures, 2009

9. Line 260 – Are the authors simply flagging real extreme events as bad?
Ans: No. As mentioned above, data observed during extreme events may exhibit large and abrupt variability, but be realistic in-situ observations. Therefore, we manually assigned the Extreme Event Flag (EEF) to provide users with an option to utilize the data based on their scientific objectives.
10. Line 273 mentions automatic QC and it isn’t clear to me whether the TALOD QC method is manual or automatic, nor whether the IOC QC protocol that has been used is an automatic or delayed mode process. I’m not sure that the 2 systems are comparable.
Ans: The TALOD method automates procedures except for the “Meta Check (manual check in the revised version)” and the “Extreme Event Flag” as the last step. The KHOA QC process encompasses both manual and automatic processes. In this study, we focused on comparing and analyzing the results obtained by the automatic processes of both QC methods.
11. Figure 8 – the results reported to be from the IOC methodology do not look correct to me. I would recommend that the authors consult the IOC manual ‘Quality control of in situ sea level observations: a review and progress towards automated quality control, Volume 1’ (UNESCO Digital Library). A recent publication which might also help is ‘Delayed-mode reprocessing of in situ sea level data for the Copernicus Marine Service’ (Ocean Science).
Ans: As recommended by the reviewer, we adopted SELENE’s quality control and tide-surge module to examine the results, with a particular focus on the first (Fig. 8a, 8b) and third (Fig. 8e, 8f) cases (see the attached pdf file). The first case was a stuck error in which the values NaN, 0.088, and 0.090 alternated repeatedly. Unlike the TALOD, the SELENE one did not detect such errors. The third case corresponds to the range and spike checks. When two or more overshooting values occurred consecutively, the SELENE tends to result in misclassification or detection failure. We confirmed that the SELENE performs well for most datasets, but a specific case, such as our observations, seems to require additional handling. The comparison results of the three QC methods, including the SELENE, are provided in supplementary Figure S1.
12. Line 378-392 – The observed VLM at a tide gauge site from the GNSS receiver is a better indicator than differencing altimetry etc, but in any event the observed VLM from GNSS appear to act in the opposite sense to the one the authors have derived.
Ans: All three VML estimations do appear negative trends, i.e., obtained by subtracting the difference between satellite altimetry and observations -2.51 ± 0.62 mm/yr, by summing the VLM of processes -2.17 ± 0.89 mm/yr, and from using a GNSS sensor -0.89 ± 0.47 mm/yr. These trends consistently reflect the subsidence of the I-ORS’ ground and do not act in the opposite sense.