the Creative Commons Attribution 4.0 License.
Post-Processing High-Resolution Ensemble Forecasts for Extreme Rainfall: Short-Term Skill Evaluation over Kyushu, Japan
Abstract. Extreme rainfall in Japan, exemplified by the August 2021 Kyushu event with multiple linear rainbands, continues to cause severe societal impacts, underscoring the need for reliable ensemble rainfall forecasts. This study evaluates how ensemble size, horizontal resolution, and integration period influence forecast skill using the SCALE-RM model. Four ensembles were examined: a coarser-resolution set (S1; 3.2 km, 100 members) and three finer-resolution sets (S2–S4; 800 m, 50 members), all initialized from ERA5 but with different setups and integration periods. Mean Bias (MB) and Quantile Mapping (QM) corrections were applied, and skill was assessed using RMSE, probability maps, ETS, and BS. Before correction, none of the ensembles reproduced the moderate-to-heavy rainfall accumulation in northern Kyushu. S1 produced the lowest RMSE but failed to capture localized maxima, decayed rainfall too early, and missed the second peak on 12 August. After correction, performance diverged. S1 showed noticeable improvement, producing moderate-to-high rainfall values in the northern region, though peak intensities remained slightly underestimated. S4 showed the strongest enhancement, successfully generating the extreme rainfall intensities in the rainband core and closely matching observations, indicating that its systematic biases were effectively removed. Overall, the findings demonstrate that high resolution alone does not guarantee improved skill; ensemble size and robust post-processing are equally critical. These insights inform both operational forecasting and controlled weather-modification experiments.
Status: open (until 15 Apr 2026)
RC1: 'Comment on egusphere-2026-1052', Anonymous Referee #1, 10 Mar 2026
AC1: 'Reply on RC1', Magfira Syarifuddin, 19 Mar 2026
- Regarding the integration period used in the simulation, we thank the reviewer for this insightful comment. We agree that the 14 August period represents the peak hazard phase. However, the focus on 11–12 August is linked to the original weather-intervention experimental design, which targets the developmental stage of the system. For backbuilding linear rainbands, this earlier phase captures the initiation and organization processes, where the system is more responsive to modification and where forecast improvement is most critical. Therefore, this study evaluates post-processing performance during the stage most relevant for both predictability and intervention feasibility. We will clarify this rationale in the Introduction and add a discussion on the applicability and limitations of the framework under more extreme conditions (e.g., 14 August) in the revised manuscript.
- Regarding the random perturbation method, we acknowledge that the random perturbation (RP) approach is relatively simple compared to more advanced methods such as ensemble Kalman filter (EnKF)-based perturbations. However, RP is widely used in convection-permitting ensemble studies and provides a practical baseline for evaluating ensemble behavior (e.g., Raynaud and Bouttier, 2016; Duc et al., 2021). In this study, the use of RP allows us to assess how much value can be recovered through post-processing, even from a simple ensemble configuration. We will clarify this point and discuss the limitations of the perturbation method and short evaluation period, including the need for multi-event validation, in the revised Discussion.
- Regarding comment #3 about spatial displacement and the training sample:
- We agree that spatial displacement is a key source of error in convective rainfall forecasts. To address this, we will incorporate a centroid-based displacement metric in the revised analysis to quantify positional errors.
- We agree that the training sample size is an important factor in bias-correction performance. While the temporal duration of the training period is relatively short, the effective sample size in this study is substantially increased by the combination of high spatial resolution, high temporal resolution, and multiple ensemble members. For example, for a 24-hour simulation at 10-minute resolution, the training dataset (60%) contains on the order of 10⁷ rainfall samples across the model domain. Even for the hourly datasets, the number of samples remains on the order of 10⁶. These sample sizes are comparable to or exceed those used in recent studies employing more complex methods, such as the deep-learning-based post-processing approach of Liu et al. (2023), which used datasets of similar magnitude. Therefore, although the training period is short in duration, the large number of available samples allows for stable estimation of distributional relationships for event-based bias correction. However, we acknowledge that the limited temporal diversity may affect the robustness of extreme quantile estimation, and we will clarify this limitation in the revised manuscript.
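The sample-size argument above can be checked with back-of-envelope arithmetic; the grid dimensions and member count below are illustrative assumptions for this sketch, not the paper's actual domain:

```python
# Rough count of training samples for event-based bias correction.
# Grid size and member count are illustrative assumptions.
steps = 24 * 6                   # 10-minute time steps in a 24-hour simulation
train_steps = int(steps * 0.6)   # 60% of time steps reserved for training
members = 50                     # finer-resolution ensemble size (S2-S4)
grid_points = 200 * 200          # hypothetical horizontal grid

n_samples = train_steps * members * grid_points
print(f"{n_samples:,} training samples")
```

Even this modest hypothetical domain yields well over 10⁷ samples before any filtering to rainy points, which is consistent with the order-of-magnitude claim above.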
- Regarding AMeDAS data, we thank the reviewer for this comment. In Section 2.2, rainfall from AMeDAS stations was averaged over the target domain using a simple arithmetic mean. Given the high station density in the study region, this approach provides a reasonable representation of regional rainfall. We will clarify this point in the revised manuscript. Regarding the suggested references, while they are valuable, they fall outside the scope of atmospheric convective rainfall and ensemble forecasting. We will instead strengthen the Introduction with recent studies directly relevant to convective-scale rainfall prediction and post-processing.
- We agree that threshold dependence is an important aspect of post-processing evaluation. In our results, performance decreases at higher thresholds, which is consistent with increased sensitivity to spatial displacement errors in convective systems. We will expand the discussion to clarify this behavior and its implications for extreme rainfall prediction. Additionally, we will further elaborate on the balance between bias correction and signal retention in the context of weather-modification applications.
- We thank the reviewer for this suggestion. To better illustrate ensemble behavior, we will include additional visualization (e.g., spaghetti plots) to assess the impact of post-processing on ensemble spread. Mean Bias (MB) and Quantile Mapping (QM) were selected due to their computational efficiency, interpretability, and suitability for event-based datasets with limited training samples. In contrast, more advanced methods such as EMOS or machine learning approaches typically require large multi-event datasets, which are difficult to obtain for convection-permitting ensemble simulations due to their high computational cost. We will clarify this methodological choice and suggest that future work could explore more advanced approaches using larger datasets.
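As a minimal grid-point sketch of the two corrections discussed above (using an empirical interpolation-based quantile map rather than the paper's exact isotonic-regression implementation; the rainfall arrays are synthetic):

```python
import numpy as np

def mean_bias_correct(fcst, train_fcst, train_obs):
    """Subtract the mean forecast-minus-observation bias estimated on the training split."""
    return fcst - (train_fcst.mean() - train_obs.mean())

def quantile_map(fcst, train_fcst, train_obs):
    """Map forecast values onto the observed distribution via matched empirical quantiles."""
    q = np.linspace(0.0, 1.0, 101)
    fq = np.quantile(train_fcst, q)   # forecast quantiles (non-decreasing by construction)
    oq = np.quantile(train_obs, q)    # observed quantiles
    return np.interp(fcst, fq, oq)    # piecewise-linear transfer function

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, 5000)          # synthetic "observed" rainfall
fcst = 0.5 * rng.gamma(2.0, 2.0, 5000)   # forecast with a dry amplitude bias
corrected = quantile_map(fcst, fcst, obs)
```

After mapping, the corrected distribution closely matches the observed one, illustrating why QM corrects amplitude bias but cannot relocate misplaced rainfall.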
Citation: https://doi.org/10.5194/egusphere-2026-1052-AC1
RC2: 'Comment on egusphere-2026-1052', Anonymous Referee #2, 14 Mar 2026
This paper presents a post-processing study of high-resolution ensemble rainfall forecasts for the August 2021 Kyushu extreme rainfall event (linear rainbands), investigating the effects of resolution, ensemble size, and integration period on forecast skill. The research has certain scientific value and practical significance, but improvements are needed in methodology description, result presentation, and discussion depth.

Major Comments
- The paper claims this is "the first study to apply bias correction directly to 10-minute to hourly high-resolution ensemble rainfall forecasts," but the literature review insufficiently covers related work. In particular, recent applications of machine learning methods in short-term forecast post-processing (such as LSTM, diffusion models, etc.) are not mentioned. It is recommended to supplement the latest literature on convective-scale ensemble forecast post-processing from 2020-2024, especially studies targeting short-duration heavy rainfall. The advantages and disadvantages of this paper's methods compared to existing methods (such as EMOS, deep learning methods) should be clarified. Meanwhile, the innovation boundaries of this research should be more clearly defined in the introduction.
- Table 1 shows that S1 (3.2 km, 100 members, 73 hours) and S2-S4 (0.8 km, 50 members, 24-48 hours) vary simultaneously in three dimensions: resolution, ensemble size, and integration length, creating a confounding effect. It is recommended to add control experiments or adopt more systematic experimental designs (such as orthogonal design) to separate the effects of each factor. The limitations of this design on the conclusions should be clearly stated in the discussion section. Additionally, it is recommended to supplement analysis comparing different resolutions within the same integration period (e.g., S2 vs. a hypothetical 3.2 km/24 h configuration).
- The methodology section can be improved in three aspects. (1) Insufficient training samples: S2 has only about 14 hours of training data (page 9). Is this sufficient for tail correction of extreme rainfall? Figure 8 shows that BS converges to raw values at thresholds >4 mm h⁻¹, which may be related to this. (2) Assumption of quantile mapping: The use of isotonic regression assumes a monotonic relationship between model and observation quantiles, but this assumption may not hold for heavy rainfall dominated by displacement errors (such as the linear rainband position bias in S2). (3) Spatial correlation treatment: The current method is applied independently at each grid point without considering spatial correlation, which may produce unphysical spatial structures when dealing with organized convective systems.
- Section 4.3 discusses the baseline issue for weather modification experiments, which is valuable, but there is a logical leap: the text states that "observational baseline cannot be obtained during modification trials," yet simultaneously suggests using "operational radar climatology" for post-processing. The contradiction between these two points is not resolved. It is recommended to clarify how to distinguish model bias from artificial influence effects without an observational baseline. Is it necessary to construct a "pseudo-control" method?
Minor Comments
- Figure 4 shows that S2 exhibits a peak exceeding 30 mm h⁻¹ during 12:00-15:00 UTC on the 12th, far exceeding observations. However, the text does not analyze whether this is a systematic bias or an outlier from specific members. It is recommended to supplement ensemble spread analysis.
- Although Figure 3 shows that the spatial distribution of S4-MB+QM is closest to JMA-R, Figure 7's RMSE shows that S4 still has a significant southwest-northeast high-error band. An explanation is needed for why spatial structure improves while displacement errors persist.
- "Senjō-kōsuitai" and "linear rainbands" are used interchangeably. It is recommended to unify them or clarify their correspondence.
- Figure 6 lacks a color scale description ("Δ probability" ranges from -1 to 1 but has no unit description).
- Some references (such as Hiraga et al., 2025) appear to be in preprint status and need confirmation of final publication information.
- In Figure 1, it is recommended to add topographic height shading to illustrate the relationship between linear rainbands and terrain.
- In Figure 3, the JMA-R reference maps for the four ensemble groups appear repeatedly and could be merged into a single reference panel.
- For Figures 8-9, it is recommended to add shading or error bars to represent ensemble member spread.
- On page 3, "COUPLE scheme" first appears without its full name or a reference, which is unfriendly to non-specialist readers.
- On page 6, the symbol notation in formula (2) is garbled; "X+&(")" should be "X̂".
- The text uniformly uses "5 mm h⁻¹" as the significant rainfall threshold, but the core value of extreme rainfall forecasting lies in higher intensity thresholds (such as 20 mm h⁻¹, 50 mm h⁻¹). It is recommended to supplement comparisons of BS and ETS at different rainfall thresholds (such as 1 mm h⁻¹, 10 mm h⁻¹, 20 mm h⁻¹) to verify whether the post-processing method's correction effect on extreme rainfall is robust.
- The text mentions that "S2-S4 have structural errors such as rainband tilt and overly wide convective cores," but this is supported only by qualitative descriptions (such as spatial distribution in Figure 3) without quantitative metrics. It is recommended to supplement quantitative structural error analysis, such as using indicators like "rainband orientation angle bias," "convective core width bias," and "rainband position offset distance" to clarify structural error differences among different ensemble configurations.
- The text mentions using "60% of time steps as the training set," but does not explain whether the training set covers different development stages of the rainband (genesis, maturity, decay). Due to the strong spatiotemporal heterogeneity of linear rainbands, training data from a single stage may lead to correction bias. It is recommended to supplement the temporal distribution characteristics of the training set (such as rainfall intensity proportions in different periods) and verify whether the training set covers the full lifecycle of the rainband.
- The text mentions "applying ±3% random perturbations to specific humidity below 600 m," but does not explain the basis for selecting this perturbation magnitude (such as whether it is based on ERA5 reanalysis humidity uncertainty assessment or reference to similar studies). It is recommended to supplement sensitivity analysis of perturbation magnitude or explain how this setting reasonably represents key uncertainties in convective initiation.
- The text points out that "high resolution + small ensemble easily leads to structural errors, while low resolution + large ensemble easily leads to amplitude errors," but does not deeply analyze the physical essence. It is recommended to combine the theory of convection-permitting modeling (such as uncertainty sources in convection-permitting models) to explain why 50 ensemble members are insufficient to support the structural stability of 0.8 km resolution, and to quantify the balance between the convective detail gained from resolution improvement and the uncertainty lost from insufficient ensemble size.
- Page 8: The rationale for selecting the evaluation period (03:00-12:00 UTC 12 August) should be more fully explained, as this only covers part of the event.
Citation: https://doi.org/10.5194/egusphere-2026-1052-RC2
AC2: 'Reply on RC2', Magfira Syarifuddin, 23 Mar 2026
We thank the reviewer for the valuable feedback. Following are our responses to each comment.
Major comments:
- Regarding the novelty: We thank the reviewer for this important comment. We agree that the novelty and positioning of this study should be clarified with respect to recent developments in ensemble post-processing, including machine learning (ML) approaches.
In the revised manuscript, we will expand the literature review to include recent studies (2020–2024) on convective-scale post-processing, including ML-based methods such as LSTM and related approaches (e.g., Liu et al., 2023), as well as statistical methods such as EMOS and BMA.
We will also revise the novelty statement to avoid overly broad claims and more precisely define the contribution of this study. Specifically, while many existing studies focus on hourly or coarser temporal resolutions or require large multi-event datasets for training, this study examines bias correction applied to convection-permitting ensemble rainfall forecasts at a sub-hourly temporal resolution (10 min) and a kilometer-scale spatial resolution over a large domain.
Furthermore, the use of relatively simple methods (MB and QM) is intentional, as they provide a computationally efficient and interpretable framework suitable for event-based applications with limited training data. This study provides a baseline for complexity, and therefore acts as a complement, rather than competes with, more data-intensive ML approaches.
This is consistent with recent studies (e.g., Dhawan et al., 2024), which show that while machine learning methods may perform well during training, their performance can degrade in testing when sufficient observational data are not available, particularly at high temporal resolutions. This supports the use of robust statistical approaches such as QM for limited, event-based datasets.
- Regarding the recommendation to conduct control experiments: We thank the reviewer for this important observation. We agree that the ensemble configurations (S1–S4) vary simultaneously in resolution, ensemble size, and integration length, which introduces a degree of confounding when interpreting the role of each factor.
However, the configurations are inherited from the original experimental design and reflect realistic trade-offs in convection-permitting ensemble forecasting. They represent practical configurations for their respective lead times, given the computational constraints faced in real-time disaster prevention.
The aim of this study is not to isolate the individual effects of each factor, but to evaluate post-processing performance under these practical configurations. This approach is consistent with previous studies where post-processing is applied to ensemble outputs at fixed configurations or lead times (e.g., Gneiting et al., 2005; Gebetsberger et al., 2018; Allen et al., 2020; Roberts et al., 2023).
We will clarify this scope and explicitly acknowledge this limitation in the revised manuscript. Conducting additional control experiments is beyond the scope of the present study and will be considered in future work.
- Regarding the methodology improvement: We thank the reviewer for the detailed and constructive comments on the methodology. We address each point below.
- Training sample size: We agree that the duration of the training period (e.g., ~14 hours for S2) is relatively short and may limit the robustness of extreme quantile estimation. However, the effective sample size is substantially increased by the high spatial resolution, temporal frequency, and ensemble size, resulting in O(10⁶–10⁷) samples in a spatiotemporal sense within the domain. This is comparable to or larger than datasets used in recent post-processing studies (e.g., Liu et al., 2023).
We acknowledge that limited temporal diversity may reduce performance at high thresholds, consistent with the BS convergence observed. We will also emphasize that the current results are specific to this event and should not be generalized to all linear-rainband cases occurring in Kyushu. These limitations will be clarified in the revised manuscript.
- Assumption of quantile mapping: We agree that QM assumes a monotonic relationship between modeled and observed rainfall distributions, which may not hold under strong displacement errors. In this study, this limitation is reflected in the reduced effectiveness of MB+QM for ensembles with structural errors (e.g., S2–S3), where positional uncertainty dominates. We will clarify that QM primarily corrects amplitude bias and is less effective when spatial misalignment is the dominant error source.
- Spatial correlation treatment: We acknowledge that the current approach applies correction independently at each grid point and does not explicitly account for spatial correlation. However, this grid-based framework is widely used in rainfall post-processing, where observations (e.g., rain gauge networks) are often interpolated to gridded products prior to bias correction.
In this study, the focus is on correcting intensity distributions rather than reconstructing spatial coherence. We will clarify this limitation and note that spatially consistent approaches (e.g., neighborhood or object-based methods) represent an important direction for future work.
- Regarding the baseline issue in the weather modification experiment: We thank the reviewer for this insightful comment. We agree that the original text did not clearly distinguish between the role of observations in the calibration phase and their availability during modification experiments.
In this study, observational data (JMA radar) are used only for offline bias correction under natural conditions. During actual weather-modification experiments, however, the true unmodified rainfall (counterfactual baseline) is not observable, and therefore direct application of observation-based post-processing is not feasible.
We will revise Section 4.3 to clarify this distinction and emphasize that the use of radar climatology is limited to bias estimation prior to the experiment. For operational application, alternative strategies such as a pseudo-control framework (e.g., multi-case baseline or dual-ensemble approach) are required to separate model bias from artificial intervention signals.
Minor Comments:
- Systematic bias of S2 in Figure 4: We agree, and following the suggestion provided by RC#1, we will include spaghetti plots in the revised manuscript.
- Spatial structure and displacement error in Figure 3: We agree with this observation. While MB+QM improves the rainfall intensity distribution and overall spatial resemblance, it does not correct spatial displacement errors. As a result, even when the rainband structure appears more realistic (Fig. 3), positional offsets persist, leading to a southwest–northeast RMSE corridor (Fig. 7). This reflects the well-known limitation of intensity-based post-processing methods, which cannot resolve timing and location mismatches.
- "Senjō-kōsuitai" and "linear rainbands" terms: We will use the term "linear rainbands" for clarity and consistency in the revised manuscript.
- "Δ probability" in Fig. 6: Δ probability is a dimensionless quantity representing the difference in exceedance probability between ensembles, ranging over [-1, 1]. We will clarify this in the figure caption to avoid confusion.
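For concreteness, a Δ-probability field of this kind can be computed as follows; the ensemble arrays and the threshold value are synthetic stand-ins, not the paper's data:

```python
import numpy as np

def exceedance_prob(ens, thr):
    """Fraction of members exceeding thr at each grid point; ens has shape (member, y, x)."""
    return (ens > thr).mean(axis=0)

rng = np.random.default_rng(1)
raw = rng.gamma(2.0, 2.0, (50, 20, 20))   # raw ensemble rainfall (synthetic)
corrected = raw * 1.2                     # stand-in for a bias-corrected ensemble
delta = exceedance_prob(corrected, 5.0) - exceedance_prob(raw, 5.0)
# delta is dimensionless and bounded in [-1, 1] by construction
```

Because Δ probability is a difference of two fractions of the same member count, it needs no unit, only the [-1, 1] range noted above.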
- References (Hiraga et al., 2025, etc.): Thank you for the careful comment. We will replace this reference with the peer-reviewed, final version of the paper.
- Topographic shades in Figure 1: We thank the reviewer for this suggestion. While topography is not explicitly analyzed in this study, we agree that it can provide useful context. We will consider adding topographic shading to improve the figure's interpretability.
- Repeated panel in Figure 3: We thank the reviewer for this suggestion. However, the JMA-R panels are not repeated redundantly; each panel corresponds to a different accumulation period (24 h (S2), 36 h (S3), 48 h (S4), and 72 h (S1)), consistent with the respective ensemble configurations. As accumulated rainfall fields depend on integration length, the reference maps necessarily differ between panels.
Therefore, separate JMA-R panels are required to ensure consistent and fair comparison with each ensemble configuration. We will clarify this point in the figure caption to avoid confusion.
- Shading or error bars in Figures 8-9: We will include spaghetti plots to illustrate ensemble spread. As these provide a detailed representation of member variability, additional error bars are not necessary.
- "COUPLE scheme" on page 3: Thank you for the comment; we will add the scheme's full name, a brief description, and a reference in the revised manuscript.
- Symbol notation in formula (2): We thank the reviewer for this comment. The formulation of Eqs. (1)–(2) is mathematically correct; however, we acknowledge that the notation may not have been sufficiently explicit. In the revised manuscript, we will clarify that the index i represents the spatiotemporal samples used for bias estimation (i.e., time steps and/or grid points), and that n denotes the total number of samples. We will also refine the description to avoid ambiguity in the definition of the mean bias.
- Rainfall thresholds for BS and ETS: We agree that higher thresholds are important for evaluating extreme rainfall. We will extend the analysis to include additional thresholds (e.g., 1, 10, and 20 mm h⁻¹) and discuss the robustness of post-processing performance across intensity ranges.
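The extended threshold analysis described above amounts to evaluating the same contingency-table score at several exceedance levels. A minimal sketch of the standard ETS definition (synthetic values; not code from the paper):

```python
import numpy as np

def ets(fcst, obs, thr):
    """Equitable Threat Score for exceedance of thr (1 = perfect, <= 0 = no skill)."""
    f, o = fcst > thr, obs > thr
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    hits_rand = (hits + misses) * (hits + false_alarms) / f.size  # chance hits
    denom = hits + misses + false_alarms - hits_rand
    return float((hits - hits_rand) / denom) if denom > 0 else 0.0

obs = np.array([0.0, 2.0, 6.0, 12.0, 25.0, 1.0])   # synthetic hourly rainfall
fcst = np.array([0.5, 1.5, 7.0, 10.0, 18.0, 0.2])
for thr in (1.0, 5.0, 10.0, 20.0):
    print(thr, round(ets(fcst, obs, thr), 3))
```

Sweeping `thr` over 1, 10, and 20 mm h⁻¹ as proposed shows directly how skill degrades as the event set shrinks toward rare extremes.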
- Quantitative metrics for the rainband tilt: We agree and will supplement the analysis with quantitative indicators of spatial misalignment, including centroid displacement and spatial correlation shift, to better characterize structural differences among ensemble configurations.
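One way the centroid-displacement indicator mentioned above could be implemented (a sketch with a synthetic shifted rain object; the 0.8 km spacing matches the finer ensembles, but the function is not the paper's code):

```python
import numpy as np

def rain_centroid(field, dx_km=0.8):
    """Rainfall-weighted centroid of a 2-D field, in km (dx_km = grid spacing)."""
    w = np.maximum(field, 0.0)
    yy, xx = np.indices(field.shape)
    return np.array([(yy * w).sum(), (xx * w).sum()]) / w.sum() * dx_km

def centroid_displacement(fcst, obs, dx_km=0.8):
    """Distance in km between forecast and observed rainfall centroids."""
    return float(np.linalg.norm(rain_centroid(fcst, dx_km) - rain_centroid(obs, dx_km)))

obs = np.zeros((100, 100)); obs[40:50, 40:50] = 10.0    # observed rain object
fcst = np.zeros((100, 100)); fcst[40:50, 50:60] = 10.0  # same object shifted 10 cells east
# displacement = 10 cells x 0.8 km = 8 km
```

A single scalar like this separates positional error from the amplitude error that MB+QM addresses, which is exactly the distinction the reviewer asks to quantify.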
- Training set and representativeness: We thank the reviewer for this important point. The training period is constrained by the original experimental design, which focuses on the early development stage of the event for weather-modification purposes. Therefore, it does not fully cover all lifecycle stages (e.g., peak and decay).
We acknowledge that this may introduce bias in the correction and will clarify this limitation in the revised manuscript. This study is intended as an event-based evaluation under constrained data conditions rather than a fully generalizable framework.
- Random perturbations (3% of qv): We thank the reviewer for this important comment. The ±3% perturbation applied to specific humidity is intended as a simplified representation of initial-condition uncertainty in convective-scale environments, consistent with commonly used random perturbation approaches in ensemble forecasting studies. The primary objective of this study is not to optimize the ensemble generation strategy, but to evaluate the effectiveness of post-processing under a controlled and computationally feasible ensemble configuration. In this context, the perturbation is designed to introduce sufficient spread to represent uncertainty, rather than to fully characterize its physical spectrum.
We agree that the sensitivity of forecast skill to perturbation magnitude is an important topic; however, a systematic sensitivity analysis would require additional ensemble simulations and is therefore beyond the scope of the present study. This limitation will be clearly stated in the revised manuscript, and future work will explore more systematic perturbation strategies.
- Deeper analysis of the physical process: We agree and will expand the discussion by incorporating theoretical insights from convection-permitting ensemble modeling. In particular, we will clarify that higher resolution increases sensitivity to initial-condition uncertainty, requiring larger ensemble sizes to maintain structural stability, whereas smaller ensembles tend to exhibit displacement and structural errors. Relevant references will be added to support this interpretation.
- The rationale of evaluation period: We thank the reviewer for this comment. The selected evaluation period corresponds to a common overlapping time window across all ensemble configurations and is independent of the training dataset. This allows a consistent comparison of post-processing performance across ensembles and demonstrates its applicability beyond the calibration period.
We agree that this period represents only part of the event and will clarify the rationale for its selection and the associated limitations in the revised manuscript.
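The random-perturbation setup discussed in the replies above (±3% multiplicative noise on specific humidity below 600 m) can be sketched as follows; the level spacing and humidity profile are illustrative assumptions, not the SCALE-RM implementation:

```python
import numpy as np

def perturb_qv(qv, z, rng, frac=0.03, z_top=600.0):
    """Apply +/-frac multiplicative random noise to specific humidity below z_top metres."""
    noise = rng.uniform(-frac, frac, size=qv.shape)
    mask = z < z_top                      # restrict the perturbation to the low-level layer
    return qv * (1.0 + noise * mask)

rng = np.random.default_rng(42)
z = np.linspace(0.0, 3000.0, 31)          # model levels in metres (illustrative)
qv = np.full_like(z, 0.012)               # uniform 12 g/kg humidity column (illustrative)
qv_pert = perturb_qv(qv, z, rng)
# levels at or above 600 m are untouched; lower levels move by at most 3%
```

Drawing independent noise per member from a seeded generator keeps the ensemble reproducible while concentrating the spread where convective initiation is most sensitive.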
Citation: https://doi.org/10.5194/egusphere-2026-1052-AC2
Comments from Anonymous Referee #1:
1. The study focuses on the early rainfall phase of 11-12 August rather than the extreme peak phase of 14 August. Although this is stated to follow from the original experimental design, the representativeness of this period for evaluating the post-processing method is not fully demonstrated. It is recommended to justify the choice of this period in the introduction or methods section, or to discuss the method's potential applicability and limitations during more extreme periods.
2. The ensemble perturbation method is relatively simple. It is recommended to discuss its limitations and to suggest more systematic perturbation methods for future work. In addition, the training and testing periods are both short subsets of the study period; the limitations of the current evaluation should be acknowledged in the conclusions or outlook, together with the need for validation over longer periods and multiple events.
3. Position error is not specifically addressed. It is suggested to discuss displacement correction, fuzzy (neighborhood) verification, and related methods, or to adopt object-oriented evaluation metrics in future work. A sensitivity analysis of the training sample size is also missing; it is recommended to compare post-processing performance under different training durations or sample sizes.
4. The spatial matching of AMeDAS station data is not explained. It is recommended to describe the point-to-area matching method in Section 2.2, or to justify the representativeness of the selected stations for the region. It is also recommended to cite recent research papers in the introduction: "Nonlinear evolution characteristics and seepage mechanical model of fluids in broken rock mass based on the bifurcation theory"; "Numerical research on disastrous mechanism of seepage instability of karst collapse column considering variable mass effect"; "Probabilistic stability analyses of two-layer undrained slopes".
5. The stability of the post-processing methods under different thresholds is not discussed. A threshold-dependence analysis should be added, with recommendations for post-processing strategies at different thresholds. The implications for weather-modification experiments could also be deepened: Section 4.3 should further discuss how to balance bias correction against signal retention, or propose a concrete implementation path for the "dual-ensemble" strategy.
6. The effect of post-processing on ensemble spread is not evaluated. It is suggested to add spread diagnostics such as rank histograms and the continuous ranked probability score (CRPS). The paper also lacks comparison with other post-processing methods; the reasons for choosing MB and QM should be explained in the discussion, and comparative analysis with other methods suggested for future work.