This work is distributed under the Creative Commons Attribution 4.0 License.
Severe hail detection with C-band dual-polarisation radars using convolutional neural networks
Abstract. Radar has consistently proven to be the most reliable source of information for the real-time remote detection of hail within storms. Existing hail detection techniques have a limited ability to distinguish storms that produce severe hail from those that do not, which often results in a prohibitive number of false alarms that hamper real-time decision-making. This study uses convolutional neural network (CNN) models trained on dual-polarisation radar data to detect severe hail occurrence on the ground. The morphology of the storms is studied by leveraging the capabilities of a CNN. A database of 60 km × 60 km images containing 19 different radar-derived features is built over severe hail reports (hail above 2 cm) and over rain or small hail reports (rain or hail below 2 cm), the latter created for this purpose with the help of a cell-identification algorithm. After a tuning phase on the CNN architecture and its input size, the CNN is trained to output one probability of severe hail on the ground per 30 km × 30 km image. A test set of 1396 images spanning 2018 to 2023 demonstrates that the CNN method outperforms state-of-the-art methods according to various metrics. A feature importance study indicates that existing hail proxies used as input features are beneficial to the CNN, particularly the maximum estimated size of hail (MESH). The study also demonstrates that many of the existing radar hail proxies can be adjusted, using a threshold value and a threshold area, to achieve performance similar to that of the CNN for severe hail detection. Finally, the output of ten fitted CNN models run in inference mode on a hail event is shown.
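As a rough illustration of the setup described in the abstract, the sketch below builds a small image classifier that maps a 19-channel, 30 km × 30 km radar crop (assumed here to be 30 × 30 pixels at 1 km resolution) to a single probability of severe hail. The `HailCNN` name and the layer sizes are illustrative assumptions, not the tuned architecture of the paper.

```python
# Minimal sketch (PyTorch) of a CNN mapping a multi-channel radar image to a
# probability of severe hail on the ground. The 19 input channels and the
# 30 km x 30 km crop (assumed to be 30 x 30 pixels at 1 km resolution) follow
# the abstract; the layers below are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class HailCNN(nn.Module):
    def __init__(self, n_channels: int = 19):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 30x30 -> 15x15
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 15x15 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # one logit per image
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 19, 30, 30); output: one severe-hail probability per image
        return torch.sigmoid(self.classifier(self.features(x)))

model = HailCNN()
dummy = torch.randn(8, 19, 30, 30)   # a batch of 8 hypothetical radar crops
print(model(dummy).shape)            # torch.Size([8, 1])
```

In practice, such a model would typically be fitted with a binary cross-entropy loss against the severe hail / rain-or-small-hail labels described in the abstract.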
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
- RC1: 'Comment on egusphere-2024-1336', Anonymous Referee #1, 21 Jun 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-1336/egusphere-2024-1336-RC1-supplement.pdf
- AC1: 'Reply on RC1', Vincent Forcadell, 31 Jul 2024
Publisher’s note: this comment is a copy of AC4 and its content was therefore removed.
Citation: https://doi.org/10.5194/egusphere-2024-1336-AC1
- AC4: 'Reply on RC1', Vincent Forcadell, 31 Jul 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-1336/egusphere-2024-1336-AC4-supplement.pdf
- RC2: 'Comment on egusphere-2024-1336', Anonymous Referee #2, 26 Jun 2024
Review comments for “Severe hail detection with C-band dual-polarisation radars using convolutional neural networks” by Forcadell et al.
This work constructed a convolutional neural network (CNN)-based hail occurrence model trained on dual-polarisation radar data over France, and the training “truth” combines three ground datasets, including a crowd-sourced one, with careful screening and quality control. In addition to the radar-measured variables, some traditional hail prediction proxies are also included as input features. The target is to predict either severe hail (flag = 1) or rain/small hail (flag = 0). The machine learning (ML) model’s performance is then comprehensively compared against previously used hail detection proxies, and the feature importance for the model is also thoroughly evaluated. The CNN model outperforms all 6 traditional proxies in all evaluated metrics.
This work is carefully designed and thoroughly conducted. The quality control of the training “truth” dataset involves a great amount of work, which is highly appreciated (I could not find an open science statement, but I think it would be very valuable if the training dataset could be published somewhere). The ML model architecture selection and fine-tuning are carefully executed. The results are concrete.
However, I do feel that the design of the work makes a limited contribution to advancing science, mainly because of the concern that the input features include the 6 proxies, and one of them dominates the decision process according to the feature importance ranking (and that is partly because some other highly correlated proxies are removed before feature ranking; otherwise, they would all rank high). So, scientifically speaking, the new ML model is a “smart, improved version 2.0” of the previous proxies. Since processing the input features does not appear easy (e.g., interpolation using two adjacent radars to reconstruct the 3D fields, and then interpolation to 2D images, etc.), I doubt the applicability of the new ML model to operational use, given that the traditional proxies seem much easier to calculate and the performance of the best two traditional indices is only slightly worse (Fig. 13 and Table 8). In the revised version, I strongly recommend that the authors add a paragraph to the discussion or summary with their thoughts on the scientific merit and future applicability of their work.
Another minor concern is the length. The paper is a bit too lengthy right now, which makes it easy for readers to miss the highlights of your work. I would suggest moving the definitions of common ML terminology (ROC, AUC, confusion matrix) to the appendix, as well as the detailed procedures of the QC of your training “truth”.
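For readers unfamiliar with these terms, the quantities named here (confusion matrix, ROC curve, AUC) and common categorical scores can be computed from binary labels and predicted probabilities in a few lines. The sketch below uses scikit-learn on invented toy arrays and is not tied to the paper's data or code.

```python
# Illustrative computation of the evaluation quantities mentioned above
# (confusion matrix, ROC curve, AUC) for a binary severe-hail classifier.
# The arrays are toy values, not data from the paper.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score

y_true = np.array([1, 0, 0, 1, 1, 0, 0, 1])                   # 1 = severe hail
y_prob = np.array([0.9, 0.2, 0.4, 0.7, 0.6, 0.1, 0.55, 0.8])  # model output

y_pred = (y_prob >= 0.5).astype(int)                  # fixed decision threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

pod = tp / (tp + fn)          # probability of detection (hit rate)
far = fp / (tp + fp)          # false alarm ratio
csi = tp / (tp + fp + fn)     # critical success index

fpr, tpr, thresholds = roc_curve(y_true, y_prob)      # ROC curve points
auc = roc_auc_score(y_true, y_prob)                   # area under the ROC curve

print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}  AUC={auc:.2f}")
```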
Minor caveats:
- Reconstructed 2D images from both the closest radar and the second-closest radar are used, but it is never discussed (or I might have missed it) what the differences are for the prediction. Would it be more practical to use only the closest image? Or is the result really sensitive to the distance between the radar location and the event location?
- Starting from the comparison of the ML results to various previous hail detection proxy variables (Section 4.2), the ResNet is dropped from the discussion. Why? Is it because training a ResNet takes significantly longer than training a ConvNet?
- How was “SHI” defined? It was never clear to me. If it has an explicit analytical form involving radar-measured quantities, then it is a “smart use” of radar measurements, and you could directly use “SHI” as a training input feature (see the sketch of the conventional definition below).
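For context, SHI usually denotes the Severe Hail Index of Witt et al. (1998): a vertical integral of reflectivity-derived hail kinetic energy flux, weighted by a temperature-based height profile, from which MESH (mentioned in the abstract) is derived. The sketch below implements that conventional formulation for a single reflectivity column; the function names and the toy profile are illustrative, and the authors' operational implementation may differ in detail.

```python
# Conventional Witt et al. (1998) formulation of the Severe Hail Index (SHI)
# for a single vertical reflectivity profile, with MESH derived from SHI.
# This is the standard definition, not necessarily the authors' implementation.
import numpy as np

def shi(z_dbz, heights_m, h0_m, hm20_m, z_low=40.0, z_up=50.0):
    """SHI (J m-1 s-1) from reflectivity z_dbz (dBZ) at heights_m (m),
    given the 0 degC level h0_m and the -20 degC level hm20_m."""
    # Hail kinetic energy flux (J m-2 s-1), weighted between z_low and z_up dBZ
    w_z = np.clip((z_dbz - z_low) / (z_up - z_low), 0.0, 1.0)
    e_dot = 5.0e-6 * 10.0 ** (0.084 * z_dbz) * w_z
    # Temperature-based weighting between the 0 degC and -20 degC levels
    w_t = np.clip((heights_m - h0_m) / (hm20_m - h0_m), 0.0, 1.0)
    # Vertical integration above the melting level only
    integrand = np.where(heights_m >= h0_m, w_t * e_dot, 0.0)
    return 0.1 * np.trapz(integrand, heights_m)

def mesh_mm(shi_value):
    """Maximum estimated size of hail (mm) from SHI (Witt et al., 1998)."""
    return 2.54 * shi_value ** 0.5

# Toy profile: reflectivity column sampled every 500 m up to 12 km
heights = np.arange(0.0, 12000.0, 500.0)
z_profile = np.where(heights < 9000.0, 55.0, 30.0)   # hypothetical values
s = shi(z_profile, heights, h0_m=3500.0, hm20_m=6500.0)
print(f"SHI = {s:.1f} J m-1 s-1, MESH = {mesh_mm(s):.1f} mm")
```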
Citation: https://doi.org/10.5194/egusphere-2024-1336-RC2
- AC3: 'Reply on RC2', Vincent Forcadell, 31 Jul 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-1336/egusphere-2024-1336-AC3-supplement.pdf
- RC3: 'Comment on egusphere-2024-1336', Anonymous Referee #3, 03 Jul 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-1336/egusphere-2024-1336-RC3-supplement.pdf
- AC2: 'Reply on RC3', Vincent Forcadell, 31 Jul 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-1336/egusphere-2024-1336-AC2-supplement.pdf
Authors: Vincent Forcadell, Clotilde Augros, Olivier Caumont, Kévin Dedieu, Maxandre Ouradou, Cloe David, Jordi Figueras i Ventura, Olivier Laurantin, and Hassan Al-Sakka