This work is distributed under the Creative Commons Attribution 4.0 License.
Rapid Flood Mapping from Aerial Imagery Using Fine-Tuned SAM and ResNet-Backboned U-Net
Abstract. Flooding is a major natural hazard that requires a rapid response to minimize the loss of life and property and to facilitate damage assessment. Aerial imagery, especially images from unmanned aerial vehicles (UAVs) and helicopters, plays a crucial role in identifying areas affected by flooding. Therefore, developing an efficient model for rapid flood mapping is essential. In this study, we present two segmentation approaches for mapping flood-affected areas: (1) a fine-tuned Segment Anything Model (SAM), comparing the performance of point prompts versus bounding box (Bbox) prompts, and (2) a U-Net model with ResNet-50 and ResNet-101 as pre-trained backbones. Our results showed that the fine-tuned SAM performed best in segmenting floods with point prompts (Accuracy: 0.96, IoU: 0.90), while Bbox prompts led to a significant drop (Accuracy: 0.82, IoU: 0.67). This is because flooded areas often extend from one edge of the image to the other, making Bbox prompts less effective at capturing boundary details. For the U-Net model, the ResNet-50 backbone yielded an accuracy of 0.87 and an IoU of 0.72. Performance improved slightly with the ResNet-101 backbone, achieving an accuracy of 0.88 and an IoU of 0.74. This improvement can be attributed to the deeper architecture of ResNet-101, which extracts more complex and detailed features and thereby improves U-Net's ability to segment flood-affected areas accurately. The results of this study will help emergency response teams identify flood-affected areas more quickly and accurately. In addition, these models could serve as valuable tools for insurance companies when assessing damage. Moreover, the segmented flood images generated by these models can serve as training data for other machine learning models, creating a pipeline for more advanced flood analysis and prediction systems.
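For orientation, the sketch below illustrates the two approaches at a very high level: prompting SAM with a point versus a bounding box, and building a U-Net with a ResNet encoder. It is a minimal sketch, assuming the official segment_anything package (with a locally downloaded ViT-B checkpoint) and the segmentation_models_pytorch library; the checkpoint path, prompt coordinates, and training settings are placeholders, not the configuration used in the study.

```python
# Minimal sketch (not the authors' exact pipeline): point vs. bounding-box
# prompting with SAM, and a U-Net with a ResNet encoder. Assumes the official
# `segment_anything` package and `segmentation_models_pytorch`; the checkpoint
# path, image, and prompt coordinates below are placeholders.
import numpy as np
import segmentation_models_pytorch as smp
from segment_anything import SamPredictor, sam_model_registry

# --- SAM with point and box prompts ------------------------------------------
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # placeholder path
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder RGB aerial image
predictor.set_image(image)

# Point prompt: a single foreground click inside the flooded area.
point_masks, point_scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),  # (x, y) of the click
    point_labels=np.array([1]),           # 1 = foreground
    multimask_output=False,
)

# Box prompt: a bounding box around the flooded area (x0, y0, x1, y1).
box_masks, box_scores, _ = predictor.predict(
    box=np.array([0, 0, 511, 511]),
    multimask_output=False,
)

# --- U-Net with a ResNet backbone ---------------------------------------------
unet = smp.Unet(
    encoder_name="resnet50",     # or "resnet101" for the deeper backbone
    encoder_weights="imagenet",  # pre-trained encoder
    in_channels=3,
    classes=1,                   # binary flood / non-flood mask
)
```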
Status: open (until 07 Oct 2025)
- RC1: 'Comment on egusphere-2025-3146', Saham Mirzaei, 03 Sep 2025
- AC1: 'Reply on RC1', Hadi Shokati, 09 Sep 2025
Dear Editor and Reviewer,
We would like to express our sincere gratitude for your time and thoughtful comments on our manuscript, "Rapid Flood Mapping from Aerial Imagery Using Fine-Tuned SAM and ResNet-Backboned U-Net." Your insightful feedback has been extremely valuable in helping us improve the clarity, strength, and overall quality of our work.
We have carefully considered all suggestions and addressed them point-by-point in the revised manuscript. For your reference, we have highlighted our responses to your comments in green. We believe these revisions have significantly strengthened the manuscript and we are confident that it is now ready for further consideration.
Thank you again for your valuable contribution to this process. We look forward to your feedback on the revised manuscript.
- RC2: 'Comment on egusphere-2025-3146', Saham Mirzaei, 09 Sep 2025
Upon re-reading the manuscript, I noticed that in lines 200–203 you mention the use of various data augmentation techniques. Could you please clarify the probability settings assigned to each augmentation method?
In lines 201–203, it is not clear whether the augmentation was applied exclusively to the training dataset. Providing this clarification would enhance the transparency of the methodology.
Still in lines 201–203, it would be highly valuable to explicitly include details regarding the number of images before and after data augmentation, as well as their distribution across the training, validation, and test sets. Such information is critical to ensure reproducibility.
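As an illustration of the kind of detail being requested, a common convention is to attach a probability to each transform and to apply the augmentation pipeline to the training split only; the sketch below uses the albumentations library with placeholder probabilities and is not the manuscript's actual configuration.

```python
# Illustrative only: per-transform probabilities and training-only augmentation.
# The probabilities below are placeholders, not the manuscript's settings.
import albumentations as A

train_augment = A.Compose([
    A.HorizontalFlip(p=0.5),             # applied to ~50% of training samples
    A.Rotate(limit=30, p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

def load_sample(image, mask, split):
    # Augmentation is applied to the training split only;
    # validation and test images are left untouched.
    if split == "train":
        out = train_augment(image=image, mask=mask)
        return out["image"], out["mask"]
    return image, mask
```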
In lines 209–219, you mention the use of both Dice Loss and Cross-Entropy Loss. Could you please specify how these two loss functions were combined? For example, were they summed, averaged, or weighted differently?
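For reference, the options listed here (summing, averaging, or weighting differently) can all be written as a weighted sum; the sketch below shows a generic weighted combination of Dice loss and binary cross-entropy in PyTorch, with placeholder weights rather than the manuscript's actual setting.

```python
# Generic weighted combination of Dice loss and cross-entropy (binary case).
# The weights are placeholders; summing corresponds to w_dice = w_ce = 1.0,
# averaging to w_dice = w_ce = 0.5.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def dice_loss(logits, targets, eps=1e-6):
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + targets.sum() + eps)

def combined_loss(logits, targets, w_dice=0.5, w_ce=0.5):
    return w_dice * dice_loss(logits, targets) + w_ce * bce(logits, targets)
```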
I appreciate that the code is publicly available on GitHub. However, I could not locate the corresponding datasets in the repository. Based on the README file, it seems that the authors expect users to obtain the data from an external source. While this is acceptable provided that the source remains reliably available, hosting a copy of the datasets within your GitHub repository would be preferable for long-term accessibility.
Citation: https://doi.org/10.5194/egusphere-2025-3146-RC2
- AC2: 'Reply on RC2', Hadi Shokati, 11 Sep 2025
Dear Editor and Reviewer,
We would like to express our sincere gratitude again for your time and thoughtful comments on our manuscript, "Rapid Flood Mapping from Aerial Imagery Using Fine-Tuned SAM and ResNet-Backboned U-Net." Your insightful feedback has been extremely valuable in helping us improve the clarity, strength, and overall quality of our work.
We have carefully considered all suggestions and addressed them point-by-point in the revised manuscript. For your reference, we have highlighted our responses to your comments in green. We believe these revisions have significantly strengthened the manuscript and we are confident that it is now ready for further consideration.
Thank you again for your valuable contribution to this process. We look forward to your feedback on the revised manuscript.
- CC1: 'Comment on egusphere-2025-3146', Armin Moghimi, 17 Sep 2025
Dear authors,
I read the paper, and I see that the method used by the authors, as well as the code, is derived from the following paper and GitHub repository:
ArminMoghimi/Fine-tune-the-Segment-Anything-Model-SAM-: A. Moghimi, M. Welzel, T. Celik, and T. Schlurmann, "A Comparative Performance Analysis of Popular Deep Learning Models and Segment Anything Model (SAM) for River Water Segmentation in Close-Range Remote Sensing Imagery,"
https://github.com/ArminMoghimi/Fine-tune-the-Segment-Anything-Model-SAM-
https://doi.org/10.1109/ACCESS.2024.3385425
However, I could not find this reference in the reference list, where it should be cited.
Citation: https://doi.org/10.5194/egusphere-2025-3146-CC1
The research is well designed and written. It contributes to the development of a strong and user-friendly AI tool that can provide quick and effective support in flood-affected areas where urgent assistance is needed, without requiring harmonized or standardized procedures for image collection from different sources. As a limitation of the research, I believe it would be valuable to include the geolocation of the final flood map to facilitate relief efforts. Furthermore, the reasons behind the superiority of SAM-Points should be discussed; compared to the other methods, this approach appears to be more effective in distinguishing bare soil from flooded areas.