This work is distributed under the Creative Commons Attribution 4.0 License.
Integrating multi-hazard susceptibility and building exposure: A case study for Quang Nam province, Vietnam
Abstract. Natural hazards have serious impacts worldwide on society, the economy, and the environment. In Vietnam, natural hazards have caused significant loss of life over the years, as well as severe damage to houses, crops, and transportation. This paper presents a new model for multi-hazard (flood and wildfire) exposure estimation using machine learning models, Google Earth Engine, and spatial analysis tools, with Quang Nam province, Vietnam, as a case study. After establishing the context and collecting data on climate hazards and their impacts, a geospatial database was built for multi-hazard modelling, including an inventory of climate-related hazards (floods and wildfires), topography, geology, hydrology, climate features (temperature, wetness, wind), land use, and building data for exposure assessment. Hazard susceptibility and exposure matrices are presented to demonstrate a hazard-profiling approach for multiple hazards. The results are illustrated explicitly for flood and wildfire hazards and the exposure of buildings. Susceptibility models using the random forest approach achieve AUC values of 0.882 and 0.884 for floods and wildfires, respectively. The flood and wildfire hazards are combined within a semi-quantitative matrix to assess building exposure to different combinations of hazards. Digital multi-hazard risk and exposure maps of floods and wildfires aid the identification of areas prone to climate-related hazards and of their potential impacts. This approach can be used to inform communities and regulatory authorities as they develop and implement long-term adaptation solutions.
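The semi-quantitative combination of the two susceptibility layers described in the abstract can be sketched in a few lines. This is a hedged illustration only: the probability values, class breaks, matrix entries, and building locations below are placeholders and are not taken from the paper.

```python
# Hedged illustration of a matrix-based multi-hazard combination; all values
# (probabilities, class breaks, matrix entries, building locations) are
# placeholders, not the thresholds or weights used in the study.
import numpy as np

# Per-pixel susceptibility probabilities from the two single-hazard models
flood_prob = np.array([[0.10, 0.55], [0.80, 0.30]])
fire_prob = np.array([[0.70, 0.20], [0.65, 0.15]])

def classify(prob, breaks=(0.25, 0.50, 0.75)):
    """Map a probability surface to classes 0..3 (low .. very high)."""
    return np.digitize(prob, breaks)

flood_cls = classify(flood_prob)
fire_cls = classify(fire_prob)

# Semi-quantitative combination matrix: rows = flood class, cols = wildfire class;
# entries 0..3 denote the combined multi-hazard level.
combo_matrix = np.array([
    [0, 1, 1, 2],
    [1, 1, 2, 2],
    [1, 2, 2, 3],
    [2, 2, 3, 3],
])
multi_hazard_cls = combo_matrix[flood_cls, fire_cls]

# Exposure: look up the combined class at each (hypothetical) building centroid
# and count buildings per class.
building_cells = [(0, 0), (0, 1), (1, 0)]  # (row, col) raster indices of centroids
building_cls = [multi_hazard_cls[r, c] for r, c in building_cells]
exposure_counts = np.bincount(building_cls, minlength=4)
print(dict(zip(["low", "moderate", "high", "very high"], exposure_counts)))
```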
Status: closed
RC1: 'Comment on egusphere-2024-57', Julius Schlumberger, 18 Feb 2024
In the study “Integrating multi-hazard susceptibility and building exposure: A case study for Quang Nam province, Vietnam”, the authors use an established set of machine learning models to estimate the susceptibility of Quang Nam province, Vietnam, to floods and wildfires. By creating a comprehensive geospatial database including various historic flood and wildfire events, topographical, geological, hydrological, and climatic features, along with land use and building data, the authors have developed a robust basis for susceptibility mapping of floods and wildfires and offer interesting insights regarding exposure assessment. By combining susceptibility categories for wildfires and floods, the study offers a more nuanced perspective on what type of spatially co-occurring multi-hazard events could be expected in different areas of the study region.
The study presents relevant insights into susceptibility factors for floods and wildfires. However, there are several major aspects that require clarification:
- The authors use the term multi-hazard repeatedly in their study. They introduce the term in line 49 to 51 and indicate that “Multi-hazard susceptibility assessment provides insights into the spatial co-occurrence of multi-hazard” (line 53). Further specification in what type of multi-hazard interactions are relevant for the selected hazard pair of floods and wildfires is not provided. Yet, the study would significantly benefit from such a clarification. For instance, the study could benefit from discussing the dynamic interplay between flood probability in wet seasons and wildfire likelihood in dry seasons. Based on previous studies looking at wildfire – flood interactions, most emphasis has been put on the reduction of infiltration/storage capacity in natural systems that have been burnt (see e.g. Mueller et al. 2018). Similarly, it seems physically plausible that a flood might even reduce the risk of wildfires due to the large-scale wetting of vegetation. I am thus wondering whether the current set-up of this study would rather serve the purpose of multiple hazard susceptibility mapping as it neglects the hazard interaction dynamics that are of critical importance for multi-hazard events? This is particularly critical, since the authors want to make the step from susceptibility mapping towards exposure mapping. But due to (temporal) dynamics of multi-hazard events, an assumption of constant exposure might be worth discussing.
Mueller, M., Lima, R. E., Springer, A. E., & Schiefer, E. (2018). Using matching methods to estimate impacts of wildfire and post wildfire flooding on house prices. Water Resources Research, 54, 6189–6201. https://doi.org/10.1029/2017WR022195
- While the authors do a great job of explaining the significance of each of the single-hazard risks in the study area, the importance of the multi-hazard event remains unclear. It also remains unclear what type of floods the authors are referring to. In the introduction multiple flood types, including coastal floods, are mentioned, while the remainder of this study seems to focus on riverine floods.
- The authors provide a comprehensive list of study aims. However, I am not sure how targets 1 and 4 are covered in this study. The authors describe well the process of deriving the susceptibility maps, yet there's a lack of evidence considering spatially or temporally co-occurring events where hazard dynamics are relevant. Coming back to the previous comment, multi-hazard events are significant because of their interactions which lead to non-linearly altered impacts. It is also unclear how the outputs of the workflow are assessed to determine whether it is a useful assessment tool and provides decision-support (for what exactly?) regarding risk reduction and management.
- The method section could benefit from some streamlining. While the overall workflow presented in Figure 2 is clear and straightforward, aligning the subsection sequence (and subsection titles) with it would reduce redundancy and improve clarity. Specifically, sections such as 3.3.3 could be embedded in the overall flow (first check multicollinearity, then apply the ML model?). Another suggestion would be to present GEE as part of the methodology flowchart from the start, as the overarching framework in which the data are combined and the models are set up, tested, and used.
- The process of combining data for the flood and wildfire inventories could benefit from further elaboration (either in the main text or in a supplement). Regarding floods: a) it seems that the flood event point data and map data were combined. How was this done? How were the points prepared for combination with the maps for training? To determine the non-flood points, how were the flood markers considered? b) A specification of what made the three flood events historic would be interesting. Are those all the same flood type (e.g. fluvial floods)? For wildfires: a) Which period was considered to determine the 1,911 wildfire locations? Was it just the last year, or the past decade, or ...? b) When selecting non-wildfire locations, it was assumed that built environments cannot burn. However, when it comes to exposure to wildfires, we would assume that the built environment must be exposed to these fires somehow. It would be helpful if the authors could elaborate in this section (or in the section describing the built environment) on how the choice to treat built environments as non-wildfire locations influences the outcome of the machine learning training to spot fires that endanger the built environment. Also, it would be interesting to learn whether there are any multi-hazard events in the dataset of historic events (potentially to be added to the supplement?).
- The authors comprehensively describe which influencing factors are considered. For some of the influencing factors with temporal and/or spatial variability (e.g. precipitation, temperature) it is unclear how the collected data are further processed. For example, precipitation data are collected for a period of 10 years, while the considered flood markers cover the years 2007, 2009, 2013 and 2017-2021. The same applies to the temperature data, which were available for only 3 years. How were the influencing factors considered for wildfires that took place outside this period (assuming this is the case, since no specification is made with regard to the time horizon over which the wildfire locations were collected)? The same holds for the NDVI index, where it is not clear when the imagery was produced. Additionally, it would be valuable if the authors could reflect on the interpolation method used to inter/extrapolate between the gauge stations. Are the gauges distributed sufficiently well, and do the elevation/similarity characteristics allow for the application of the chosen method?
- While the authors do well in qualitatively describing the algorithms and underlying principles, key information regarding the hyperparameter tuning (and the final chosen values) and pruning techniques is not provided. As such it is difficult to reproduce the results. The results of the hyperparameter-tuning tests could be a useful addition as supplemental information. On a similar note, further quantitative information regarding the bootstrapping (e.g. the number of bootstrapped samples) could be relevant as well. Furthermore, it would be helpful for readers less familiar with ML methodology (like me) to link the parameters used in the equations to the inputs used in this specific study. For example, what are D, N, X and Y in the CART? (A hedged illustration of how these could map onto the susceptibility data is sketched after this list.)
- The discussion section could benefit from some critical reflection on the decisions made in this study set-up and a discussion of the limitations that come with them. This is particularly critical since the authors claim that this workflow could be extended by including different hazards and applied in different regions. For example, with reference to line 458: both in terms of inputs and in terms of how multi-hazard has been defined and conceptualized, the aspect of dynamics has been neglected, and the focus has mostly been on spatially co-occurring (without temporal memory) hazard events. Or with reference to line 489: from the results, multi-hazard seems to be a less prominent problem (both in terms of susceptibility and of exposure). So a planner could also read the results as mentioned by the authors: “flood risk is much more of a problem, we should focus on that!”. I would suggest specifying that with these exposure maps, further analysis into the impacts of multi-hazard events can be made that ultimately can inform multi-hazard risk assessment and thus effective DRM.
- Have the authors considered depositing input data maps, algorithms, and model code in FAIR-aligned repositories/archives, in alignment with the ambition of NHESS to support open data?
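For the point on notation and hyperparameters above, a minimal, hedged sketch of how D, N, X, and Y could map onto the susceptibility data, assuming a generic scikit-learn-style workflow (not the authors' actual Google Earth Engine implementation; all values are synthetic placeholders):

```python
# Reviewer-style illustration, assuming a generic scikit-learn workflow rather
# than the authors' Google Earth Engine implementation; all numbers are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)

# D = {(X_i, Y_i)}, i = 1..N: the labelled inventory of N points.
# X: influencing-factor values at each point (elevation, slope, rainfall, NDVI, ...)
# Y: binary label, 1 = flood (or wildfire) location, 0 = non-flood (non-wildfire) point.
N, n_factors = 1000, 12
X = rng.normal(size=(N, n_factors))
Y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=N) > 0).astype(int)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=0)

# Hyperparameters whose final values would need to be reported for reproducibility.
param_grid = {
    "n_estimators": [100, 300, 500],  # number of bootstrapped trees
    "max_depth": [None, 10, 20],      # depth limit (a pruning surrogate)
    "max_features": ["sqrt", 0.5],    # factors considered at each split
}
search = GridSearchCV(
    RandomForestClassifier(random_state=0), param_grid, scoring="roc_auc", cv=5
)
search.fit(X_train, Y_train)

auc = roc_auc_score(Y_test, search.predict_proba(X_test)[:, 1])
print(search.best_params_, f"test AUC = {auc:.3f}")

# Variable importance ranking of the influencing factors
importances = search.best_estimator_.feature_importances_
print(np.argsort(importances)[::-1][:5])  # indices of the five most important factors
```

Reporting the final grid, the selected hyperparameters, and the number of bootstrapped trees in a supplement would make such a setup reproducible.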
Minor comments
- The introduction generally includes all relevant elements. The overall story of the introduction could be refined, e.g. by avoiding duplication (compare lines 91 to 99 with lines 32 to 45). Similarly, lines 60 to 88 provide an in-depth introduction to ML and previous practice. At the same time, the authors mention several pieces of information which are quite interesting but seem not to be relevant for this study (e.g. lines 61 to 63; 63 to 65; 68 to 70). I would also suggest trying to integrate lines 60 to 77 with the current practice described in lines 78 to 88.
- The methodological flow is described nicely. However, there seems to be a lot of overlap between lines 135 to 139 and lines 139 to 147. Streamlining the text would help the reader.
- Figure 2: The flowchart is very nice. A couple of questions:
- What is the importance of the different colors used in this figure? I tried to understand why certain boxes were colored differently (e.g. flood influencing factors vs. floods, the same colors for e.g. ML vs. testing). If there is a reason for the specific colors, I would suggest making it clearer (e.g. explaining it in the figure caption) or otherwise reducing the number of colors used.
- I was expecting that the susceptibility maps would be built after the validation exercise. The flow suggests that they were created directly from the training dataset?
- Line 174 to 175: I don’t understand this sentence. Is that the method to determine whether a wildfire has occurred?
- Line 176 to 177: This sentence seems unclear to me. What filter has been applied to filter what?
- Line 180: It is not clear whether areas larger than 2 ha were assumed to be human-caused.
- Figure 4: I would suggest either adding a bit more text to explain the different maps as part of the influencing factors or placing Figure 4 in the appendix. In the appendix, the individual plots could also be resized so that the legends are more readable.
- Line 188: How was this set of influencing factors determined? For flooding, proximity to coast could also be a determinant of (coastal) flooding?
- Line 301 to 302: What do these choices of confidence-interval filtering mean? What types of buildings are more likely to be disregarded with the chosen confidence intervals?
- Line 354 to 366: This section seems almost identical to the workflow presented and discussed alongside Figure 2. I would suggest streamlining the method section by removing Section 3.2.2 and adding the relevant information to previous sections. For example, the fact that CART and RF work cell-based is quite relevant given that the flood and wildfire inventories are point information.
- Line 386: Can the authors explain how the importance sampling can inform which factors have the highest impact on multi-hazard formation? The algorithm used is applied to the single hazards (either floods or wildfires) but not to the multi-hazard?
- Figure 5: Aligning terminology (either testing or validation dataset) would help readability. Also, what does Se stand for?
- Line 453 to 455: Can the authors clarify what they mean when they claim that floods and wildfires have ‘similar spatial extent’ and frequency?
- Line 483 to 485: How is this finding affected by the choice to define built-up areas as non-wildfire areas when creating the training dataset? It seems that seeing fewer fires in built-up areas could also be influenced by the fact that the ML algorithms were taught that wildfires just don’t occur in more densely populated areas?
- Line 486ff: How do the authors derive the claim that the chosen method works well with recurring hazard events? The applied methods seemed not to account for the changes in the physical system induced by either floods or wildfires.
Citation: https://doi.org/10.5194/egusphere-2024-57-RC1
AC1: 'Reply on RC1', Chinh Luu, 03 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-57/egusphere-2024-57-AC1-supplement.pdf
RC2: 'Comment on egusphere-2024-57', Neiler Medina, 01 Apr 2024
- While the paper is generally well-written, there could be improvements in organizing the content to enhance readability and flow, particularly in presenting the methodology and results sections.
- It would be beneficial to include a more detailed discussion on the validation process and uncertainty analysis of the models to ensure the robustness and reliability of the findings.
- It is not entirely clear from the paper how the multiple hazards (floods and wildfires) are integrated into the multi-hazard exposure estimation. The methodology section should provide a more detailed explanation of the approach used to combine and assess the compound risk arising from different hazards. Clarifying this aspect would help readers better understand the synergistic effects of multiple hazards and how they contribute to overall risk.
- My previous comment is especially relevant given that the two hazards analyzed commonly occur in different hydrological seasons. Exploring the interactions, dependencies, and cumulative effects of floods and wildfires would provide valuable insights into the complex nature of multi-hazard scenarios. A comparative analysis of the combined risk versus the individual hazards would further highlight the significance of considering multiple hazards in risk assessment and management (a minimal sketch of such a tabulation is given after this list).
- Consideration of stakeholder engagement and feedback in the development and application of the multi-hazard exposure estimation model could enhance the relevance and applicability of the research to real-world scenarios.
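As a minimal illustration of the comparative tabulation suggested above, a sketch with synthetic building records; the class labels are placeholders, not results from the study.

```python
# Synthetic illustration of comparing single-hazard and combined exposure;
# the building records and class labels below are placeholders.
import pandas as pd

buildings = pd.DataFrame({
    "flood_cls": ["high", "low", "high", "moderate", "low"],
    "fire_cls": ["low", "high", "high", "moderate", "low"],
})

# Cross-tabulate the two single-hazard exposure classes per building
print(pd.crosstab(buildings["flood_cls"], buildings["fire_cls"], margins=True))

# Compare buildings exposed to elevated levels of both hazards vs. at least one
elevated = {"moderate", "high"}
both = (buildings["flood_cls"].isin(elevated) & buildings["fire_cls"].isin(elevated)).sum()
either = (buildings["flood_cls"].isin(elevated) | buildings["fire_cls"].isin(elevated)).sum()
print(f"both hazards: {both}, at least one hazard: {either}")
```

Such a table would make explicit how large the combined-exposure footprint is relative to the single-hazard footprints.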
Citation: https://doi.org/10.5194/egusphere-2024-57-RC2
AC2: 'Reply on RC2', Chinh Luu, 03 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-57/egusphere-2024-57-AC2-supplement.pdf
AC3: 'Reply on RC2', Chinh Luu, 03 May 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-57/egusphere-2024-57-AC3-supplement.pdf