This work is distributed under the Creative Commons Attribution 4.0 License.
Using Network Science to Evaluate Vulnerability of Landslides on Big Sur Coast, California, USA
Abstract. Landslide events, ranging from slips to catastrophic failures, pose significant challenges for prediction. This study employs a physically inspired framework to assess landslide vulnerability at a regional scale (Big Sur Coast, California). Our approach integrates techniques from the study of complex systems with multivariate statistical analysis to identify areas vulnerable to landslide events. We successfully apply a technique originally developed on the 2017 Mud Creek landslide and refine our statistical metrics to characterize landslide vulnerability within a larger geographical area. We compare our method against factors such as landslide location, slope, displacement, precipitation, and InSAR coherence using multivariate statistical analysis. Our network analysis, which provides a natural way to incorporate spatiotemporal dynamics, performs better as a monitoring technique than traditional methods. This approach has potential for real-time monitoring and evaluating landslide vulnerability across multiple sites.
Status: open (until 17 Apr 2025)
RC1: 'Comment on egusphere-2024-2979', Anonymous Referee #1, 21 Mar 2025
Thank you for your innovative work. I have some comments and questions that I listed below:
Line 2 (and throughout the paper): the term "vulnerability" might be misused in the context of risk science (e.g., no estimate is made of infrastructure fragility). I would argue that the terms to be used are "hazard" and "exposure", depending on the context.
Line 13: Palmer (2017) refers specifically to slow-moving landslides, and it is not clear at this point which specific process the current paper is addressing.
Line 22: please see my comment above - "mass movement of material (rock, earth, debris) down the hillslope, defined as a landslide event". It would be good to define specifically the process under study here. It is not only a semantic problem, as rockfall, landslide, and debris-flow processes are mechanically and spatially different.
Line 26-28: Can you check the https://blogbigsur (Drabinski and Bertola) reference? I could not access the reference mentioned. Is any other scientific publication available?
Line 29: More background information about network science would be good to have here, e.g., nodes, edges, etc. Why is a network framework interesting in this case?
Line 31: "two categories: stable and vulnerable. A region is considered vulnerable if it is likely to experience a landslide event" - see my comment above - "susceptible" might be more appropriate?
Figure 1: a minimap of the location in the state / country would be nice for a geomorphologic context
Line 64: it is unclear which "mass downslope movement" you are referring to here. The link between the slow-moving landslide and evidence of associated mass movement is missing.
Line 70: 1) 40x40 m2 should be 40 m2 or 40x40 m, I believe; 2) the temporal resolution is not explicit.
Line 90: How much of the total area does the mask represent? Can it impact the analysis?
Line 105: A representation of the graph and communities would be important for comprehension.
Line 106: A Poisson sampling with Delaunay triangulation is unlikely to follow a hydro-geomorphologic logic. Would there be an advantage in using slope units as the basis for the nodes/edges, for example (https://doi.org/10.1016/j.geomorph.2020.107124)?
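For context, a minimal sketch (not the authors' code) of the kind of graph construction being questioned here, assuming uniformly random points as a stand-in for the Poisson sampling and using scipy's Delaunay triangulation with networkx; it illustrates why the resulting edges follow geometric proximity rather than hydro-geomorphologic boundaries:
```python
# Illustrative sketch of a Delaunay-based network over sampled points.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Hypothetical stand-in for Poisson-sampled node locations in a sub-region
# (coordinates in metres, values arbitrary).
points = rng.uniform(0, 5000, size=(200, 2))

tri = Delaunay(points)

# Each Delaunay triangle contributes its three sides as graph edges.
G = nx.Graph()
G.add_nodes_from(range(len(points)))
for a, b, c in tri.simplices:
    G.add_edge(int(a), int(b))
    G.add_edge(int(b), int(c))
    G.add_edge(int(a), int(c))

# Edges reflect geometric proximity only; nothing constrains them to respect
# drainage divides or slope-unit boundaries.
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```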
Line 109: "we calculated the average velocity and slope of any two connected nodes and set that as the edge weight" - Is the community distribution sensitive to a metric other than the average (e.g., maximum, skewness, kurtosis)?
Line 113: More details are needed about the GenLouvain algorithm.
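To make the sensitivity question concrete, here is a rough sketch on synthetic data: re-weight edges with different aggregation functions and re-run a Louvain-type community detection. The single-layer Louvain in networkx is used only as a stand-in for the multilayer GenLouvain algorithm cited in the manuscript, and the way velocity and slope are combined is an assumption; scipy.stats.skew or kurtosis could be dropped into the same loop.
```python
# Sketch of an edge-weight sensitivity check (illustrative data only).
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

rng = np.random.default_rng(1)

# Hypothetical node attributes standing in for InSAR velocity and DEM slope.
G = nx.random_geometric_graph(150, 0.15, seed=1)
velocity = {n: rng.lognormal(0.0, 0.5) for n in G.nodes}
slope = {n: rng.uniform(5, 45) for n in G.nodes}

def reweight(G, agg):
    """Set each edge weight to agg() of the two endpoints' velocity and slope values."""
    H = G.copy()
    for u, v in H.edges:
        vals = [velocity[u], velocity[v], slope[u], slope[v]]
        H[u][v]["weight"] = agg(vals)
    return H

# Compare community structure for two aggregation choices.
for name, agg in [("mean", np.mean), ("max", np.max)]:
    H = reweight(G, agg)
    parts = louvain_communities(H, weight="weight", seed=0)
    print(f"{name:>4}: {len(parts)} communities, "
          f"largest sizes {sorted(len(p) for p in parts)[-3:]}")
```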
Line 125: In the case of a catastrophic failure (millions of m3), I would expect several of the communities involved to remain stable while bordering communities would see an increase in Z values - correct?
Are there any (albeit rare) scenarios where the Z-score could, for example, average out and be misleading?
Line 138: Can you be more explicit about what Z < 0 would actually mean?
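For reference, if the manuscript uses the standard standardization of the persistence series (an assumption; the exact definition is not quoted in this review), the score would be:
```latex
% P_t: community persistence of a sub-region at time t;
% \bar{P}, \sigma_P: its mean and standard deviation over the study window.
Z_t = \frac{P_t - \bar{P}}{\sigma_P}
% Under this definition, Z_t < 0 only indicates persistence below the
% sub-region's long-term mean, which is the ambiguity this comment points to.
```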
Line 140-145: It is not clear to me over which period the average community persistence is calculated; is it a rolling average?
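To illustrate the distinction this comment raises, a small synthetic example contrasting a single whole-record average with a rolling average of a persistence time series (pandas; the data and the 6-sample window are purely illustrative):
```python
# Whole-record mean vs. rolling mean of a synthetic persistence series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
dates = pd.date_range("2015-11-01", "2023-02-01", freq="MS")  # monthly steps
persistence = pd.Series(rng.uniform(0.6, 1.0, len(dates)), index=dates)

whole_record_mean = persistence.mean()               # one number for the period
rolling_mean = persistence.rolling(window=6).mean()  # 6-sample rolling average

print(f"whole-record mean: {whole_record_mean:.3f}")
print(rolling_mean.tail(3))
```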
Line 148: "Here, darker sub-regions represent higher peak Z-scores. sub-regions with a relatively stable Z-score had peak Z < 2.5. Within the sub-regions that showed increasing Z, some sub-regions have peak Z < 3, and some have peak Z " - can you be more explicit about the 1 to 4 categories shown on Figure 2c?
Figure 2b: the asterisks are really small
Line 159: The multivariate analysis should probably be introduced in the methodological section, with the results explained in the Results section. A Discussion section could then be added before the Conclusion.
4.1 Multivariate Analysis: Can you clean up this paragraph? There are several discrepancies that make it hard to follow, e.g., "Community persistence exhibits positive correlations with mean displacement (-0.53)" and "Moreover, precipitation has a strong positive correlation by 0.63 with precipitation".
Figure 3: Could you add a ROC plot and AUC score from the Z-score and events?
Line 202: "we analyzed two time periods:Nov 2015 to Nov 2022 and Nov 2015 to Feb 2023" - the relationship between the two analysis periods, and their purpose, remains unclear.
Line 206: The 97% comes out of the blue and is not convincing (as you point out), and, from what I understood, it considers a single threshold. See my comment above on the ROC curve. Could you iterate the threshold with various scoring metrics to identify an optimal threshold and provide a better selling point for your method?
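A minimal sketch of this suggestion, with placeholder scores and labels rather than the manuscript's data: treat peak Z as a classifier score, compute ROC/AUC, and sweep the threshold with a chosen scoring metric instead of fixing it at a single value.
```python
# Threshold sweep over a placeholder peak-Z score vs. binary event labels.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, f1_score

rng = np.random.default_rng(3)
# Synthetic peak-Z values: 40 stable sub-regions, 10 with landslide events.
peak_z = np.concatenate([rng.normal(1.5, 0.8, 40), rng.normal(3.2, 0.8, 10)])
event = np.concatenate([np.zeros(40, dtype=int), np.ones(10, dtype=int)])

auc = roc_auc_score(event, peak_z)
fpr, tpr, thresholds = roc_curve(event, peak_z)

# Pick the threshold maximizing, e.g., Youden's J or F1, rather than assuming 2.5.
youden_best = thresholds[np.argmax(tpr - fpr)]
f1_best = max(thresholds, key=lambda t: f1_score(event, (peak_z >= t).astype(int)))

print(f"AUC = {auc:.2f}, Youden-optimal Z = {youden_best:.2f}, "
      f"F1-optimal Z = {f1_best:.2f}")
```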
Line 217: Correct "(Oregon State Univeristy, 2015) with velocity and slope in S2" - presumably Supplementary Information 2?
Line 221: What is the result of the WRF-Hydro model doing in the Conclusion section? It seems out of place.
The Conclusion reads more like a combined Discussion and Conclusion, and it lacks specific recommendations for practical implementation.
Overall, the landslide inventories and their uses remain unclear and need to be addressed.
Citation: https://doi.org/10.5194/egusphere-2024-2979-RC1
RC2: 'Comment on egusphere-2024-2979', Anonymous Referee #2, 09 Apr 2025
Review of Desai et al. – “Using Network Science to Evaluate Vulnerability of Landslides on Big Sur Coast, California, USA”
This manuscript explores the use of network science to analyze landslide failure potential along the Big Sur Coast, with a focus on the concept of "community persistence" across subregions. While the use of network science is innovative and the figures are visually appealing, I have significant concerns regarding terminology, scale, and the clarity and coherence of the manuscript's structure.
Major Comments
- The use of the term vulnerability is bizarre. In the risk assessment framework, vulnerability has a well-defined meaning: people, property, infrastructure, resources, or environments that are particularly exposed to adverse impacts from a hazard event. The authors appear to conflate this with hazard, which would be the more appropriate term in the context of slow-moving landslides. This issue persists throughout the manuscript and should be addressed comprehensively.
- The analysis is conducted at a subregional scale of 5 km², yet the landslide processes typically affect areas of a few tenths of a km². This mismatch raises questions about the sensitivity and appropriateness of the method for capturing relevant slope dynamics. Additionally, the definition of subregions is not clearly justified, especially the inclusion of “varied terrain” within single units. From a monitoring perspective, this scale is difficult to reconcile with actionable insights, and no clear rationale is provided for not working at the scale of slope units or other, more geomorphologically meaningful divisions. This should also form a clear part of the manuscript's discussion.
- From my perspective, the methodology making use of network science, community persistence, nodes, etc. seems overly complex without delivering clear added value, especially since, ultimately, the Z-score based on the ‘community persistence’ metric only considers slope and surface velocity. Why not include the other factors introduced in the paper? The exclusion is neither explained nor justified.
- The manuscript would benefit from significant restructuring. Results and discussion are overly interwoven, and there is actually little discussion of the results or methods. Also, the Conclusion introduces new analyses rather than synthesizing findings, which undermines the clarity and scientific rigor of the narrative. Moreover, several key analytical steps, such as the multivariate analysis, are presented in the Results rather than the Methods section, reducing transparency.
- It must be made clearer from the abstract and throughout the manuscript that the methodology targets large, deep-seated, slow-moving landslides. The focus on four specific landslides (e.g., W22-3) is not well-motivated in the Introduction.
Section-specific comments
Abstract
- What do you mean by ‘which provides a natural way to incorporate spatiotemporal dynamics’?
Introduction
- The introduction could be more comprehensive, particularly in contextualizing the relevance of network science to landslide analysis.
- Paragraph 30: Clarify what network science techniques entail and justify their application here.
- Again, reconsider the use of vulnerability.
- Clarify the rationale behind focusing on the four landslides of W22-3; it is not clear in the Introduction why you focus on these four and not the 44 others.
Data
- Paragraph 50: The initial detection of 44 active landslides is mentioned twice and then largely ignored. Either omit this information or incorporate it meaningfully in the analysis, e.g., by assessing their impact on community persistence in other subregions.
- Paragraph 60–65: Rather than stating volumes removed, provide basic information about landslide size. Paul's Slide, for example, appears much larger than the others, which likely impacted the analysis. This should also be covered in the discussion section.
- It feels premature to present the number of landslides detected before explaining the InSAR methodology.
- Figure 1: Increase the size of panel (a), reduce legend clutter, and adjust scale values in panel (c) for cleaner presentation.
- The term “InSAR snapshots” is unclear—consider replacing with “deformation maps” or similar.
- How did you define your subregions? Is it relevant to include ‘varied terrain’ in a subregion? Also, from a monitoring perspective, how do you deal with information at a 5 km² scale?
- Paragraph 115: Why are only slope and velocity used in community persistence? This choice should be justified more explicitly.
- Paragraph 130: “We use peak Z to quantify differences between sub-regions to better classify the slow-moving landslides as stable (peak Z < 2.5) or vulnerable (peak Z > 2.5).” Clarify whether you are classifying subregions or landslides based on peak Z.
- Landslide susceptibility is approximated using slope alone. Why not use susceptibility models? I guess there are many available for the region.
Results
- Figure 2: Clearly show the correspondence between Zt point size and value. Make the color scheme and value thresholds more intuitive (e.g., emphasize the vulnerable vs. stable divide).
- How do you rule out that Paul’s Slide is detected so clearly simply because it is much larger than the others? This point deserves to be discussed.
Discussion & Conclusion
- Results, discussion, and conclusion are overly interwoven, and there is ultimately little discussion of the results or methods.
- How did you define the subregions? Is it relevant to include ‘varied terrain’ in a subregion? Also, from a monitoring perspective, how do you deal with information at a 5 km² scale (e.g., to ‘potentially allowing for preemptive monitoring and mitigation measures’)? How do changes in slope velocity (of a few cm/yr) over a few tenths of a km² influence the maximum or averaged velocities over subregions of that size? Why not work, e.g., over slope units?
- Paragraph 205: The Conclusion should not introduce new results; instead, it should synthesize key findings and implications.
Final Recommendation
This manuscript addresses an important (and complex) aspect of landslide risk, aiming to combine satellite remote sensing and a network science framework into a comprehensive monitoring technique. However, I see fundamental issues with terminology, scale, methodological clarity, and manuscript structure that significantly limit its current impact.
Citation: https://doi.org/10.5194/egusphere-2024-2979-RC2
Data sets
Data from: Using network science to evaluate vulnerability of landslides on Big Sur Coast, California, USA Vrinda D. Desai and Alexander L. Handwerger https://doi.org/10.5061/dryad.1jwstqk42
Model code and software
networkLandslide Vrinda D. Desai https://github.com/vddesai-97/networkLandslide.git
Viewed
Since the preprint corresponding to this journal article was posted outside of Copernicus Publications, the preprint-related metrics are limited to HTML views.
HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
70 | 0 | 0 | 70 | 0 | 0
Viewed (geographical distribution)
Country | # | Views | % |
---|---|---|---|
United States of America | 1 | 34 | 47 |
China | 2 | 9 | 12 |
Belgium | 3 | 5 | 6 |
Germany | 4 | 5 | 6 |
Spain | 5 | 4 | 5 |