This work is distributed under the Creative Commons Attribution 4.0 License.
A protocol for vulnerability and exposure assessment in rapid extreme event attribution studies
Abstract. Over the past decade, the number of rapid extreme event attribution studies has increased substantially, in both frequency and speed of completion, with studies often released within a couple of weeks of an extreme weather event. Rapid analysis of vulnerability and exposure is a key complement to the hazard analysis in these studies, ensuring a more holistic understanding of the drivers of observed impacts. In this paper, we present a method for rapid vulnerability and exposure assessment developed by the World Weather Attribution group. It centres on the development and use of hazard-specific vulnerability and exposure assessment templates, applied during a rapid literature review covering media, grey and academic literature. These templates are used in conjunction with local expert judgement to give the assessment of potential drivers both breadth and depth. This protocol supports the systematic integration of vulnerability and exposure into rapid attribution studies, strengthening their ability to inform the public about changing climate risks.
Status: open (extended)
- RC1: 'Comment on egusphere-2025-6568', Alexandre Pereira Santos, 13 Feb 2026
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 219 | 94 | 20 | 333 | 30 | 13 | 17 |
The preprint presents a protocol for rapid assessment of vulnerability and exposure in impact attribution studies. Its relevance lies in producing secondary evidence within a short time window after an extreme weather event, which can summarise otherwise dispersed evidence, foreground non-climatic drivers of exposure and vulnerability (e.g., socioeconomic or physical vulnerability), and direct future research. The draft does not include a clear research question, despite positioning itself fairly within rapid attribution practice. Furthermore, this article sits at the science-policy or science-practice interface, as evidenced by its pragmatic epistemic stance, the short time span for analysis, and its stated relevance to humanitarian and disaster-response communities. The paper presents a brief context from the literature, the WWA rapid vulnerability and exposure assessment method, three case studies, a recent literature review, a short discussion, and an even shorter conclusion section.
The main merit of the work lies in submitting a long-practised protocol (over 10 years of use are mentioned) to the scrutiny of the scientific community. Combined with the role rapid attribution plays in shaping subsequent public and scientific discussion, this makes the work highly relevant. The general structure of the draft is not traditional, but it works fine. There is, however, uneven detail across the sections, leaving important gaps that I will address in detail. Additionally, the manuscript, in its current form, lacks an explicit research goal and external criteria for evaluating the effectiveness of the protocol (i.e., drawn from previous research), and offers only a limited discussion of the added benefits and potential risks of adopting the protocol. I will expand on these topics next.
In terms of structure, the text is very well written and generally very clear. The main gaps lie in the uneven space devoted to the introduction (comparatively large), the short description of the methods (which should be the focus of a protocol paper), and the general discussion. The introduction does a good job of contextualizing the need to consider exposure and vulnerability in attribution studies in general. It does not, however, acknowledge any conditions, previous studies, or lessons that point to the requirements for vulnerability and exposure protocols in this context. For instance, the preprint states that rapid assessments are scoping in nature, but is there a minimum quantity and quality of information that is necessary? Have other protocols been evaluated that could provide lessons and reasons for improvement? What changes when considering exposure and vulnerability (e.g., new requirements, challenges)? Overall, the introduction is too general to guide the methods presentation beyond the intrinsic motivations for making better assessments (which are abundant). To be systematic, however, this evaluation should not be limited to intrinsic criteria, but should address extrinsic ones, which stem from the literature.
This lack of external references seems to be reflected in the cursory nature of the method's presentation, which would benefit from stating what sufficiency criteria are applied at each stage. In other words, how does the protocol guide the assessment of the quality of evidence acquired during its application? As it stands, the protocol's methodological considerations sound casual, which is probably not the case in reality. For example, in section 2.2, what are the criteria for selecting the local partners? With "0-30" experts across the different cases, we are left to infer either that experts are not important (when there are none) or that one can never have enough of them (when there are 30). Qualitative studies have criteria for sufficiency and quality of evidence (e.g., sampling strategy and data or theoretical saturation) that could guide a better critique of the protocol by the authors. The overall structure of the protocol, however, is sound and makes sense. What is not clear is how its specific sequence, formulation, and guidance achieve better results than other protocols could. This also hinders the impact of the manuscript, as the added value of the protocol remains unclear.
Concerning the results, there are two issues. First, each case study is presented at a different level of detail; in the first, which is the shortest, many questions remain unanswered. A more even and systematic description of the cases, tied to the phases and key steps of the protocol, would make for a stronger presentation. The cases are there to test whether the assumptions of the protocol hold against reality, and an honest account of both well- and poorly performing steps would make for a more credible and grounded protocol presentation.
The second issue is the "update" section (number 4), which has structural, methodological, and content issues. While I appreciate that the traditional format of an academic paper is not set in stone, this section does not relate clearly to the problems raised in the introduction, nor does it respond directly to the methodological presentation. While it may have been chronologically necessary for the authors to carry out the review, that is not necessarily true for the readers: an academic paper has the burden of translating the process into a clean narrative. I am not convinced that this section could not become part of the introduction, strengthening the criteria the protocol needs to respond to. Methodologically, it suffers from a worryingly casual criterion for selecting the literature (i.e., the first 20 results). Again, finding sufficient literature is absolutely key to performing a good review, and even snowballing methods have guidelines establishing when enough evidence has been collected. Google Scholar also lacks robust classification criteria beyond citations and is not systematic enough. This issue becomes apparent in the table on page 7, which has many empty cells (e.g., no land use impact of extreme heat or cold) that seem to be an artifact of the search strategy. Moreover, the table is also very general and does little to clarify how the review improved previous versions of the protocol. The corresponding supplementary material sits at the other extreme, as it is too verbose. My suggestion is to consider whether this table could provide a good framework for the protocol's requirements, sufficiency, and quality criteria in the introduction.
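To make the point about stopping rules concrete: instead of an arbitrary cutoff such as "the first 20 results", screening can stop once a fixed number of consecutive sources contributes no new themes, a simple form of data saturation. The sketch below is illustrative only, not part of the manuscript's protocol; the threshold `k` and the representation of sources as lists of theme labels are my own assumptions.

```python
def screen_until_saturated(sources, k=5):
    """Screen sources in order and stop after k consecutive sources
    contribute no theme not already seen (a simple data-saturation
    heuristic). Each source is an iterable of theme labels.

    Returns the list of screened sources and the set of themes found.
    """
    seen = set()        # themes encountered so far
    no_new = 0          # consecutive sources adding nothing new
    screened = []
    for themes in sources:
        new = set(themes) - seen
        screened.append(themes)
        if new:
            seen |= new
            no_new = 0
        else:
            no_new += 1
            if no_new >= k:
                break   # saturation reached: stop screening
    return screened, seen
```

The point is that the stopping criterion is then defensible and reportable (a saturation threshold) rather than an artifact of the search interface.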
The discussion is good and addresses many of the themes raised in the introduction, with apt references to the case studies. There is still a lack of external criteria for assessing the limitations of the protocol (e.g., expert selection criteria, potential confirmation bias). Additionally, there is no assessment of the validity of the results produced by the protocol over its 10-year history. Even if a systematic assessment is out of scope here, a summary foregrounding the insights that would not otherwise be available would greatly strengthen the case for the protocol's adoption. Similarly, an assessment of mistakes, imprecisions, and reasons for improvement, for example by comparing rapid with longer and more robust attribution methods, would clarify the limitations and trade-offs of the approach. At a minimum, readers would expect to know whether the protocol reveals vulnerability and exposure that would otherwise be ignored or underappreciated.
Overall, the preprint addresses an extremely relevant topic but needs extensive revision of its presentation to allow a wide readership to benefit from its (apparently) significant findings. The preprint has great potential, but the authors must also reveal the added value of their approach, which is currently only hinted at rather than explicitly stated. With extensive revision, however, its quality may come to light.