the Creative Commons Attribution 4.0 License.
Exploring the decision-making process in model development: focus on the Arctic snowpack
Abstract. The Arctic poses many challenges to Earth System and snow physics models, which are unable to simulate crucial Arctic snowpack processes, such as vapour gradients and rain-on-snow-induced ice layers. These limitations raise concerns about the current understanding of Arctic warming and its impact on biodiversity, livelihoods, permafrost and the global carbon budget. Recognizing that models are shaped by human choices, eighteen Arctic researchers were interviewed to delve into the decision-making process behind model construction. Although data availability, issues of scale, internal model consistency, and historical and numerical model legacies were cited as obstacles to developing an Arctic snowpack model, no opinion was unanimous. Divergences were not merely scientific disagreements about the Arctic snowpack, but reflected the broader research context. Inadequate and insufficient resources, partly driven by short-term priorities dominating research landscapes, impeded progress. Nevertheless, modellers were found to be both adaptable to shifting strategic research priorities – an adaptability demonstrated by the fact that interdisciplinary collaborations were the key motivation for model development – and anchored in the past. This anchoring led to diverging opinions about whether existing models are “good enough” and whether investing time and effort to build a new model was a useful strategy when addressing pressing research challenges. Moving forward, we recommend that both stakeholders and modellers be involved in future snow model intercomparison projects in order to drive developments that address snow model limitations that currently impede progress in various disciplines. We also argue for more transparency about the contextual factors that shape research decisions. Otherwise, the reality of our scientific process will remain hidden, limiting the changes necessary to our research practice.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-2926', Anonymous Referee #1, 25 Jan 2024
Review of “Exploring the decision-making process in model development: focus on the Arctic snowpack” by Menard et al.
This is a somewhat unusual manuscript, submitted as a “Research article” for consideration in The Cryosphere. The “unusual” aspect is that, where most research articles focus on measurement data and/or simulations, this study reports on interviews conducted with experts in the field of Arctic snow, [L96-97] “to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling, …“
Here, the issues identified include the somewhat troublesome transferability of modeling approaches between lower latitudes and polar regions; the limited availability of data from the Arctic suitable for model development, parameterization development, and calibration and validation; the historical underrepresentation of Arctic snow in snow model development environments; the lack of attention to, and thus funding for, the problem; and inadequate approaches.
Major comments:
- I think it is an interesting concept to access knowledge that does not normally find its way to the broader community in the form of manuscripts. However, I think there are some methodological problems that devalue this manuscript from a “Research article” to an opinion piece.
- First of all, the selection of participants was seemingly done very subjectively, and is not transparent for the reader. The only procedural aspect mentioned here is [L118] “CM, SR and IM compiled a shortlist of participants”. I wish there had been some objective criteria, for example a random pick of first authors of papers published over the last ten years whose abstracts mention “snow”, “arctic” and “modeling”, based on a database like Scopus, ISI Web of Knowledge or Google Scholar.
- Second, the manuscript relies heavily on quotes from the interviews. The full interview transcripts are, understandably, not released. Thus this could potentially result in heavy cherry-picking of quotes by the first three authors. Apparently, the interview transcripts have been coded using NVivo, but it is not clear how this has further been used. It is not clear what attempts were made for objective analysis of the interview transcripts.
- Third, I’m concerned that quotes from the interviewed scientists are published without fact-checking whether they are true. This results in a few false statements, for example that “CROCUS is an avalanche model” [L237], or that [L282-283] “[Models] are limiting the number of [snow] layers for computational stability and efficiency”, which for Crocus or SNOWPACK, for example, would be trivially easy to adjust. [L373-375] “In my sense, large scale climate modellers aren't sufficiently aware of snow. (…) There are so many people who don't care about that“. The first part of this statement is an opinion. The second part is stated as a fact: “There are so many people who don't care about that”. I would like to see evidence for that.
- Lastly, the Interview consent form states: “Access to the interview transcript will be limited to the research team: Dr Menard, University of Edinburgh; Dr Sirpa Rasmus, University of Lapland; Dr Ioanna Merkouriadi, Finnish Meteorological Institute.” Yet the list of co-authors further encompasses the majority of interviewed scientists. I cannot see how this can be objective. I think interviewees have the right to review their quotes, such that they can verify that no misunderstandings or misrepresentations have occurred. But I fail to understand how the interviewees can also be co-author. On the one hand, they have no access to the other interview transcripts, thus cannot reliably judge if this was a proper reporting of what was said in the interviews, but more importantly, as author they have direct impact on which quotes from them are selected, and how they are presented. That means that this manuscript basically has become a vehicle to get their own opinions across, which I think doesn’t align with what is expected for a “Research article”. On top of that, they obviously have full access to their own interview, but not to the other interviews. I cannot see how this can properly result in a good co-authorship, when the majority of underlying data is inaccessible to the co-author. I cannot see a scenario where this leads to proper scientific conduct for a peer-reviewed “Research article”. Unfortunately, I don’t see how these methodological flaws can be corrected, and I think the manuscript should be rejected as a peer-reviewed “Research article”. It may find an outlet as an opinion piece.
- I also struggled with understanding the modeling environment that the authors were considering. I found that the manuscript paints a picture of this environment that simply didn’t resonate with me. For example, when I read: [L549-553] “Yet, models are a product of one or multiple modelers’ vision. This was reflected in the interviews during which many participants often mentioned the name of the model creator or lead developer instead of, or as well as, the model’s name. The research identity of many modellers is, whether they want it or not, intertwined with their model; inviting authors to reflect about their positionality would allow modelers to regain control over their own narrative and research identity.” My personal experience is completely different. Thinking about the snow model I work with most, which is widely used and recognized in various cryosphere communities, basically all major model developments in the last 15 years were done by PhD students and PostDocs, most of whom have since moved on. So their “research identities” stretch way beyond “their model”. I think when asked, very few of the PhD students would describe the model as “their model”. In fact, even though they contributed most significantly to model developments, I doubt they would describe their role as that of a “modeller”. The model I’m most familiar with has almost no dedicated, long-term model developers or code maintainers. The large majority of recent code changes (last 15 years) has been made by people on contracts lasting less than a few years. The original “model creators”, in the meantime, have taken up different research fields, retired or taken up other roles in academia. For the model I work with most, no “lead developer” can be identified. Thus, I struggle to accept this proposed narrative of “model creators” or “lead developers”, as well as the concept of “their model”, at face value. It needs to be supported by data and analysis.
For example, by analyzing model code repositories and investigating how many people contributed how much to the code, and in what role. That would give the necessary underpinning of this narrative. I’m now curious if the model ecosystem I work with is the exception, or the rule. It could also signal a bias in the selection of participants for the interviews.
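A first pass at the kind of repository analysis suggested here could be as simple as parsing `git shortlog` output into per-contributor commit counts. The sketch below is only illustrative: the function names, author names and commit counts are fabricated; in practice the input would come from running `git shortlog -sn --no-merges` inside a model's repository.

```python
import re
from collections import OrderedDict

def parse_shortlog(text):
    """Parse `git shortlog -sn` output into an ordered {author: commits} map.

    Each line looks like '   120\tJane Doe': a commit count, a tab,
    then the author name.
    """
    counts = OrderedDict()
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\t(.+)", line)
        if m:
            counts[m.group(2).strip()] = int(m.group(1))
    return counts

def contribution_shares(counts):
    """Return each author's share of total commits, as a fraction."""
    total = sum(counts.values()) or 1
    return {author: n / total for author, n in counts.items()}

# Fabricated example output; a real study would also need to map
# contributors to roles (PhD student, postdoc, permanent staff) by hand.
sample = "   120\tPhD Student A\n    45\tPostdoc B\n     5\tOriginal Creator\n"
counts = parse_shortlog(sample)
shares = contribution_shares(counts)
```

Restricting the log with options such as `--since` would allow the same count to be computed per period, which is what a claim like "most recent development was done by short-term staff" would need.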
- Further, it is written: [L96-97] “The aim of this study is to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling, ...“. Given that my personal experience is that most development is done by researchers on PhD or other short-term contracts, I think a lot of issues were mentioned that they have no control over, like funding or the historical legacy of models. In contrast, very little was reported on the experiences and choices made by PhD students or other short-term contracted researchers over the course of their model development efforts. I think it plays a role here that those researchers seem to be absent from the pool of interviewees.
- I found that the manuscript was lacking context. It feels like it is assumed that the readers understand the problems with snow models in the Arctic. There is very little substantiation of these problems (basically restricted to L86-95). In my opinion, it fails to properly introduce the problems to the reader. Furthermore, I found context lacking in what the past decades have seen in model development and projects focusing on Arctic snowpacks. In modern-day science, which is highly project-driven, national funding agencies are one of the major sources of funding for model development. There is a lot of emphasis in the manuscript on lack of funding, lack of long-term perspective, focus on other regions than the Arctic, as well as a strong sentiment that these “modellers” supposedly live in their own world. I would have expected to read much more about efforts undertaken by the Arctic snow community to support model development. How many proposals did they submit? How many were funded? How much of these funds was allocated for model development? I would have expected to see more hard data on this. Also more concrete information about decision making. I.e., if a proposal contains a modeling component, what model is selected and why? How is it decided where to focus energy on model development? Right now, the manuscript comes across as a lot of complaining and finger-pointing, but a bit more reporting on one’s own activities, including some concrete and objective data on funding, money spent, etc., would be expected given the goal set forth by the authors. Here, for example: [L116-118] “In these discussions, it became clear that the current snow models fell short in representing all the Arctic snowpack processes needed by project collaborators.” I personally think that this is not the right approach to tackle this.
In my opinion, one cannot start a project, and then expect there to be a free, open-source model that fits one's needs, with proper documentation and an email address that you can send all your questions to with efficient response times. That is just an unrealistic expectation. In my opinion “modeling” should be an undertaking done by the community as a whole, where everyone contributes knowledge, expertise, skills, data, etc.
- [L182] “I'm sick of modelers who think the world is a computer screen” This quote is a confirmation for me about the big problem of accessibility to fieldwork, combined with the “hero”-status attached to fieldwork (Nash et al, 2019). Many research positions including fieldwork ask for previous fieldwork experience, or, alternatively, “outdoor experience”. Particularly back-country skiers, (alpine) climbers, and hikers have an edge in securing snow-related fieldwork. And we know that "the outdoors" notoriously lacks diversity (e.g., Winter et al., 2020, Ho and Chang, 2022). Fieldwork is mostly accessible for PhD students, or senior scientists with previous fieldwork experience. Model developers often lack access to participating in fieldwork, and people without access to fieldwork mostly concentrate on doing modeling work. It’s important to note here that even when possibilities arise, fieldwork is not a safe environment for everyone (Marín-Spiotta et al., 2020), and that could be prohibitive for participation. The fact of the matter is that many researchers will never go to the field for a variety of reasons, which may require rethinking of the status of fieldwork (e.g., Bruun et al., 2023). The message delivered in this manuscript is mostly one-directional: [L96-97] “The aim of this study is to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling”, combined with the statement “I'm sick of modelers who think the world is a computer screen”. I so wish the authors would have written “by the Arctic snow community” instead of “by modellers”. I found this diversity, equity and inclusion aspect overwhelmingly missing from the manuscript. I will further detail my sentiments here in the “Epilogue” below.
Minor comments:
- Several statements and wordings are vague.
- [L96-97] “The aim of this study is to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling.” See also my major concern #3. I think more effort is needed to document and quantify the progress that has been made, such that it can be objectively concluded whether or not this constitutes “progress”. As it stands, this statement carries little weight. In fact, the problems with snow modeling in the Arctic are poorly introduced in the manuscript. Only L88-95 discuss this aspect, and then only marginally.
- [L294-296] “When I speak to large scale modellers about rain on snow, the feedback is usually ‘we are aware that something needs to be done, but we have other priorities and we don’t have resources for this’. It’s not straightforward.”
I think I understand what this is about because of my expertise, but for reaching a broader audience, it should be made explicit. Please specify what the issues with rain-on-snow are. Is it the precipitation phase separation rain vs snow, is it the runoff from a snowpack, is it the formation of ice lenses? Also, academia is almost fully project driven, so why not write a proposal, or provide funding otherwise, for a model developer to work on improving the “rain-on-snow” problems in a model? I think this also relates to my major concern #3, listed above, regarding missing context.
- [L372-373] “the first thing it would do is alert the modelers to the difficulties that they have in the Arctic that, in the absence of these evaluations, they wouldn't even know about…“ Please provide examples. The statement suggests that the interviewee knows about difficulties that the modelers supposedly don’t know about. I deem it inadequate to publish statements like this in a paper without sufficient backing by examples, preferably from the peer-reviewed literature.
- [L310-311] “I mean, the idea that you're going to create an arctic snow model in a PhD is...?!“
This is an incomplete sentence, and I’m not sure what I need to fill in at the “…?!”. Please add some explanation here.
- [L537-538] “Some users of [our model], they probably don't know what they're doing, and sometimes a paper comes where I say ???”
Please fill in the “???” here. With my social background, I think I understand what “???” and “?!” is supposed to indicate, but for non-native English speakers, I think there is a risk here that they don’t get the implicit message.
- There were a few quotes that I think are wrong, and I wonder if there should not be an editorial comment that the statement is deemed inaccurate.
- For example, looking at the publications involving Crocus over the last 10 years, I don’t think the statement [L237] “But, I mean Crocus, it's an avalanche model, right?” is accurate.
- Similarly, [L282-L284] “[Models] are limiting the number of [snow] layers for computational stability and efficiency so they are not respecting the way in which the snow pack is actually built up i.e. in episodic snowfall events, which will form different layers (…)”. For models like Crocus and SNOWPACK, it is trivially easy to avoid a limiting number of snow layers. I think it is important to make an editorial remark, since otherwise, false information gets propagated.
- Extensive use of the term “Modeller”: I’m not sure the word “modeler” is meaningful. Even the authors seem to have an ambivalent definition, defining it both as “model developer” [L127] and as someone “with expertise in modeling” [L128]. I think there is a substantial difference between the two. Note that in L132, both SPM and LSM “modelers” are defined as “model developers”. Personally, I think labeling someone as a “modeler” often attaches an identity to an individual where this is not justified. It also has unclear meaning. Is it someone who uses the model, someone who develops for the model, or someone who maintains the model code? Is someone who has used a model once in their research career already a “modeler”, or is it someone who uses models in more than, let’s say, 50% of their research? I would rather see more exact wording being used, specifically focusing on the role someone has, like “model user”, “model developer” or “model maintainer”. I think IPCC rightfully avoids the word modeler (referring to L546). Thinking about roles avoids attaching an identity to a researcher, while allowing one to encapsulate the common situation where researchers take up different roles during their career, or even within a single project.
- [L427-428]: “We argue that efforts to represent Arctic snowpack processes would pave the way in the research areas highlighted below for new interdisciplinary collaborations”. What follows are three rather specific research directions. Not that I want to argue about their relevance; it is just unclear why those three are listed, and who set these priorities. Did this come out of the interviews as well?
Epilogue
I also would like to stress that the manuscript contained quite a lot of material that came across to me as somewhat “aggressive”. I would like to make the authors aware that it left me with the impression of a poorly working field, with a lack of communication and collaboration and a missing cooperative mindset.
Examples:
[L182] “I'm sick of modelers who think the world is a computer screen”
In fact, many scientists have no other choice but to focus on modeling, since fieldwork in polar regions is generally poorly accessible (Nash et al., 2019, Karplus et al., 2022). I know scientists who would give an arm and a leg to go to the field just once, and probably doing so would increase the quality of their model development efforts considerably. The phrasing of this statement suggests that the scientist never considered that they could have made an effort to bring the “modelers who think the world is a computer screen” in closer contact with the real world, instead of saying that they are “sick” of them.
[L184-185] “The[se] models spend so much time doing things that aren't very important for lots of applications that they're kind of worthless“
Claiming that work done by fellow scientists is worthless, because it doesn’t fit one’s own needs, is detrimental to a healthy, open and welcoming academic atmosphere, I think.
[L537-538] “Some users of [our model], they probably don't know what they're doing, and sometimes a paper comes where I say ???”
First of all, I’m not really sure what I have to fill in at the “???”, but I assume it is some negative sentiment. In these cases, reaching out to those users can be of great help to the users, and would foster exchange of knowledge, and, again, an open and welcoming academic environment.
[L374-375] “In my sense, large scale climate modellers aren't sufficiently aware of snow. (…) There are so many people who don't care about that“
I find this quite the accusation that those people don’t care. Please provide evidence that they don’t care, for example from reviews of proposals and/or manuscripts. Did papers in fact get rejected because reviewers claim that snow is irrelevant? I’m skeptical that that is the case.
[L96-97] “The aim of this study is to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling, ...“
I understand that the phrasing “(or is it “any”?)” is catchy, but it comes across as a bit dismissive towards publications from, let’s say, the last 10 to 20 years documenting improvements in modeling approaches, some of which are cited in the manuscript. I would strongly encourage more precise wording. Which objective has not been achieved (yet)? Also, this phrasing implies that “modelers” are to blame for the supposedly slow progress. In fact, the manuscript discusses very few decisions made by “modelers” (interpreted by me here as model developers). And also in light of the sentences I have listed above, I think this is unfair. There seems to be a lack of healthy collaboration in the field. I am also aware that there is a big issue with accessibility (diversity and inclusion) to fieldwork that, in my opinion, plays a role here. There are also funding agencies and hiring decisions that I think are to blame for a lack of resources for model development. Some of those are addressed in the manuscript, some are not. But it would have been better to phrase the aim of the study as: “The aim of this study is to understand why decisions made by the Arctic snow community all over the world and over the past decades have not led to more progress in Arctic snowpack modelling, ...“
I put this feedback in the “Epilogue” because, for me, it is not relevant to whether or not the manuscript could be published as a scientific research article, but I hope the authors become aware that including statements like these unfortunately left me with the impression that the field of Arctic snow is a somewhat unhealthy environment, with a missing collaborative mindset. In a way, I think it’s already a problem that these sorts of things apparently have been said in the interviews, but maybe this was simply the heat of the moment. Maybe the interviewees expressed themselves somewhat awkwardly because they also felt like they were in an informal private conversation.
It is also very possible that context or tone went missing in the transcription and the quote selection for the manuscript. One could argue that it may be important to report about such sentiments in the field, since it can signal problems hindering progress. However, it would require proper context, including identifying this as a problem, and proposing pathways forward to resolve such conflicts. I think that the authors should seriously consider the purpose, and effect, of including statements like these in the manuscript. In my opinion, it doesn’t reflect well on the Arctic snow community, and I refuse to believe that this is the message the authors wanted to get across.
References:
- Bruun, J. M., & Guasco, A. (2023). Reimagining the ‘fields’ of fieldwork. Dialogues in Human Geography, 0(0). https://doi.org/10.1177/20438206231178815
- Ho, Y. C. J., & Chang, D. (2022). To whom does this place belong? Whiteness and diversity in outdoor recreation and education. Annals of Leisure Research, 25(5), 569–582. https://doi.org/10.1080/11745398.2020.1859389
- Karplus MS, Young TJ, Anandakrishnan S, et al. Strategies to build a positive and inclusive Antarctic field work environment. Annals of Glaciology. 2022;63(87-89):125-131. doi:10.1017/aog.2023.32
- Marín-Spiotta, E., Barnes, R. T., Berhe, A. A., Hastings, M. G., Mattheis, A., Schneider, B., and Williams, B. M.: Hostile climates are barriers to diversifying the geosciences, Adv. Geosci., 53, 117–127, https://doi.org/10.5194/adgeo-53-117-2020, 2020.
- Nash M, Nielsen HEF, Shaw J, King M, Lea MA, et al. (2019) “Antarctica just has this hero factor…”: Gendered barriers to Australian Antarctic research and remote fieldwork. PLOS ONE 14(1): e0209983. https://doi.org/10.1371/journal.pone.0209983
- Winter, P.L.; Crano, W.D.; Basáñez, T.; Lamb, C.S. Equity in Access to Outdoor Recreation—Informing a Sustainable Future. Sustainability 2020, 12, 124. https://doi.org/10.3390/su12010124
Citation: https://doi.org/10.5194/egusphere-2023-2926-RC1
AC1: 'Reply on RC1', Cécile Ménard, 05 Feb 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2023-2926/egusphere-2023-2926-AC1-supplement.pdf
RC2: 'Comment on egusphere-2023-2926', Anonymous Referee #2, 16 Mar 2024
This paper covers an important topic—how decisions are made in the context of modeling of Arctic snowpack. By providing insight into the influences on decisions throughout the modeling process the paper contributes to a minimally understood feature of modeling practice, as in most cases, the decision-making is not explicitly documented, nor are the reasons that justify particular decisions. However, there are some issues with the manuscript that need to be addressed:
I urge the authors to consider some of the philosophical literature on decision-making in modeling, which mainly concerns climate models but applies to the discussions and perhaps the interpretations of some of these qualitative findings.
- Several philosophers working on issues in climate science have detailed how values (i.e., interests) influence decision-making through the course of model development, but none of that literature is referenced here despite the high relevance to the topic of discussion. I recommend looking at Parker and Winsberg (2018), Parker (2014), and Morrison (2021), specifically chapter 3 of the latter. The research by these scholars discusses how interests (subjective preferences) and features of the modeling context (pragmatics) influence decision-making in the course of climate modeling, including choices about modeling purposes and priorities, what and how to represent features of the target system, the suitability of observations and metrics for model assessment and validation, etc. The authors might also consider looking at the Shackley article “Epistemic Lifestyles in Climate Change Modeling” (2001). I suggest adding elements from these papers to the paragraph starting in line 64 or including an additional paragraph to capture the discussions in philosophy on these topics. You might also find that the insights from certain papers are relevant to specific sections as well (for example, Parker 2014 for the section on data availability and resources).
- And, concerning tradeoffs, see work by Levins, mainly “The Strategy of Model-building in population biology” (1966). I note that the subject of modeling is different, but Levins’ thesis applies to the modeling of complex systems generally and is thus related to the discussion in 3.1.1.
- Concerning the disagreement about models being “good enough” for current research problems: an article by Lloyd, Bukovsky, and Mearns (2020) deals with similar disagreement about the value of different modeling systems in relation to different sets of research questions. Those authors argue that the reason for disagreements about the value of regional versus global models is that the two camps have different research questions and the representational features of the models are different. So each does not take the representational features of the other type of model to be valuable for their questions, and vice versa. I wonder if something similar is going on here; this frame might be useful, and might even help in analyzing the lack of unanimity in the responses to the questions that were asked. The interviewees have different interests, are asking different questions, and have different local epistemologies (Longino 2002 and Morrison 2021). (Where the authors talk about identity, this seems akin to local epistemologies.)
Appreciate the content-context distinction, however, I wonder if you can separate them, and would appreciate more consideration of the way research context, understood more generally than “identity” in the paper, shapes perception of modeling practice, etc. I am also not sure whether the analysis from Staddon (2017) on the distinction between professional and personal is fitting here. Again, I think these responses are a function of differences in the context in which these individuals conduct research and the local epistemologies they are part of. For example, with “I’m sick of modelers who think the world is a computer screen” this is a rejection of the attitude of being focused on the modeling world as opposed to the empirical world, which can be reduced to differences in one’s scientific ontology and epistemic values. And “these models spend so much time…” this can be interpreted as someone who is more of a pluralist about models and their application, as opposed to part of the paradigm by which models are seen as fit-for-purpose for a limited number of intentionally chosen applications….in other words, it’s not necessarily the “identities” of the researchers that come out in these quotes, but rather, the diversity of local epistemologies that can be found in Arctic modeling, and the disagreement that arises from this diversity. I appreciate the information in the intro to section 3 but think you could do more to shed light on the significance of sharing these sorts of quotes from your interviews. A different frame for your discussion might add depth and significance.
In the same vein as the above comments, I think philosophical discussions can help to frame your results. For example, consider the somewhat reductive interpretation of the quote at the beginning of 3.2.1: prioritization is a feature of scientific practices, including modeling, being driven by human interests, with certain elements of the complex systems we investigate being more or less important relative to those interests. While resources are limited, human beings are also inherently value-driven, and if they don’t perceive something as related to their interests, they will deprioritize it; yes, the practical constraints make this more apparent, but they aren’t the sole cause of prioritization in science. There are an infinite number of questions we could ask, and we will see value in some and ignore others. I think this is what the quote you have chosen here is getting at, with the “we have other priorities” AND “we don’t have resources”, i.e., there are two reasons for not tackling the problem: first, it is inconsistent with what they care about in modeling, and second, there aren’t resources, and these compound one another. Longino’s discussions of modeling complex systems in her 2002 book would be helpful here. This is an example of one place in the manuscript where the interpretation of qualitative evidence can be aided by appealing to philosophical discussions from the philosophy of science in practice (i.e., Longino and others have done empirical studies to draw their conclusions; it’s not “armchair” analysis).
The comments on short-termism are incredibly important, appreciate their explicit inclusion, and wonder if more can be said about the implications of this current paradigm in funding procedures…
I am a bit confused about the discussion of the anchoring bias…it appears a bit vague in what the bias is in itself, and I am not sure that the explanation in the first paragraph makes it clear what it is. I think it is the judged adequacy of the models, based on historical model features and development, in relation to some purpose, which can shift when one’s interests or research questions change (which the authors hint at in lines 375–379). I think this is what is being said also in the case that community efforts can lead to shifts in these anchors…community comparison projects foster interdisciplinary discourse on model capabilities and limitations, which can presumably highlight inadequacies in relation to priority research questions. This section could be clearer, especially with respect to what it is about the existing models that function as a reference point for judging the value of different future development efforts. The section should also conclude with a clear summary of the argument the authors seek to make given the statement in the first paragraph: “anchoring contributed largely to the absence of Arctic snow processes in existing models”.
In conclusion, this is a valuable study and provides significant empirical insight into understudied and implicit components of modeling of climate features generally. However, I think work needs to be done with the framing of the findings from the study and their discussion. I strongly suggest bringing in philosophical work on modeling to help add depth and detail to the discussion.
References:
Levins, R. (1966). The strategy of model building in population biology. American scientist, 54(4), 421-431. (see also, for updated discussions: Weisberg, M. (2006). Forty years of ‘the strategy’: Levins on model building and idealization. Biology and Philosophy, 21, 623-645. and Matthewson, J. (2011). Trade-offs in model-building: A more target-oriented approach. Studies in History and Philosophy of Science Part A, 42(2), 324-333.)
Lloyd, E. A., Bukovsky, M., & Mearns, L. O. (2021). An analysis of the disagreement about added value by regional climate models. Synthese, 198(12), 11645-11672.
Longino, H. E. (2002). The fate of knowledge. Princeton University Press. (See chapter 8 for local epistemologies and differences between different investigative communities, which is relevant to your discussion.)
Morrison, M. A. (2021). The models are alright: A socio-epistemic theory of the landscape of climate model development. Indiana University.
Parker, W. (2014). Values and uncertainties in climate prediction, revisited. Studies in History and Philosophy of Science Part A, 46, 24-30.
Parker, W. S., & Winsberg, E. (2018). Values and evidence: how models make a difference. European Journal for Philosophy of Science, 8, 125-142.
Shackley, S. (2001). Epistemic lifestyles in climate change modeling.
Citation: https://doi.org/10.5194/egusphere-2023-2926-RC2
AC2: 'Reply on RC2', Cécile Ménard, 04 Apr 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2023-2926/egusphere-2023-2926-AC2-supplement.pdf
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-2926', Anonymous Referee #1, 25 Jan 2024
Review of “Exploring the decision-making process in model development: focus on the Arctic snowpack” by Menard et al.
This is a somewhat unusual manuscript, submitted as a “Research article” for consideration in The Cryosphere. The “unusual” aspect is that, where most research articles focus on measurement data and/or simulations, this study reports on interviews conducted with experts in the field of Arctic snow, [L96-97] “to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling, …“
Issues identified here include the somewhat troublesome transferability of modeling approaches between lower latitudes and polar regions, and the limited availability of Arctic data suitable for model development, parameterization, calibration and validation. Other issues are the historical underrepresentation of Arctic snow in snow model development environments, the lack of attention to, and (thus) funding for, the problem, and inadequate approaches.
Major comments:
- I think it is an interesting concept to access knowledge that is normally not finding its way to the broader community in the form of manuscripts. However, I think there are some methodological problems that devalue this manuscript from a “Research article” to only an opinion piece.
- First of all, the selection of participants was seemingly done very subjectively and is not transparent to the reader. The only procedural aspect mentioned is [L118] “CM, SR and IM compiled a shortlist of participants”. I wish there had been some objective criteria, for example a random pick of first authors of papers published over the last ten years that mention “snow”, “arctic” and “modeling” in the abstract, based on a database like Scopus, ISI Web of Knowledge or Google Scholar.
- Second, the manuscript relies heavily on quotes from the interviews. The full interview transcripts are, understandably, not released. Thus this could potentially result in heavy cherry-picking of quotes by the first three authors. Apparently, the interview transcripts have been coded using NVivo, but it is not clear how this has further been used. It is not clear what attempts were made for objective analysis of the interview transcripts.
- Third, I’m concerned that quotes from the interviewed scientists are published without fact-checking whether they are true. This results in a few false statements, for example that “CROCUS is an avalanche model” [L237], or that [L282-283] “[Models] are limiting the number of [snow] layers for computational stability and efficiency”, which for Crocus or SNOWPACK, for example, would be trivially easy to adjust. [L373-375] “In my sense, large scale climate modellers aren't sufficiently aware of snow. (…) There are so many people who don't care about that“. The first part of this statement is an opinion; the second part is stated as a fact: “There are so many people who don't care about that”. I would like to see evidence for that.
- Lastly, the interview consent form states: “Access to the interview transcript will be limited to the research team: Dr Menard, University of Edinburgh; Dr Sirpa Rasmus, University of Lapland; Dr Ioanna Merkouriadi, Finnish Meteorological Institute.” Yet the list of co-authors encompasses the majority of the interviewed scientists. I cannot see how this can be objective. I think interviewees have the right to review their quotes, so that they can verify that no misunderstandings or misrepresentations have occurred, but I fail to understand how the interviewees can also be co-authors. On the one hand, they have no access to the other interview transcripts and thus cannot reliably judge whether what was said in the interviews was properly reported; more importantly, as authors they have direct influence on which of their quotes are selected and how they are presented. That means that this manuscript has basically become a vehicle for getting their own opinions across, which I think doesn’t align with what is expected of a “Research article”. On top of that, they obviously have full access to their own interview, but not to the other interviews. I cannot see how this can properly result in good co-authorship when the majority of the underlying data is inaccessible to a co-author, nor can I see a scenario where this leads to proper scientific conduct for a peer-reviewed “Research article”. Unfortunately, I don’t see how these methodological flaws can be corrected, and I think the manuscript should be rejected as a peer-reviewed “Research article”. It may find an outlet as an opinion piece.
- I also struggled with understanding the modeling environment that the authors were considering. I found that the manuscript paints a picture of this environment that simply didn’t resonate with me. For example, when I read: [L549-553] “Yet, models are a product of one or multiple modelers’ vision. This was reflected in the interviews during which many participants often mentioned the name of the model creator or lead developer instead of, or as well as, the model’s name. The research identity of many modellers is, whether they want it or not, intertwined with their model; inviting authors to reflect about their positionality would allow modelers to regain control over their own narrative and research identity.” My personal experience is completely different. Thinking about the snow model I work with most, and which is widely used and recognized in various cryosphere communities, basically all major model developments in the last 15 years were done by PhD students and PostDocs, most of whom have since moved on. So their “research identities” stretch way beyond “their model”. I think when asked, very few of the PhD students would describe the model as “their model”. In fact, even though they contributed most significantly to model developments, I doubt they will describe their role as a “modeller”. The model I’m mostly familiar with, has almost no dedicated, long-term model developers or code maintainers. The large majority of recent code changes (last 15 years) has been done by people with contracts lasting shorter than a few years. The original “model creators”, in the meantime, have taken up different research fields, retired or have taken up other roles in academia. For the model I work with most, no “lead developer” can be identified. Thus, I struggle to agree with this proposed narrative of “model creators” or “lead developers” as well as supposedly the concept of “their model” at face value. It needs to be supported by data and analysis. 
This could be done, for example, by analyzing model code repositories and investigating how many people contributed how much to the code, and in what roles. That would give the necessary underpinning to this narrative. I’m now curious whether the model ecosystem I work with is the exception or the rule. It could also signal a bias in the selection of participants for the interviews.
- Further, it is written: [L96-97] “The aim of this study is to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling, ...“. Given that my personal experience is that most development is done by researchers on PhD or other short-term contracts, I think a lot of issues were mentioned that they have no control over, like funding or the historical legacy of models. In contrast, very little was reported on the experiences and choices made by PhD students or other short-term contracted researchers over the course of their model development efforts. I think it plays a role here that those researchers seem to be absent from the pool of interviewees.
- I found that the manuscript was lacking context. It feels like it is assumed that the readers understand the problems with snow models in the Arctic. There is very little substantiation of these problems (basically restricted to L86-95). In my opinion, it fails to properly introduce the problems to the reader. Furthermore, I found context lacking in what the past decades have seen in model development and projects focusing on Arctic snowpacks. In modern-day science, which is highly project-driven, national funding agencies are one of the major sources of funding for model development. There is a lot of emphasis in the manuscript on lack of funding, lack of long-term perspective, focus on other regions than the Arctic, as well as a strong sentiment that these “modellers” supposedly live in their own world. I would have expected to read much more about efforts undertaken by the Arctic snow community to support model development. How many proposals did they submit? How many were funded? How much of these funds was allocated for model development? I would have expected to see more hard data on this. Also more concrete information about decision making. I.e., if a proposal contains a modeling component, what model is selected and why? How is decided where to focus energy on model development? Right now, the manuscript comes across as a lot of complaining and finger-pointing, but a bit more reporting on one’s own activities, including some concrete and objective data on funding, money spent, etc., would be expected given the goal set forth by the authors. Here, for example: [L116-118] “In these discussions, it became clear that the current snow models fell short in representing all the Arctic snowpack processes needed by project collaborators.” I personally think that this is not the right approach to tackle this. 
In my opinion, one cannot start a project, and then expect there to be a free, open-source model that fits one's needs, with proper documentation and an email address that you can send all your questions to with efficient response times. That is just an unrealistic expectation. In my opinion “modeling” should be an undertaking done by the community as a whole, where everyone contributes knowledge, expertise, skills, data, etc.
- [L182] “I'm sick of modelers who think the world is a computer screen” This quote is a confirmation for me about the big problem of accessibility to fieldwork, combined with the “hero”-status attached to fieldwork (Nash et al, 2019). Many research positions including fieldwork ask for previous fieldwork experience, or, alternatively, “outdoor experience”. Particularly back-country skiers, (alpine) climbers, and hikers have an edge in securing snow-related fieldwork. And we know that "the outdoors" notoriously lacks diversity (e.g., Winter et al., 2020, Ho and Chang, 2022). Fieldwork is mostly accessible for PhD students, or senior scientists with previous fieldwork experience. Model developers often lack access to participating in fieldwork, and people without access to fieldwork mostly concentrate on doing modeling work. It’s important to note here that even when possibilities arise, fieldwork is not a safe environment for everyone (Marín-Spiotta et al., 2020), and that could be prohibitive for participation. The fact of the matter is that many researchers will never go to the field for a variety of reasons, which may require rethinking of the status of fieldwork (e.g., Bruun et al., 2023). The message delivered in this manuscript is mostly one-directional: [L96-97] “The aim of this study is to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling”, combined with the statement “I'm sick of modelers who think the world is a computer screen”. I so wish the authors would have written “by the Arctic snow community” instead of “by modellers”. I found this diversity, equity and inclusion aspect overwhelmingly missing from the manuscript. I will further detail my sentiments here in the “Epilogue” below.
Minor comments:
- Several statements and wordings are vague.
- [L96-97] “The aim of this study is to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling.” See also my major concern #3. I think more effort is needed to document and quantify the progress that has been made, such that it can be objectively concluded whether or not this constitutes “progress”. As it stands, this statement carries little weight. In fact, the problems with snow modeling in the Arctic are poorly introduced in the manuscript. Only L88-95 discuss this aspect, but only very marginally.
- [L294-296] “When I speak to large scale modellers about rain on snow, the feedback is usually ‘we are aware that something needs to be done, but we have other priorities and we don’t have resources for this’. It’s not straightforward.”
I think I understand what this is about because of my expertise, but for reaching a broader audience, it should be made explicit. Please specify what the issues with rain-on-snow are. Is it the precipitation phase separation of rain vs. snow, the runoff from a snowpack, or the formation of ice lenses? Also, academia is almost fully project-driven, so why not write a proposal, or provide funding otherwise, for a model developer to work on improving the “rain-on-snow” problems in a model? I think this also relates to my major concern #3, listed above, regarding missing context.
- [L372-373] “the first thing it would do is alert the modelers to the difficulties that they have in the Arctic that, in the absence of these evaluations, they wouldn't even know about…“ Please provide examples. The statement suggests that the interviewee knows about difficulties that the modelers supposedly don’t know about. I deem it inadequate to publish a paper with statements like that without sufficient backing by examples, preferably from peer-reviewed literature.
- [L310-311] “I mean, the idea that you're going to create an arctic snow model in a PhD is...?!“
This is an incomplete sentence, and I’m not sure what I need to fill in at the “…?!”. Please add some explanation here.
- [L537-538] “Some users of [our model], they probably don't know what they're doing, and sometimes a paper comes where I say ???”
Please fill in the “???” here. With my social background, I think I understand what “???” and “?!” is supposed to indicate, but for non-native English speakers, I think there is a risk here that they don’t get the implicit message.
- There were a few quotes that I think are wrong, and I wonder if there should not be an editorial comment that the statement is deemed inaccurate.
- For example, looking at the publications involving Crocus over the last 10 years, I don’t think the statement [L237] “But, I mean Crocus, it's an avalanche model, right?” is accurate.
- Similarly, [L282-L284] “[Models] are limiting the number of [snow] layers for computational stability and efficiency so they are not respecting the way in which the snow pack is actually built up i.e. in episodic snowfall events, which will form different layers (…)”. For models like Crocus and SNOWPACK, it is trivially easy to avoid limiting the number of snow layers. I think it is important to make an editorial remark, since otherwise false information gets propagated.
- Extensive use of the term “Modeller”: I’m not sure the word “modeler” is meaningful. Even the authors seem to have an ambivalent definition, defining it both as “model developer” [L127] as well as “with expertise in modeling” [L128]. I think there is a substantial difference between both. Note that in L132, both SPM and LSM “modelers” are defined as “model developers”. Personally, I think labeling someone as a “modeler” often attaches an identity to an individual, where this is not justified. It also has unclear meaning. Is it someone who uses the model, or someone who develops for the model, or is it someone who maintains the model code? Is someone who has used a model once in their research career already a “modeler”, or is it someone who uses models in more than, let’s say, 50% of their research? I would rather like to see more exact wording being used, specifically focusing on the role someone has. Like “model user”, “model developer” or “model maintainer”. I think IPCC rightfully avoids the word modeler (referring to L546). But thinking about roles avoids attaching an identity to a researcher, while allowing to encapsulate the common situation where researchers can take up different roles during their career, or even within a single project.
- [L427-428]: “We argue that efforts to represent Arctic snowpack processes would pave the way in the research areas highlighted below for new interdisciplinary collaborations”. What follows are three rather specific research directions. Not that I want to argue about their relevance; it is just that context is missing on why those three are listed. Who set these priorities? Did this come out of the interviews as well?
Epilogue
I also would like to stress that the manuscript contained quite some material that to me came across as somewhat “aggressive”. I would like to make the authors aware that it left me with the impression of a poorly working field, with a lack of communication, collaboration and a missing cooperative mindset.
Examples:
[L182] “I'm sick of modelers who think the world is a computer screen”
In fact, many scientists have no other choice but to focus on modeling, since fieldwork in polar regions is generally poorly accessible (Nash et al., 2019, Karplus et al., 2022). I know scientists who would give an arm and a leg to go to the field just once, and probably doing so would increase the quality of their model development efforts considerably. The phrasing of this statement suggests that the scientist never considered that they could have made an effort to bring the "modelers who think the world is a computer screen” in closer contact with the real world, instead of saying that they are “sick” of them.
[L184-185] “The[se] models spend so much time doing things that aren't very important for lots of applications that they're kind of worthless“
Claiming that work done by fellow scientists is worthless, because it doesn’t fit one's own needs, is detrimental to a healthy, open and welcoming academic atmosphere, I think.
[L537-538] “Some users of [our model], they probably don't know what they're doing, and sometimes a paper comes where I say ???”
First of all, I’m not really sure what I have to fill in at the “???”, but I assume it is some negative sentiment. In these cases, reaching out to those users can be of great help to the users, and would foster exchange of knowledge, and, again, an open and welcoming academic environment.
[L374-375] “In my sense, large scale climate modellers aren't sufficiently aware of snow. (…) There are so many people who don't care about that“
I find this quite the accusation that those people don’t care. Please provide evidence that they don’t care, for example from reviews of proposals and/or manuscripts. Did papers in fact get rejected because reviewers claim that snow is irrelevant? I’m skeptical that that is the case.
[L96-97] “The aim of this study is to understand why decisions made by modellers all over the world and over the past decades have not led to more (or is it “any”?) progress in Arctic snowpack modelling, ...“
I understand that the phrasing “(or is it “any”?)” is catchy, but it comes across as a bit dismissive towards publications from, let’s say, the last 10 to 20 years documenting improvements in modeling approaches, some of which are cited in the manuscript. I would strongly encourage more precise wording. Which objective has not been achieved (yet)? This phrasing also implies that “modelers” are to blame for the supposedly slow progress. In fact, the manuscript discusses very few decisions made by “modelers” (interpreted by me here as model developers). And also in light of the sentences I have listed above, I think this is unfair. There seems to be a lack of healthy collaboration in the field. I am also aware that there is a big issue with accessibility (diversity and inclusion) to fieldwork that, in my opinion, plays a role here. There are also funding agencies and hiring decisions that I think are to blame for a lack of resources for model development. Some of those are addressed in the manuscript, some are not. But it would have been better to phrase the aim of the study as: “The aim of this study is to understand why decisions made by the Arctic snow community all over the world and over the past decades have not led to more progress in Arctic snowpack modelling, ...“
I put this feedback as an “Epilogue” because, for me, it is not relevant to whether or not the manuscript could be published as a scientific research article, but I hope the authors become aware that including statements like these unfortunately left me with the impression that the field of Arctic snow is a somewhat unhealthy environment, with a missing collaborative mindset. In a way, I think it’s already a problem that these sorts of things apparently have been said in the interviews, but maybe this was simply the heat of the moment. Maybe the interviewees expressed themselves somewhat awkwardly because they also felt like they were in an informal private conversation.
It is also very possible that context or tone went missing in the transcription and the quote selection for the manuscript. One could argue that it may be important to report about such sentiments in the field, since it can signal problems hindering progress. However, it would require proper context, including identifying this as a problem, and proposing pathways forward to resolve such conflicts. I think that the authors should seriously consider the purpose, and effect, of including statements like these in the manuscript. In my opinion, it doesn’t reflect well on the Arctic snow community, and I refuse to believe that this is the message the authors wanted to get across.
References:
- Bruun, J. M., & Guasco, A. (2023). Reimagining the ‘fields’ of fieldwork. Dialogues in Human Geography, 0(0). https://doi.org/10.1177/20438206231178815
- Ho, Y. C. J., & Chang, D. (2022). To whom does this place belong? Whiteness and diversity in outdoor recreation and education. Annals of Leisure Research, 25(5), 569-582. https://doi.org/10.1080/11745398.2020.1859389
- Karplus, M. S., Young, T. J., Anandakrishnan, S., et al. (2022). Strategies to build a positive and inclusive Antarctic field work environment. Annals of Glaciology, 63(87-89), 125-131. https://doi.org/10.1017/aog.2023.32
- Marín-Spiotta, E., Barnes, R. T., Berhe, A. A., Hastings, M. G., Mattheis, A., Schneider, B., & Williams, B. M. (2020). Hostile climates are barriers to diversifying the geosciences. Adv. Geosci., 53, 117–127. https://doi.org/10.5194/adgeo-53-117-2020
- Nash, M., Nielsen, H. E. F., Shaw, J., King, M., Lea, M. A., et al. (2019). “Antarctica just has this hero factor…”: Gendered barriers to Australian Antarctic research and remote fieldwork. PLOS ONE, 14(1), e0209983. https://doi.org/10.1371/journal.pone.0209983
- Winter, P. L., Crano, W. D., Basáñez, T., & Lamb, C. S. (2020). Equity in Access to Outdoor Recreation—Informing a Sustainable Future. Sustainability, 12, 124. https://doi.org/10.3390/su12010124
Citation: https://doi.org/10.5194/egusphere-2023-2926-RC1
AC1: 'Reply on RC1', Cécile Ménard, 05 Feb 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2023-2926/egusphere-2023-2926-AC1-supplement.pdf
RC2: 'Comment on egusphere-2023-2926', Anonymous Referee #2, 16 Mar 2024
This paper covers an important topic—how decisions are made in the context of modeling of Arctic snowpack. By providing insight into the influences on decisions throughout the modeling process the paper contributes to a minimally understood feature of modeling practice, as in most cases, the decision-making is not explicitly documented, nor are the reasons that justify particular decisions. However, there are some issues with the manuscript that need to be addressed:
I urge the authors to consider some of the philosophical literature on decision-making in modeling, which mainly concerns climate models but applies to the discussions and perhaps the interpretations of some of these qualitative findings.
- Several philosophers working on issues in climate science have detailed how values (i.e., interests) influence decision-making through the course of model development, but none of that literature is referenced here despite its high relevance to the topic of discussion. I recommend looking at Parker and Winsberg (2018), Parker (2014), and Morrison (2021), specifically chapter 3 of the latter. The research by these scholars discusses how interests (subjective preferences) and features of the modeling context (pragmatics) influence decision-making in the course of climate modeling, including choices about modeling purposes and priorities, what features of the target system to represent and how, the suitability of observations and metrics for model assessment and validation, etc. The authors might also consider looking at the Shackley article “Epistemic Lifestyles in Climate Change Modeling” (2001). I suggest adding elements from these papers to the paragraph starting in line 64, or including an additional paragraph to capture the discussions in philosophy on these topics. You might also find that the insights from certain papers are relevant to specific sections (for example, Parker 2014 for the section on data availability and resources).
- And, concerning tradeoffs, see work by Levins, mainly “The Strategy of Model-building in population biology” (1966). I note that the subject of modeling is different, but Levins’ thesis applies to the modeling of complex systems generally and is thus related to the discussion in 3.1.1.
- Concerning the disagreement about models being “good enough” for current research problems: Lloyd, Bukovsky, and Mearns (2021) deal with a similar disagreement about the value of different modeling systems in relation to different sets of research questions. They argue that disagreement about the value of regional versus global models arises because the two camps have different research questions and the representational features of the models differ, so each does not take the representational features of the other type of model to be valuable for their questions, and vice versa. I wonder if something similar is going on here, so this frame might be useful…and might even be useful for analyzing the lack of unanimity in the responses to the questions that were asked. The interviewees have different interests, are asking different questions, and have different local epistemologies (Longino 2002 and Morrison 2021). (Where the authors talk about identity, this seems akin to local epistemologies.)
I appreciate the content-context distinction; however, I wonder if you can separate them, and would appreciate more consideration of the way research context, understood more generally than “identity” in the paper, shapes perceptions of modeling practice, etc. I am also not sure whether the analysis from Staddon (2017) on the distinction between professional and personal is fitting here. Again, I think these responses are a function of differences in the context in which these individuals conduct research and the local epistemologies they are part of. For example, “I’m sick of modelers who think the world is a computer screen” is a rejection of the attitude of being focused on the modeling world as opposed to the empirical world, which can be reduced to differences in one’s scientific ontology and epistemic values. And “these models spend so much time…” can be interpreted as coming from someone who is more of a pluralist about models and their application, as opposed to part of the paradigm in which models are seen as fit-for-purpose for a limited number of intentionally chosen applications. In other words, it is not necessarily the “identities” of the researchers that come out in these quotes, but rather the diversity of local epistemologies that can be found in Arctic modeling, and the disagreement that arises from this diversity. I appreciate the information in the intro to section 3, but think you could do more to shed light on the significance of sharing these sorts of quotes from your interviews. A different frame for your discussion might add depth and significance.
In the same vein as the above comments, I think philosophical discussions can help to frame your results. Consider, for example, the somewhat reductive interpretation of the quote at the beginning of section 3.2.1. Prioritization is a feature of scientific practice, including modeling, being driven by human interests, with certain elements of the complex systems we investigate being more or less important relative to those interests. While resources are limited, human beings are also inherently value-driven: if they do not perceive something as related to their interests, they will deprioritize it. Practical constraints make this more apparent, but they are not the sole cause of prioritization in science. There are an infinite number of questions we could ask, and we will see value in some and ignore others. I think this is what the quote you have chosen is getting at, with “we have other priorities” AND “we don’t have resources”: there are two compounding reasons for not tackling the problem — first, it is inconsistent with what they care about in modeling, and second, there are no resources. Longino’s discussion of modeling complex systems in her 2002 book would be helpful here. This is one place in the manuscript where the interpretation of qualitative evidence can be aided by appealing to philosophical discussions from the philosophy of science in practice (i.e., Longino and others have done empirical studies to draw their conclusions; this is not “armchair” analysis).
The comments on short-termism are incredibly important, and I appreciate their explicit inclusion; I wonder whether more can be said about the implications of this current paradigm in funding procedures…
I am somewhat confused by the discussion of the anchoring bias: the first paragraph leaves it vague what the bias itself consists of. I take the anchor to be the judged adequacy of the models, based on historical model features and development, in relation to some purpose — a judgment which can shift when one’s interests or research questions change (as the authors hint at in lines 375–379). I think this is also what is being said in the claim that community efforts can lead to shifts in these anchors: community comparison projects foster interdisciplinary discourse on model capabilities and limitations, which can presumably highlight inadequacies relative to priority research questions. This section could be clearer, especially with respect to what it is about the existing models that functions as a reference point for judging the value of different future development efforts. The section should also conclude with a clear summary of the argument the authors seek to make, given the statement in the first paragraph that “anchoring contributed largely to the absence of Arctic snow processes in existing models”.
In conclusion, this is a valuable study that provides significant empirical insight into understudied and implicit components of climate modeling generally. However, I think work needs to be done on the framing of the study’s findings and their discussion. I strongly suggest bringing in philosophical work on modeling to help add depth and detail to the discussion.
References:
Levins, R. (1966). The strategy of model building in population biology. American Scientist, 54(4), 421–431. (See also, for updated discussions: Weisberg, M. (2006). Forty years of ‘the strategy’: Levins on model building and idealization. Biology and Philosophy, 21, 623–645; and Matthewson, J. (2011). Trade-offs in model-building: A more target-oriented approach. Studies in History and Philosophy of Science Part A, 42(2), 324–333.)
Lloyd, E. A., Bukovsky, M., & Mearns, L. O. (2021). An analysis of the disagreement about added value by regional climate models. Synthese, 198(12), 11645–11672.
Longino, H. E. (2002). The fate of knowledge. Princeton University Press. (See chapter 8 for local epistemologies and differences between different investigative communities, which is relevant to your discussion.)
Morrison, M. A. (2021). The models are alright: A socio-epistemic theory of the landscape of climate model development (Doctoral dissertation). Indiana University.
Parker, W. (2014). Values and uncertainties in climate prediction, revisited. Studies in History and Philosophy of Science Part A, 46, 24–30.
Parker, W. S., & Winsberg, E. (2018). Values and evidence: how models make a difference. European Journal for Philosophy of Science, 8, 125–142.
Shackley, S. (2001). Epistemic lifestyles in climate change modeling. In C. A. Miller & P. N. Edwards (Eds.), Changing the Atmosphere: Expert Knowledge and Environmental Governance. MIT Press.
Citation: https://doi.org/10.5194/egusphere-2023-2926-RC2
- AC2: 'Reply on RC2', Cécile Ménard, 04 Apr 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2023-2926/egusphere-2023-2926-AC2-supplement.pdf
The requested preprint has a corresponding peer-reviewed final revised paper; readers are encouraged to refer to the final revised version.