The Destination Earth digital twin for climate change adaptation
Abstract. The Climate Change Adaptation Digital Twin (Climate DT), developed as part of the European Commission’s Destination Earth (DestinE) initiative, is a pioneering effort to build an operational climate information system in support of adaptation. This system produces global climate simulations with local granularity, providing information at scales that matter for decision-making. The Climate DT delivers multi-decadal climate simulations at spatial resolutions of 5–10 km, with hourly outputs, offering globally consistent, frequently updated data. The km-scale simulations address some limitations of current climate models, improving local granularity and reducing longstanding biases, supporting more equitable (understood as accessible and relevant across regions) and credible climate information. The Climate DT is built on cutting-edge infrastructure, expert collaboration, and digital innovation. It supports real-time, on-demand responses to policy questions, with quantified uncertainty. It fosters interactivity by allowing users to influence simulation design, model outputs, and applications through co-design. AI-based tools, including emulators and chatbots, are being developed to enhance flexible scenario exploration and ease climate information access. Sector-specific applications are embedded in the system to generate tailored climate-impact indicators, with examples for energy, water, and forest management. The applications have been co-designed with informed users. An important innovation is the use of high-resolution storylines. These are physically consistent simulations of extreme events under different climate conditions that provide contextual insights to support concrete adaptation decisions. A unified workflow across platforms orchestrates all components, ensuring automation, containerisation for portability, and traceability. The unified data management ensures consistency and eases the usability of the data. Data is delivered using standard grids (HEALPix) at high frequency (hourly) and follows a strict governance policy. Streaming enables real-time data use and unlocks access to the unprecedented data produced by the high-resolution simulations. Monitoring tools provide real-time quality control of both data and models, as well as diagnostics during Climate DT operation. The compute-intensive system is powered by world-class supercomputing capabilities through a strategic partnership with the European High Performance Computing Joint Undertaking (EuroHPC). Despite high computational demands, the Climate DT sets a new benchmark for delivering equitable, credible, and actionable climate information. In this way it complements existing initiatives like CMIP, CORDEX, and national and European climate services, and aligns with global climate science goals for climate adaptation.
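As an illustration of what delivery on a HEALPix grid involves, here is a minimal sketch (assuming the healpy library; NSIDE = 1024 is only indicative of km-scale pixel spacing and is not necessarily the operational Climate DT setting):

```python
# Minimal sketch, assuming the healpy library; NSIDE = 1024 is illustrative only.
import numpy as np
import healpy as hp

nside = 1024
npix = hp.nside2npix(nside)                # number of equal-area pixels on the sphere
pix = np.arange(0, npix, npix // 8)        # a few sample pixel indices
lon, lat = hp.pix2ang(nside, pix, nest=True, lonlat=True)   # pixel centres in degrees
print(f"{npix} pixels, ~{hp.nside2resol(nside, arcmin=True):.1f} arcmin per pixel")
```

Equal-area pixels and hierarchical (nested) indexing are what make the same grid convenient for both global statistics and chunked, streamed access.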
Status: open (until 12 Nov 2025)
- RC1: 'Comment on egusphere-2025-2198', Anonymous Referee #1, 15 Sep 2025
- RC2: 'Comment on egusphere-2025-2198', Pier-Luigi Vidale, 14 Oct 2025
Review of The Destination Earth digital twin for climate change adaptation
By F. Doblas-Reyes et al.
This is potentially a very important paper, detailing the many developments behind the creation of the first European Climate Adaptation Digital Twins, part of the DestinE programme, as well as the overall progress of the DestinE programme against the initial aims and ambitions.
Like many in our community, I have been accompanying the DestinE journey and have been looking forward to learning about its methods, challenges, and achievements, as well as lessons for the future.
While reading the paper, understanding it to be one that purports to lay out the technical foundations and then report on current achievements, I continued to expect to learn in good detail about the journey, the challenges, the breakthroughs, and the concrete outcomes so far. All of these elements are there, albeit in far less detail than I had expected from a technical paper.
At this time, I find myself wondering whether this is a vision paper (though we have already had a few at the inception of the programme), or a high-level overview paper aiming to attract new users to the programme (which I think belongs elsewhere), or, finally, whether more could be done with the current manuscript to provide a detailed technical description of what the DestinE Digital Twins and their enabling technologies are.
I do believe this to be an important paper in principle, so I am going to provide some concrete suggestions to help steer it towards its main objectives.
I find that the most important sections are:
SECTION 3: workflows, and the ability to deploy several models and data analyses in a unified way over multiple platforms
SECTION 4: seamless/homogeneous data access
SECTION 5: production of first set of projections and storylines, including their quality assessment
SECTION 6: the impacts applications
In the interest of readability, and of effectively communicating the extremely impressive and important achievements in the sections above, I suggest a substantial reduction of the introduction, motivation, etc., which contain material already available in other papers. I also suggest shortening Section 7, which contains many principles and ambitions, albeit too little in terms of concrete future plans, and is thus not as informative as the previous sections. Can more focussed and more tangible ideas be introduced about future plans, based on the lessons learned so far?
I think that it would strengthen the paper to provide more evidence in Sections 3-6, particularly in terms of why some key decisions were taken, e.g. from the point of view of performance, portability, longevity, sustainability of the solutions.
I also think that the paper could be improved in a few areas.
1. Concise, relevant and effective communication.
The paper contains many repetitions, particularly with regard to its high-level objectives. For instance, the word “equitable” is repeated six times, within sentences that are quite similar to each other, yet the paper provides very little to demonstrate that this goal has been achieved. This principle is a crucially important part of the original aims, but are we at the stage where any progress can be measured? The word “credible”, used five times, has more of a chance of being supported by evidence from the programme’s achievements so far, and we read about quality assurance, but very little detail is provided, which is a pity. Next is the word “salient”, used four times, but there is not much evidence in the paper that the information provided so far is indeed salient. Such value judgements may be applied in the future, once the community has truly adopted the DestinE products, but it seems too early to insist on them at this time.
The “summary and conclusions” section contains very long bullet points, covering nearly two pages, and repeats much of what has already been said in the main body of the paper. The information presented is, once again, rather programmatic, instead of focussing on the essence of what has been achieved in the project so far.
Some of the key figures are too small, too low in resolution, and use fonts that are too small. Examples: Figs. 1, 2, and 6.
I suggest:
- stating the high-level aims only once, at the very start of the paper, especially since we have already had papers about the DestinE vision in the past
- summarising progress, based on evidence, in this DestinE phase, against those initial aims
- shortening the summary and conclusions to 1/2 - 2/3 of a page
- making some of the key figures larger and more readable: using them to illustrate key concepts and technologies as soon as possible
2. Reporting on groundbreaking technical progress
The multiple technological choices made during the development and demonstration phases of the programme are introduced several times over, albeit initially in a qualitative, unspecific way. Later in the paper we start to learn the names of some of the key technologies and what they can do, albeit not the first time they are mentioned, which is rather confusing. The one exception is the discussion of HEALPix, which is all in one place, and quite complete.
I also find that a few important things are left undefined, e.g. “bias adjustment” in Fig. 1 and “bias correction algorithm” in Fig. 2. There is a brief mention of some examples around lines 491-495, but which ones are routinely used in the workflow?
Moreover, there are some clear areas of groundbreaking progress that deserve more prominence:
- the climate models, which are now more portable, faster, and more energy efficient, and, from other evidence I have seen, are producing better results, in many areas, than typical CMIP models. Please discuss these advances in more detail than you currently do.
- the unified workflow manager: being able to run several models at once through a single workflow manager is obviously attractive. However, what are the detailed tradeoffs in terms of speed, portability, resilience, convenience to operators and end-users? Should such a tool be adopted by the entire community?
  - part of the above could be showing an example of the output of one of the quality assurance diagnostic/metrics tools, while more detailed scientific analyses should be left for specialised papers.
- the new data translation and analysis tools, workflows, etc.
- the discovery environment, starting with the data catalogues, sample notebooks/scripts, data analysis platforms, and including the AI “chatbot”.
- the platforms for running end-user models: what are they, what do they enable?
I suggest:
- providing full details of each technological solution the first time it is mentioned
  - consider building a concise table with the names of the key technologies, what they are built on, and what they deliver
- explaining why these technologies have been chosen over traditional ones: what are the tradeoffs?
- altering Fig. 2 and Fig. 4 to contain the names of the key enabling technologies
3. Users, uptake, co-design etc.
I suggest that you inform on:
- How many users have signed up to the services so far?
- How much resource have they used so far, in terms of both CPU and storage?
- What is the capacity of the system(s), now and in the future: how many users can it accommodate, envisaging a range of uses with different requirements?
4. Future outlooks
I found a lack of concrete guidance on the next steps. While I appreciate the importance of convincing new users to come to the platforms and use/co-design the DestinE data, I had expected to learn about the technologies developed to support such new interactions; that is, what can prospective users expect to find in terms of resources, tools, longevity of the data, and stability and longevity of the platforms themselves?
I suggest that you point people to some of the key resources and on how to apply for them, as well as explaining what specific type of support users can expect in the future. Finally, it would be important to draw a roadmap for users to be in the position of effectively influencing/designing future experiments.
Specific points:
Line 111: do also mention what carrying such responsibilities on behalf of the community means for individual career progression, and how these advances can break that cycle.
Line 117: can you be more specific when you say what could be done to complement CMIP and CORDEX? For instance, do you mean generation of short bursts of large ensembles at HR using emulators empowered by ML?
Lines 159-164: you talk about being trustworthy, but as far as we know many of the model codes are not open-source. From what is stated, the workflows are fully accessible, so does this mean that users may ask to replicate experiments and/or test them for robustness? More generally, what are the key ingredients of this trustworthiness?
Line 185: there are two concepts here, timely and routine production: what are their exact definitions in this (currently semi-operational, going towards operational) context?
Line 187: Unprecedented: can you quantify and put in context?
Lines 189-: you start to talk about data streaming and efficient data handling; what do these mean in practice? What is the baseline, and what are the requirements for DestinE to be declared a success?
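For instance, I would expect something along the lines of the sketch below (my own illustration, not the DestinE interface), in which an impact indicator is updated as each hourly field arrives, rather than only after the full run has been archived:

```python
# Illustrative sketch only, not DestinE's streaming interface: update an impact
# indicator on the fly as hourly fields arrive, without archiving the run first.
from typing import Iterator
import numpy as np

def hourly_fields(n_hours: int) -> Iterator[np.ndarray]:
    """Stand-in generator for fields arriving one simulated hour at a time."""
    rng = np.random.default_rng(0)
    for _ in range(n_hours):
        yield rng.standard_normal((180, 360))   # hypothetical 1-degree field

daily_max = None
for field in hourly_fields(24):
    daily_max = field if daily_max is None else np.maximum(daily_max, field)

print("daily maximum computed on a grid of shape", daily_max.shape)
```

Stating the baseline (post-archive batch processing) against which such a pattern is compared would make the claimed efficiency gains measurable.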
Line 195: “Design OF adaptation strategies”
Line 205: “DT infrastructure is its flexibility”: what does this mean in practice? Is the following text just a definition and/or aspirational, or have you implemented those things? It is not clear whether this paragraph is still introductory, or reporting on what has been accomplished. There are some undefined concepts, such as “fast testing of climate adaptation options” (line 211). What does it mean concretely?
Lines 212 and 237: these are examples of instances in which tools are mentioned, albeit not named nor defined, so the sentences read as rather vague and uninformative.
Line 264: is “Autosubmit” the tool you were mentioning before? Is it the only one? Where does it stand internationally? Should it be adopted by others? Can it?
Line 294: you introduce the idea of speed of access. What is the speed required to enable all the DestinE objectives, and what has been achieved so far, both in local node modality and for distributed access?
Line 299: Particular features are: particular features of what? Also, what is “lazily loaded”?
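If “lazily loaded” refers to the usual xarray/dask pattern, as in the sketch below (my own illustration; the file and the variable name "t2m" are hypothetical), please say so explicitly and quantify the benefit:

```python
# Minimal illustration of lazy loading with xarray + dask; file and variable
# names are hypothetical, not the Climate DT's actual products.
import xarray as xr

ds = xr.open_dataset("climate_dt_2m_temperature.nc", chunks={"time": 24})
monthly_mean = ds["t2m"].resample(time="MS").mean()   # still lazy: no data read yet
result = monthly_mean.compute()                       # data is read and reduced here
```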
Line 313: over-precision: do you mean over-resolution? Resolution does not necessarily result in precision.
Line 327: You have “Yet … yet”; maybe start with “Even”?
Line 340: If any of the critical checks fail (instead of fails)
Lines 405 and following: this is an important piece of information, and it deserves more prominence, as well as discussion. These figures are very impressive, but are these SYPD (simulated years per day) values sufficient to enable the DestinE objectives? It would be important to tie these statistics to what is said around line 412. Also, is there scope for even higher speed, or should resources now be dedicated to ensembles?
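For context, my own back-of-the-envelope arithmetic (not figures from the paper) on what a given throughput implies for production schedules:

```python
# Back-of-the-envelope only: wall-clock days needed for a multi-decadal run at a
# given throughput in simulated years per day (SYPD). Values are illustrative.
def wallclock_days(simulated_years: float, sypd: float) -> float:
    return simulated_years / sypd

for sypd in (0.5, 1.0, 2.0):
    print(f"{sypd} SYPD -> {wallclock_days(30, sypd):.0f} wall-clock days for 30 simulated years")
```

Relating the reported SYPD to such wall-clock estimates, and to the ensemble sizes envisaged, would make the case for the chosen throughput much more concrete.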
Lines 442 and following: this is an important part of the methodology, and it would be good to explain in more detail what the technique offers and what its limitations are. For example, moving towards the generation of ensembles, what are the implications of using this technique?
Line 542: define data bridge as soon as it is used, possibly around line 355, so that this sentence can be understood.
Line 794: Synergies require identification of synergies. This seems circular. Can you rephrase?
Citation: https://doi.org/10.5194/egusphere-2025-2198-RC2
This paper presents an impressive undertaking to address the significant gaps in the ability to provide information from climate models that is user-relevant and accessible. The extent of the technical challenges that have been addressed by this project to fill these needs is incredible. I see this as a framework that could potentially be expanded to S2S predictions as well. There are a few minor suggestions that I have around making this paper more accessible to a broader audience.
1. There are a lot of acronyms, project names, and technical jargon throughout this paper. Any effort to reduce this would make the paper more accessible.
2. It comes across as if this system can do everything in the climate information space. As with anything that is trying to accommodate many users and meet a wide range of needs, I expect there are some limitations and challenges to doing this. Any effort to discuss these would be helpful.
3. I realize this paper and journal are focused on geoscience model development. I also think the user-focused approach and capabilities are such an important part of this work that they could be expanded a little more here. It seems like a complicated system. How does a user get training to set up and use the system for their specific needs and get involved in co-production efforts?