An improved modelling chain for bias-adjusted high-resolution climate and hydrological projections for Norway
Abstract. About every 10 years, the Norwegian Centre for Climate Services publishes a national climate assessment report presenting updated historical climate change and climate projections towards the end of this century. This paper documents the model experiment used to generate high-resolution climate and hydrological projections for the new climate assessment report published in October 2025. The model experiment follows the standard modelling chain for hydrological impact assessment, i.e., climate model selection – downscaling and bias adjustment – hydrological modelling. However, compared with the model experiment for the climate assessment report published in 2015, all modelling components have been improved in terms of data availability, data quality and methodology. Specifically, a large climate model ensemble was available, and new criteria were developed to select tailored climate projections for Norway. Two bias-adjustment methods (one univariate and one multivariate) were applied to account for the uncertainty of method choice. The hydrological modelling was improved by implementing a physically based Penman-Monteith method for evaporation and a glacier model accounting for glacier retreat under climate change scenarios. Besides the model description, this paper elaborates on the effects of the different bias-adjustment methods and on the contributions of climate models and bias-adjustment methods to the uncertainty of climate and hydrological projections, using the RCP4.5 scenario as an example. The results show that the two bias-adjustment methods can contribute more uncertainty to seasonal projections than the climate models. The multivariate bias-adjustment method improves hydrological simulations, especially in the reference period, but cannot conserve the climate change signals of the original climate projections. The dataset generated by the presented modelling chain provides the most up-to-date, comprehensive and detailed hydrometeorological projections for mainland Norway, serving as a knowledge base for climate change adaptation for decision makers at various administrative levels in Norway.
Status: open (until 06 Jan 2026)
RC1: 'Comment on egusphere-2025-5331', Anonymous Referee #1, 08 Dec 2025
AC1: 'Quick Reply on RC1', Shaochun Huang, 15 Dec 2025
This study presents the outcomes of bias-correcting and downscaling a climate model ensemble for hydrological applications in Norway. Three emission scenarios, ten GCM/RCM combinations, and two bias corrections were considered. First, the results are evaluated based on temperature and precipitation simulations in the historical and future periods. Then, the simulations and projections of a hydrological and a glacier model are analyzed. Overall, this study is very comprehensive and well presented. As it will probably serve as a basis for many climate and hydrological modeling applications in Norway, I believe it is relevant to GMD. However, I have a number of comments regarding some of the methodological choices, as well as the need to clarify some of the presented results. Since the list of comments is extensive and may require significant effort, I recommend making major revisions before accepting the study.
Answer: We thank the reviewer for the valuable and constructive comments. Since the new analyses and figures will take some time, we post a quick reply to the comments below and will add the new figures and analyses in the final response letter.
Major comments
- I understand that, for the sake of brevity, the authors chose to present only one scenario, the "intermediate" one. However, since the SSP scenario is one of the novelties of this project compared to the previous related project, it might be worthwhile to present only the results for this scenario or to add some of the key results for this scenario in the SI.
Answer: We understand this point and really would have liked to present the SSP3-7.0 results. However, the EURO-CORDEX CMIP6 downscaling, which includes the SSP simulations, has still not been publicly released. Also, and perhaps more importantly, no publication on the SSP simulations from EURO-CORDEX is available yet (nor expected in the next weeks or months). This means that this publication would be the first one covering the SSP projections from EURO-CORDEX, which would be problematic both regarding the terms of use of the data and regarding the data availability needed to reproduce our results. We therefore decided to stick to the unproblematic EURO-CORDEX CMIP5 data in the main text and note that the paper focuses on describing the method rather than the results. In addition, we have done a similar analysis for the SSP370 projections. The effects of the bias-adjustment methods obtained from the RCP4.5 projections are similar to those for the SSP370 projections. Hence, we will mention that the conclusions on the bias-adjustment methods also apply to other scenarios. If the EURO-CORDEX SSP data are published or we get permission to use the data from the RCM groups during the review process of this manuscript, we will add some key results in the supplementary material as suggested. However, please note that the national results from the SSP370 scenario runs, including which climate models were selected and the ensemble results, are already available in the Klima i Norge report and on https://klimaservicesenter.no/climateprojections. The gridded climate and hydrological data for the SSP370 scenario are available at the Arctic Data Centre: https://adc.met.no/dataset/ceb8319c-5ebe-51cc-8359-959067daeadd.
- From the text (e.g., lines 68 and 375), it often reads as if univariate correction biases or modifies the dependence structure of the projections. On the contrary, as mentioned in line 372, univariate quantile mapping does not modify the simulated dependence of the uncorrected simulations; rather, it leaves it uncorrected. This point should be clarified to avoid confusing readers.
Additionally, it would be interesting to discuss whether univariate corrections might be sufficient if the climate models initially simulate this dependence well compared to observations, for instance, to simulate SWE (Figure 17 for EQM; probably the climate simulations resulting in the upper part of the ensemble, blue lines). Since multivariate correction degrades the preservation of the climate signal and impact modelers may use only a subset of the climate model ensemble, such a discussion could guide the selection of simulations and their interpretation.
Answer: Part 1: Agree, we should try to make that clearer.
Part 2: Figure 17 shows the average results for the whole of Norway, but the dependence simulated by each climate model varies in space and time. Since our users may be interested in only part of Norway, direct guidance on model selection based on the country-average results can be misleading. In principle, we suggest using the full ensemble of projections if possible, but we also provide general guidance on the methodology for selecting projections. Users who want to select a subset of climate models should first analyze the climate signals for their study area and periods and then select the models based on the study purpose, e.g., studies aiming to assess the driest and warmest climate conditions or the wettest and coldest conditions in the near or far future. If users only want to use the 3DBC-EQM adjusted projections, which give a better dependence between variables, they should first analyze the seasonal trends for their study area and periods using both EQM and 3DBC-EQM projections. If the trends are similar, the 3DBC-EQM adjusted projections can be sufficient; otherwise, we strongly recommend using both bias-adjusted projections.
- It is my understanding that bias correction is performed at the target resolution (1 km) rather than the native resolution (~0.11°). While I understand this choice, which was also made in similar projects such as CH2018, using statistical bias-correction techniques to make such a "jump" between scales can result in simulation artifacts, such as the overestimation of extremes and the overcorrection of the drizzle effect for area means (Maraun, 2013). While I don't think it's reasonable to make a different choice, I believe it's important for the authors to discuss this topic briefly in the discussion section. For instance, they could highlight that hypotheses about changes in processes below 0.11° resolution might not be valid and shouldn't be overinterpreted. One example is the changes in the spatial variability of rainfall extremes.
Answer: We can certainly discuss this issue briefly in the discussion section. We can, e.g., refer to the bottom-up approach adopted by Yuan et al. (2019, 2021). This two-step approach first upscales the observational data, which are at 1 km resolution, to the native model resolution, i.e. ~12 km. The upscaled dataset is used to fit the parameters of a statistical space-time model which bias-adjusts the RCM outputs. Local fine-scale climate variability is then introduced as residuals in the second step. However, this approach is computationally very demanding and does not resolve the inter-variable dependence issue.
We will add a few sentences emphasizing the importance of not 'zooming in' too much on the results at 1 x 1 km resolution, as the uncertainty generally increases with decreasing grid size. For example, the resolving power of the climate model sets a natural lower limit on which local-scale physical processes can be taken into account, and artifacts might be introduced by the statistical bias-adjustment method. Users should therefore not overinterpret results that are finer than the native RCM resolution.
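For illustration only, a minimal sketch of the two-step idea mentioned above (upscaling observations to the native RCM grid, then reintroducing fine-scale variability as residuals); this is purely schematic and not the statistical space-time model of Yuan et al. (2019, 2021), and all array names and sizes are hypothetical:

```python
import numpy as np

# Hypothetical 1 km observation field covering a 120 km x 120 km domain
obs_1km = np.random.rand(120, 120)

# Step 1: block-average to the ~12 km native RCM resolution; the upscaled
# field would be used to train the bias adjustment at the native scale
obs_12km = obs_1km.reshape(10, 12, 10, 12).mean(axis=(1, 3))

# Step 2: local fine-scale variability, here simply the 1 km departures from
# the 12 km block means, would be reintroduced as residuals afterwards
residuals_1km = obs_1km - np.kron(obs_12km, np.ones((12, 12)))
```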
- I found the choice of calibration and evaluation periods, as well as the explanations of the calibration method for the hydrological model, to be unclear. Although the model calibration seems to be the result of another study using a specific technique, I think more information is needed. For example, it is unclear if each spatial unit has a different set of parameters or if the calibration is "lumped." On line 413, it is unclear if 10 or 44 parameters are being calibrated and, again, if it is one set of parameters per gauge station or per 1 km cell. Also unclear is why the model was calibrated between 2000 and 2007 when data has been available since 1981 (Vali3 covers 1981–2014). For such a project, it would be good practice to calibrate on the "cold" period and evaluate on a warmer period to assess the temporal generalizability of the simulation, i.e., whether the model residuals depend on forcings projected to change. Finally, Figure 6a shows that the performance of Vali1 is similar to or higher than that of Cali, which is counterintuitive to me because many studies report degraded performance outside the calibration period (e.g., Guo et al., 2020). Could the input product be of higher quality after 2007?
Answer: Thank you for the constructive suggestion. We will add more information to the description. For the whole of Norway, there is only one calibrated parameter set, comprising 44 parameters. Since there are five major soil types in Norway, the total number of soil parameters is 36 (six classes (the five soil types plus glacier bed) times six soil parameters). The logic is similar for the snow parameters: there are 6 snow parameters (2 snow parameters times 3 land-use classes (deciduous forest, coniferous forest and others)). In total, there are 44 parameters, including all soil, snow, glacier and lake parameters (see the breakdown below). As a result, the parameters vary between 1x1 km grid cells due to the different combinations of soil and land-use types.
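For readability, the parameter count can be summarized as follows (assuming the two remaining parameters are the glacier and lake parameters, which is not stated explicitly above):

```
soil:    6 classes (5 soil types + glacier bed) x 6 parameters = 36
snow:    2 parameters x 3 land-use classes                     =  6
glacier and lake parameters (assumed)                          =  2
total                                                          = 44
```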
For the validation period, we fully agree that the model should be tested in a warm period, and we extend the validation to cover the period 2011-2020. In Norway, temperature has increased significantly over the past 50 years (Yang and Huang, 2023), and the last 10 years are the warmest period for most catchments. Compared to the calibration period (2000-2007), the average increase in mean temperature across all 123 catchments is about 0.43 °C in 2011-2020. Hence, we validate our model again for the last 10 years to show the model performance under warmer conditions.
Regarding input data quality, this is indeed one reason for the better validation results: the number of temperature observations has increased steadily since 2000, and in 2017 there were about twice as many stations as in 1957 (Lussana et al., 2019). As a result, more observations were used to generate the gridded seNorge data in recent years. Temperature is an important variable for hydrological modelling in Nordic cold regions, because it affects the snow processes. We will add this information on the seNorge data and discuss the validation results in the manuscript.
- Initially, it was unclear to me whether the glacier model was coupled with distHBV. I now understand that the models run independently. This should be made clearer early in the paper. Also, in areas where the glacier model is used, is the runoff evaluated? How does it compare to runoff simulated by distHBV?
Answer: We will modify Fig. 2 and the corresponding text in Section 3 to clarify this. In areas where the glacier model (DEW) is used, the simulated discharge (m3/s) and glacier mass balance were only used for model calibration and validation; they are not used for the climate projections. In addition, we replaced the DistHBV runoff (mm/day) results with DEW's only for the grid cells with glaciers, and the DEW runoff results for the other grid cells within the glacierized catchments were not used. This is because DEW uses a simpler potential evapotranspiration (PET) method and coarser land-use/soil classes than DistHBV. Due to the differences in the static input data (land use and soil type) and in the methodology for PET and glacier processes, the spatial distributions of runoff within the glacierized catchments are not comparable between DEW and DistHBV for the calibration and validation periods. However, we can show how differently the two models simulate runoff in the future period for the grid cells with glaciers. We will add a new Section 6.3 showing an example of a glacierized catchment and how we used the results from DEW for this catchment. In addition, we will add new figures showing the difference in simulated runoff between HBV and DEW for cells with glaciers in the future period.
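To make the replacement step concrete, here is a minimal sketch of how the two gridded runoff products could be combined on a common grid; the array names, shapes and mask are illustrative assumptions, not the authors' post-processing code:

```python
import numpy as np

# Hypothetical daily runoff fields (mm/day) on the same 1x1 km grid (small toy domain)
runoff_disthbv = np.random.rand(365, 100, 100)    # DistHBV runoff
runoff_dew = np.random.rand(365, 100, 100)        # DEW runoff
glacier_mask = np.zeros((100, 100), dtype=bool)   # True where a cell contains glacier

# Keep DistHBV everywhere except glacier-covered cells, where DEW replaces it;
# DEW results for non-glacier cells in glacierized catchments are discarded
runoff_combined = np.where(glacier_mask[None, :, :], runoff_dew, runoff_disthbv)
```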
- I think it would be interesting to report on the methodology used for selecting the climate model. However, I think a few elements are missing from this section. For example, the second criterion is based on Table 6 of McSweeney et al. (2015). It should explicitly state that the table is used for model selection, not just methodology. More context should be provided for the last selection criterion. The term "visually striking" is vague and should be better defined. An example plot could be added to the supplementary information (SI). Overall, it is difficult to determine which models were excluded because of the second criterion and which were excluded because of the last criterion.
Answer: Thanks for pointing this out. We will make an effort to be more concrete on these aspects and consider adding an example plot as supplementary information.
- Quite often in the paper, descriptions of the main changes for certain variables are provided (e.g., Figures 8 and 9). While I find these results interesting, I’m not sure showing such results is the purpose of the paper. As I understand it, the paper presents the methodology and evaluation of the climate ensemble, the hydrological modeling setup, and the bias correction. In my opinion, replacing these parts of the analysis with more comparison plots that provide additional information about the climate model selection or the hydrological modeling setup would be more useful (see previous comments).
Answer: We will move Figs. 9 and 15 to the supplementary material and add a comparison of the spatial distribution of the variables between the two bias-adjustment methods. New plots will also be added for the hydrological models and the climate model selection (see our replies above).
- I think a comparison of the results from the two bias correction methods would be more useful than figure 15, especially for the SWE results. This would provide more insight into the potential added value of multivariate correction over univariate correction for certain hydrological applications.
Answer: we will add a comparison in the spatial distribution of SWE between the two bias-adjustment methods (see our reply to the previous comment).
- For many applications, modelers may use only a subset of the provided projections. In light of the results presented in this analysis, I think a discussion about this is needed, as well as some guidance about good practices in that case. This would be very useful for impact modellers.
Answer: Our results only show the average for the whole of Norway, but the dependence simulated by each climate model varies in space and time. Since our users may be interested in only part of Norway, guidance on model selection based on the country-average results can be misleading. In principle, we suggest using the full ensemble of projections if possible, but we can provide general guidance on the methodology for selecting projections. Users who want to select a subset of climate models should first analyze the climate signals for their study area and periods and then select the models based on the study purpose, e.g., studies aiming to assess the driest and warmest climate conditions or the wettest and coldest conditions in the near or far future. If users only want to use the 3DBC-EQM adjusted projections, which give a better dependence between variables, they should first analyze the seasonal trends for their study area and periods using both EQM and 3DBC-EQM projections. If the trends are similar, the 3DBC-EQM adjusted projections can be sufficient; otherwise, we strongly recommend using both bias-adjusted projections.
Minor comments
- L202: I think that a clear distinction between the period used for calibration of the bias correction and the period used for calibration of the hydrological model should be made explicitly somewhere.
Answer: We will use the term 'training period' for the bias-adjustment procedure (1985-2014). 'Calibration period' refers to the calibration of the hydrological model only.
- L277: I’m wondering whether this method might introduce an overestimation bias for precipitation which might then be corrected by the bias correction method. This could be avoided by considering that one day in the 360-day system is 24.3 hours. If this is too much work to update the simulations based on this, maybe just a quick test for a few cells might be useful to check the influence of artificially adding days in the projections.
Answer: Extending a day to 24.3 hours would be difficult to implement. Besides, precipitation is given as a flux, i.e. in mm/s. To get a reasonable annual amount, this needs to be multiplied by 365(.25) * 24 * 3600, i.e. we need to insert extra days; otherwise the annual amount would only be 360*24*3600*mean(daily). Whether this has already been considered in HadGEM, i.e. whether the conversion from daily amounts to fluxes assumes days that are 24.3 h long, is beyond our knowledge. However, we consider the impact small relative to the existing biases in the RCM data (we can check for a few grid cells).
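A back-of-the-envelope check of the calendar issue raised above; the flux value is an arbitrary illustration and this is not the conversion used in the modelling chain:

```python
# Annual precipitation implied by a constant flux under the two calendar assumptions
seconds_per_day = 24 * 3600
flux = 3.0e-5                                        # hypothetical mean precipitation flux in mm/s

annual_360_days = flux * 360 * seconds_per_day       # summing only the 360 model days
annual_365_days = flux * 365.25 * seconds_per_day    # after inserting extra days, as done here
print(annual_365_days / annual_360_days)             # ~1.015, i.e. about a 1.5 % difference
```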
- L283: CH2018 is not properly referenced.
Answer: The citation is given in the report https://www.meteosvizzera.admin.ch/dam/jcr:6604da58-be19-4629-9dc0-a3d2352fbccb/CH2018_Technical_Report-compressed.pdf, which should be cited as follows:
CH2018 (2018), CH2018 – Climate Scenarios for Switzerland, Technical Report, National Centre for Climate Services, Zurich, 271 pp. ISBN: 978-3-9525031-4-0
- L297: quickly explain which method was used for the wet-day correction.
Answer: For each grid cell, a threshold value is derived such that the wet-day frequency in modelled precipitation is equal to that in the corresponding reference data for the training period. All modelled precipitation values which are below the derived threshold value are then set to zero for both training and future periods (Gudmundsson et al., 2012).
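A minimal sketch of such a wet-day correction, in the spirit of Gudmundsson et al. (2012); the 0.1 mm wet-day definition and the synthetic data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def wet_day_threshold(model_pr, ref_pr, wet_day=0.1):
    """Threshold such that the modelled wet-day frequency matches the
    reference wet-day frequency in the training period."""
    ref_wet_frac = np.mean(ref_pr >= wet_day)        # observed wet-day fraction
    # modelled quantile at the observed dry-day fraction
    return np.quantile(model_pr, 1.0 - ref_wet_frac)

# Hypothetical daily precipitation series for one grid cell (mm/day)
rng = np.random.default_rng(0)
model_pr = rng.gamma(0.5, 4.0, size=10000)
ref_pr = rng.gamma(0.4, 5.0, size=10000)

thr = wet_day_threshold(model_pr, ref_pr)
model_pr_adj = np.where(model_pr < thr, 0.0, model_pr)   # applied to training and future periods
```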
- L308: remove “which is assumed to be valid for use in the projection period” because it is mentioned earlier already.
Answer: We will remove it.
- L308: quickly explain why seasonal correction is often needed.
Answer: We are not sure what the reviewer meant. The following answer is a general comment on the reason behind seasonal correction:
When we feed biased temperature and precipitation data into a hydrological model, the simulated responses, e.g. runoff, will of course differ from the observations. If the simulated temperature is colder than observed, snowmelt will naturally start later, and the timing of snowmelt floods will be delayed as well. Depending on the precipitation biases, this might lead to higher flood magnitudes. Generally, incorrect temperature and precipitation patterns can affect snow accumulation and snowmelt patterns, which will result in a changed hydrological regime and altered seasonal flow patterns.
- L317: explain why you apply this procedure -> for trend preservation?
Answer: Yes, to better preserve the decadal trend.
- L337: Does this method affect temporal autocorrelation? (see François et al., 2020).
Answer: Yes, to make this clearer, we have added “temporal”: “3DBC adjusts the ranks for future periods according to changes in the temporal auto-correlations”. However, as also noted, in our implementation “adjustments in the variable auto-correlations for the future periods have a limited effect”.
- L342: “ansatz” is a not very frequent term to describe “approach” -> I would use “approach” here
Answer: OK.
- Table 2: reporting performance results in a table makes the reading quite difficult in my opinion. I would replace it with distributions, which would also allow visualizing more than the average over all grid cells.
Answer: A new figure will be provided instead of Table 2.
- Fig 5: is there any dependence on elevation? This seems to be a common pattern found in recent studies (Matiu et al., 2024; Astagneau et al., 2025).
Answer: Have not looked at the changes in terms of elevation. We will carry out this additional analysis and if the results are interesting, they will be included in the supplementary material.
- Table 3: The column “category” seems to be wrong: e.g. where is the snow routine category given that there is a snow melt temperature parameter? The resolution of this table is also not sharp.
Answer: We will improve this table in terms of resolution. We will change “landuse” to “snow and glacier” in the table.
- L434: is there a calibrated snow correction factor in the hydrological model? This is a common parameter for HBV. If yes it would be interesting to reflect on this underestimation considering such a correction factor.
Answer: No, in this study we tried not to correct the rainfall and snow data, partly because there would be too many parameters, which can cause equifinality problems, and partly because of the improvement of the seNorge precipitation data. We will highlight this in the manuscript.
- L445: it is not clear what is used from DEW in terms of hydrological projections. For instance, for the Engabreen catchment, will the projected discharge be given by DEW or distHBV? Figure 2 does not provide this information but maybe I missed something in the text. It is also not clear from the data provided.
Answer: This study only provides gridded data, so no discharge data from DEW was used; the discharge was only used for calibration and validation. We will add a new Section 6.3 showing an example of a glacierized catchment and how we used the results from DEW for this catchment.
- Figure 8: it seems that the spread is reduced around the year 2000. I think this is because all models are calibrated independently and brought close to observations. I think something needs to be mentioned about this. It is also interesting to see that it is almost not the case for SWE and SM (Fig. 14).
Answer: Figure 8 shows 30-year means of the bias-adjusted precipitation and temperature data. This is not an effect of model calibration in the RCMs; rather, the values are bias-adjusted such that they match the observed average in the calibration period 1985-2014. To make this clearer, we will add a sentence noting this and saying that this is the desired effect of the bias adjustment.
Runoff and ET also show a reduced spread, as they are very directly affected by either temperature or precipitation. For SWE and SM, the interplay of the individual input variables is more complex, resulting in no clear reduction in the spread. Note that using the 3DBC data, the SWE in this period is also very close to the observed value (as shown in Fig. 17), which highlights the added value of the 3DBC method.
- L499: Why is EQM designed to preserve trends? Because of the detrending process and the sliding 30-year window method? Since 3DBC acts as a postprocessor, why is the trend not preserved, at least for the marginals, if not the dependence?
Answer: That is correct. The EQM method itself does not necessarily preserve trends. But the pre- and post-processing of the data such as the detrending procedure and the sliding 30-year window altogether contribute to trend preservation.
3DBC as a post-processor preserves the trend, like EQM, on an annual basis. However, since 3DBC reshuffles the EQM-based time series within a year, the marginal distributions at the seasonal scale might be modified.
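As an illustration of why the pre- and post-processing matters for trend preservation, here is a simplified sketch of quantile mapping with the mean change signal removed before mapping and re-imposed afterwards; it is not the EQM implementation used in the paper (which works with sliding 30-year windows and further steps), and all data handling here is hypothetical:

```python
import numpy as np

def eqm(model_train, obs_train, values):
    """Empirical quantile mapping of 'values' via the quantile-quantile
    relation between model and observations in the training period."""
    q = np.linspace(0.01, 0.99, 99)
    return np.interp(values, np.quantile(model_train, q), np.quantile(obs_train, q))

def signal_preserving_eqm(model_train, obs_train, model_future):
    """Remove the mean change signal before mapping and add it back afterwards,
    so the signal is not distorted by the transfer function."""
    delta = model_future.mean() - model_train.mean()
    return eqm(model_train, obs_train, model_future - delta) + delta
```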
- 3: I do not understand why this section appears after presenting the hydrological model. Change section order?
Answer: We don’t understand this comment. What does “3” mean? Do you mean to present chapter 7 before the hydrological modelling?
- L525: why is bias correction uncertainty larger for precipitation? Because the initial climate bias is larger and therefore more difficult to correct? Or is this because of greater differences between correction techniques?
Answer: The uncertainty analysis quantifies the contributions to the uncertainty in the changes, not the actual values, i.e. the bias in the initial data does not influence this. The larger uncertainty is an effect of the two bias-adjustment methods resulting in different change signals. These differ more for precipitation than temperature, showing that results for temperature from a single bias-adjustment method are more robust than for precipitation.
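To make the partitioning idea concrete, a toy two-way decomposition of the change signals is sketched below; the numbers are random, the interaction term is ignored, and this simplified split is not necessarily the exact formulation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical change signals (%): rows = 10 climate models, columns = 2 bias-adjustment methods
changes = rng.normal(5.0, 2.0, size=(10, 2))

var_models = np.var(changes.mean(axis=1))    # spread of model means (averaged over methods)
var_methods = np.var(changes.mean(axis=0))   # spread of method means (averaged over models)
total = var_models + var_methods

print("climate-model share:   ", var_models / total)
print("bias-adjustment share: ", var_methods / total)
```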
- Figures 12, 13 and 19 are not sharp (image resolution).
Answer: We will improve the resolution of these figures.
- L547: why is this the case? A comment on this is needed.
Answer: The climate forcing data have been bias-adjusted to match the statistics of the observations in the period 1985-2014, so the hydrological variables simulated with these forcing data show a pattern similar to those simulated with the observed forcing. However, the observed trends in Norway are larger than most of the trends simulated by the climate models, resulting in the observations lying at the edge of the ensemble.
- L561: is this the only driver? Isn’t snow recharge changing antecedent conditions too?
Answer: Good point. It is indeed a driver.
- L683: the paper does not deal with this aspect -> I would remove.
Answer: Agree.
- L691: the link with the previous sentence is not obvious -> I would remove.
Answer: Agree.
- L715: it was not clear to me whether the multivariate correction method is also used to correct these additional variables.
Answer: The 3DBC method was used to further post-process all the atmospheric variables mentioned in this paper. We will add to line 227: "These projections include nine atmospheric variables at 1x1 km spatial resolution and with daily time steps, each bias-adjusted with both EQM and 3DBC:"
Gudmundsson, L., Bremnes, J. B., Haugen, J. E., and Engen-Skaugen, T.: Technical Note: Downscaling RCM precipitation to the station scale using statistical transformations – a comparison of methods, Hydrology and Earth System Sciences, 16, 3383–3390, https://doi.org/10.5194/hess-16-3383-2012, 2012.
Lussana, C., Tveito, O. E., Dobler, A., and Tunheim, K.: seNorge_2018, daily precipitation, and temperature datasets over Norway, Earth System Science Data, 11, 1531–1551, https://doi.org/10.5194/essd-11-1531-2019, 2019.
Yang, X. and Huang, S.: Attribution assessment of hydrological trends and extremes to climate change for Northern high latitude catchments in Norway, Climatic Change, 176, 139, https://doi.org/10.1007/s10584-023-03615-z, 2023.
Yuan, Q., Thorarinsdottir, T. L., Beldring, S., Wong, W. K., Huang, S., and Xu, C.-Y.: New Approach for Bias Correction and Stochastic Downscaling of Future Projections for Daily Mean Temperatures to a High-Resolution Grid, Journal of Applied Meteorology and Climatology, 58, 2617–2632, https://doi.org/10.1175/JAMC-D-19-0086.1, 2019.
Yuan, Q., Thorarinsdottir, T. L., Beldring, S., Wong, W. K., and Xu, C.-Y.: Bridging the scale gap: obtaining high-resolution stochastic simulations of gridded daily precipitation in a future climate, Hydrology and Earth System Sciences, 25, 5259–5275, https://doi.org/10.5194/hess-25-5259-2021, 2021.
Citation: https://doi.org/10.5194/egusphere-2025-5331-AC1
RC2: 'Reply on AC1', Anonymous Referee #1, 15 Dec 2025
Dear authors,
Thank you for your answers. A few clarifications (also because I made a mistake in two of my comments, sorry):
- "L308: quickly explain why seasonal correction is often needed."
"Answer: Not sure what the reviewer meant?! The following answer is just a general comment on the reason behind seasonal correction..."
I meant that you could just add one or two sentences explaining why this is generally needed. While it is obvious for anyone working with models and bias correction, I do not think it is the case for the audience that will maybe read your paper.
- "Figure 8: it seems that the spread is reduced around the year 2000. I think this is because all models are calibrated independently and brought close to observations. I think something needs to be mentioned about this. It is also interesting to see that it is almost not the case for SWE and SM (Fig. 14).
Answer: Figure 8 shows 30-year means of the bias-adjusted precip. and temperature data. This is not an effect of model calibration in the RCMs, but the values are bias-adjusted such that they match the observed average in the calibration period 1985-2014. To make this more clear, we will add a sentence noting this and saying that this is the desired effect of the bias-adjustment."
I meant "bias adjustment" and not calibration, sorry. Adding a sentence about this seems sufficient.- "3: I do not understand why this section appears after presenting the hydrological model. Change section order?
Answer: We don’t understand this comment. What does “3” mean? Do you mean to present chapter 7 before the hydrological modelling?"
I meant section 7.3, sorry for the typo.
Citation: https://doi.org/10.5194/egusphere-2025-5331-RC2
AC2: 'Reply on RC2', Shaochun Huang, 16 Dec 2025
We thank the reviewer for the quick reply! Now it is clear to us, and we will add the explanations accordingly. For Section 7.3, we still think our original order makes sense, because Section 7.3 refers to the uncertainty analysis for changes in climate variables, and Section 8.3 refers to the uncertainty analysis for changes in hydrological variables. The factors contributing to the uncertainties are the same for the climate and hydrological variables, namely the climate models and the bias-adjustment methods. In our modelling chain, we do not consider any uncertainty from hydrological modelling because only one hydrological model is used.
Citation: https://doi.org/10.5194/egusphere-2025-5331-AC2
Data sets
Daily bias-adjusted climate (COR-BA-2025) and hydrological (distHBV-COR-BA-2025) projections for Norway W. K. Wong et al. https://doi.org/10.21343/0k90-6w67
seNorge_2018 daily mean temperature 1957-2019 Cristian Lussana https://zenodo.org/records/3923706
seNorge_2018 daily maximum temperature 1957-2019 Cristian Lussana https://zenodo.org/records/3923700
seNorge_2018 daily minimum temperature 1957-2019 Cristian Lussana https://zenodo.org/records/3923697
seNorge_2018 daily total precipitation amount 1957-2019 Cristian Lussana https://zenodo.org/records/3923703
HySN2018v2005ERA5 Ingjerd Haddeland https://zenodo.org/records/5947547
KliNoGrid_16.12 wind dataset MET Norway https://thredds.met.no/thredds/catalog/metusers/klinogrid/KliNoGrid_16.12/FFMRR-Nor/catalog.html
Model code and software
DistributedHbv Stein Beldring https://github.com/nve-sbe/DistributedHbv/tree/master/SourcePenmanMonteith
DistributedElementWaterModel Stein Beldring https://github.com/DistributedElementWaterModel/Version_3.03
3DBC: Version 2023 Andreas Dobler https://zenodo.org/records/15260335
qmap: Statistical Transformations for Post-Processing Climate Model Output Lukas Gudmundsson https://cran.r-project.org/web/packages/qmap/index.html
References
References
Astagneau, P. C., Wood, R. R., Vrac, M., Kotlarski, S., Vaittinada Ayar, P., François, B., and Brunner, M. I.: Impact of bias adjustment strategy on ensemble projections of hydrological extremes, Hydrol. Earth Syst. Sci., 29, 5695–5718, https://doi.org/10.5194/hess-29-5695-2025, 2025.
François, B., Vrac, M., Cannon, A. J., Robin, Y., and Allard, D.: Multivariate bias corrections of climate simulations: which benefits for which losses?, Earth System Dynamics, 11, 537–562, https://doi.org/10.5194/esd-11-537-2020, 2020.
Guo, D., Zheng, F., Gupta, H., and Maier, H. R.: On the robustness of conceptual rainfall-runoff models to calibration and evaluation data set splits selection: A large sample investigation, Water Resources Research, 56(3), https://doi.org/10.1029/2019WR026752, 2020.
Maraun, D.: Bias Correction, Quantile Mapping, and Downscaling: Revisiting the Inflation Issue, Journal of Climate, 26, 2137–2143, https://doi.org/10.1175/jcli-d-12-00821.1, 2013.
Matiu, M., Napoli, A., Kotlarski, S., Zardi, D., Bellin, A., and Majone, B.: Elevation-dependent biases of raw and bias-adjusted EURO-CORDEX regional climate models in the European Alps, Climate Dynamics, https://doi.org/10.1007/s00382-024-07376-y, 2024.