the Creative Commons Attribution 4.0 License.
To what extent are the IASI water vapour profiles representative of the conditions in the autumn before the HPE? Lessons learned from the WaLiNeAs campaign
Abstract. The WaLiNeAs campaign took place along the north-western Mediterranean coast between October 2022 and January 2023. This period was marked by unusual weather conditions associated with a dry autumn and winter. In such conditions and for the first time, eight ground-based stations equipped with water vapour Raman lidars were strategically deployed by four European countries. We studied the consistency of this network with the water vapour mixing ratio (WVMR) products derived from the Infrared Atmospheric Sounding Interferometer (IASI) and the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA5), which assimilates IASI radiances. The statistical metrics used in the comparison are the mean bias (MB, defined as lidar – IASI or ERA5), the root mean square error (RMSE) and the correlation coefficient (COR). A positive MB of approximately 0.9 g kg−1 (respectively 0.6 g kg−1) between 0.2 and 5 km above mean sea level (amsl) indicates a systematic underestimation of the WVMR by IASI (respectively ERA5). RMSE values range from 1 to 2 g kg−1 across all lidar stations for IASI and ERA5, while the measurement uncertainties of the lidars are typically below 0.4 g kg−1. COR shows little variation between stations: it ranges from 0.7 to 0.8 and remains almost constant between 0.2 and 5 km amsl. Both the IASI and the ERA5 products appear to accurately reproduce the temporal variability of the vertical structure of water vapour in the low troposphere. Nevertheless, they show MB and RMSE significantly above the uncertainties of lidar measurements.
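The three statistics defined in the abstract (MB = lidar – reference, RMSE, COR) can be sketched as follows; this is a minimal illustration on synthetic values, not campaign data, and the variable names are illustrative:

```python
import numpy as np

def compare_profiles(wvmr_lidar, wvmr_ref):
    """Statistics as defined in the abstract (MB = lidar - reference).

    Both inputs are 1-D arrays of WVMR (g kg-1), already matched
    in time and altitude.
    """
    diff = wvmr_lidar - wvmr_ref
    mb = diff.mean()                                 # mean bias
    rmse = np.sqrt((diff ** 2).mean())               # root mean square error
    cor = np.corrcoef(wvmr_lidar, wvmr_ref)[0, 1]    # correlation coefficient
    return mb, rmse, cor

# toy example with synthetic values (not WaLiNeAs data)
lidar = np.array([8.0, 6.5, 5.0, 3.2, 2.1])
iasi = np.array([7.2, 5.8, 4.1, 2.5, 1.6])
mb, rmse, cor = compare_profiles(lidar, iasi)
```

With these synthetic profiles the lidar is uniformly moister than the reference, so MB is positive while COR stays close to 1, mirroring the qualitative picture reported in the abstract.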
Status: open (until 06 Apr 2026)
- RC1: 'Comment on egusphere-2026-111', Anonymous Referee #1, 13 Mar 2026
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 163 | 49 | 19 | 231 | 44 | 22 | 16 |
Review of “To what extent are the IASI water vapour profiles representative of the conditions in the autumn before the HPE? …” by P Chazette et al.
In this paper, ground-based water vapour profiles from the western Mediterranean region, measured by the WaLiNeAs lidar network, are compared to satellite (IASI) and model (ERA5) data, and the biases are discussed. However, in its current form I have some questions and remarks for the authors, see below:
Title: what is “HPE”? You mention the comparison to IASI, but isn’t the comparison to ERA5 at least equally important?
The introduction seems to be missing this reference: https://www.mdpi.com/2072-4292/17/20/3473
This is a bit of a pity, because those results could have been used to separate the free troposphere from the PBL – the authors' fixed threshold (e.g. 1.5 km altitude) seems a bit rough.
L 120 and L 128: why do you interpolate the lower-resolution data (IASI; ERA5) to your lidar grid? Why don't you average the lidar to the vertical resolution of the former? Suppose it is getting drier with altitude: part of this drying signal is then in the IASI / ERA5 data, but not in the lidar.
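The averaging I have in mind could be sketched as follows; a minimal sketch with illustrative names and a synthetic profile, not the authors' code:

```python
import numpy as np

def average_to_coarse(z_fine, x_fine, z_edges):
    """Average a high-resolution profile onto coarse layers.

    z_fine, x_fine: lidar altitude grid (km) and values (g kg-1).
    z_edges: layer boundaries of the coarser product (IASI / ERA5).
    Returns one mean value per coarse layer (NaN if a layer is empty).
    """
    out = np.full(len(z_edges) - 1, np.nan)
    for i in range(len(z_edges) - 1):
        mask = (z_fine >= z_edges[i]) & (z_fine < z_edges[i + 1])
        if mask.any():
            out[i] = x_fine[mask].mean()
    return out

# lidar at 15 m resolution, coarse layers 1 km thick
z = np.arange(0.2, 5.0, 0.015)       # km amsl
x = 9.0 * np.exp(-z / 2.0)           # synthetic drying-with-altitude profile
coarse = average_to_coarse(z, x, np.arange(0.0, 6.0, 1.0))
```

With a profile that dries with altitude, each coarse value is the layer mean of the fine-scale signal, so the drying gradient within a layer is retained in the comparison rather than being imposed on the lidar by interpolation.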
L167, slope [to compare with ERA5]: just to be sure, this slope "treatment" is not applied later in Fig. 4, is it? So this is just a hypothetical overview of the mismatch, supposing a linear relation between water vapour in the lidar and ERA5 existed? Is that why you write in line 170 that a bias remains? (Did I understand this correctly?)
Table 2: I am not sure whether I understand this. Do you mean WVMR_Lidar = calibration_correction × WVMR_ERA5 + intercept, with the coefficient of determination giving the statistical soundness of the fit? Which altitude interval has been considered, and how many data sets? Maybe provide the equation.
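For clarity, the fit I assume is meant could be sketched as follows; the variable names and values are hypothetical, and this is only my reading of the table, not the authors' method:

```python
import numpy as np

# Hypothetical reading of Table 2:
#   WVMR_lidar ~ slope * WVMR_ERA5 + intercept,
# with R^2 (coefficient of determination) measuring fit quality.
rng = np.random.default_rng(0)
wvmr_era5 = rng.uniform(1.0, 9.0, 200)                           # g kg-1, synthetic
wvmr_lidar = 1.1 * wvmr_era5 + 0.3 + rng.normal(0.0, 0.2, 200)   # synthetic "truth"

slope, intercept = np.polyfit(wvmr_era5, wvmr_lidar, 1)
r2 = np.corrcoef(wvmr_era5, wvmr_lidar)[0, 1] ** 2
```

If this is indeed the intended relation, stating the equation and the altitude interval in the table caption would remove the ambiguity.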
Line 197 and Fig. 3: the approach is alright, but not very specific. One could have removed or weighted boxes depending on wind direction. Or do you have the impression that P1, P3, P7 and P9 match the lidar as well as P5 does?
Line 202, "500 m … minimal terrain obstruction": this sounds somewhat vague. Do you need this information? If the aim is to compare lidar to ERA5, I would have started in the free troposphere and then considered the BL (the latter being more complicated, due to small-scale effects, the parameterizations behind the ERA5 data and … the topography).
After Eq. 7: as I did not find a free PDF of the Foudali, Steiger (2008) reference, could you briefly justify the number "3" in the numerator and denominator of F = …?
Table 3: why is the MB for station 6 (Toulon) so much smaller, and why is this effect not seen in the RMSE?
L 241 (PBL), "more noticeable at some sites": please be more specific. Do you mean that one would expect MB and RMSE to decrease with altitude, because conditions are more stationary in the free troposphere, which is not seen at some sites, e.g. station 4? Is this due to strong winds in the Rhône delta?
Fig. 4 is convincing as it is. However, this result states that ERA5 is systematically too dry and that your lidar water vapour is systematically larger than IASI. Can you think of reasons for this? This is an important result. I would ask the authors to repeat this analysis (if not here, then in a follow-up study) with a few golden cases only: dry versus wet, only the free troposphere, and using the wind speed from model and observations to weight the pixels from Fig. 3, to provide a robust estimate of a possible bias in ERA5.
L 255, "only lidar data with statistical error < …": maybe it is worth mentioning how many valid profiles for each station were included in your analysis.
Fig. 5: the reason for showing Toulon and Ajaccio separately is that for these stations you have the best data for both wet and dry conditions, is it not? Maybe state this more clearly, e.g. in the figure caption.
S3 (versus S2): to me, S3 looks "worse" than S2. Yet Table 2 gives no calibration correction factor for site 3. Is this correct?