Coupled large eddy simulations of land surface heterogeneity effects and diurnal evolution of late summer and early autumn atmospheric boundary layers during the CHEESEHEAD19 field campaign
Abstract. Observational studies and large eddy simulations (LES) have reported secondary circulations in the turbulent atmospheric boundary layer (ABL). These circulations form as coherent turbulent structures or mesoscale circulations induced by gradients of land surface properties. However, simulations have been limited in their ability to represent these events and their diurnal evolution over realistic and heterogeneous land surfaces. In this study, we present an LES framework that incorporates the high-resolution observational data collected as part of the CHEESEHEAD19 field campaign to overcome this gap and test how heterogeneity influences the ABL response. We simulated diurnal cycles for four days chosen from late summer to early autumn over a large (49 x 52 km) heterogeneous domain. To investigate surface-atmosphere feedbacks such as self-reinforcement of mesoscale circulations over the heterogeneous domain, the simulations were forced with an interactive land surface model with coupled soil, radiative transfer and plant canopy models. The lateral and model-top boundary conditions were prepared from National Oceanic and Atmospheric Administration High-Resolution Rapid Refresh (HRRR) meteorological analysis fields. Comparing the simulated profiles and near-surface data with field-measured radiosonde and eddy covariance station data showed a realistic evolution of the near-surface meteorological fields, heat and moisture fluxes and the ABL. The LES had limitations in simulating the night-time cooling in the nocturnal boundary layer. The simulated fields were strongly modulated by the imposed HRRR-derived mesoscale boundary conditions, resulting in a slightly warmer and drier ABL. The simulations were run without clouds, which resulted in higher daytime sensible heat fluxes for some scenarios.
Our findings demonstrate the capability of the PALM model system to realistically represent the daytime evolution of the ABL response over unstructured heterogeneity, as well as the limitations involved with respect to the role of boundary conditions and the representation of the nocturnal boundary layer. The simulation setup and dataset described in this manuscript provide the baseline to tackle specific research questions associated with the CHEESEHEAD19 campaign, particularly to address questions about the role of heterogeneous ecosystems in modulating surface-atmosphere fluxes and near-surface meteorological fields, as well as to highlight the needed improvements in model representations of land-atmosphere feedbacks over vegetated environments.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-1721', Anonymous Referee #1, 22 Dec 2023
Review of 'Coupled large eddy simulations of land surface heterogeneity effects ..' by Paleri et al.
The authors performed LES for a realistic field campaign and compared the results with observation data. The simulation methodology employs a comprehensive approach, incorporating nesting, an LSM, and a PCM, and appears to be done properly. On the other hand, the analysis of results is rather limited. Although the title and the abstract indicate an investigation into the effects of land surface heterogeneity, no attempt is made to clarify these effects; no comparison with a case with a homogeneous surface, no investigation of secondary circulations, no investigation of the correlation of the ABL (e.g., z_i) with the surface heterogeneity pattern. Notably, Fig. 15 has no relation to previous figures, and no effort is made to establish a connection with surface heterogeneity.
I think that the paper may be valuable for its ability to reproduce observation data using novel simulation techniques. However, for this purpose, the authors need to show more clearly how the newly introduced simulation techniques produce improved simulation results. For example, the authors can show that the present results from simulations with nesting are better than those without it (e.g., MR13). Most profiles compare the results from LES, HRRR, and observations, and it may be necessary to discuss how these profiles are related. For example, the mean values of the LES profiles may be close to those of HRRR, but their vertical variations closer to those of the observations.
Other comments
L1-4: This sentence should be in the introduction, not in the abstract.
L87: I suggest including a table comparing the mean meteorological conditions on summer and autumn days. The mean temperature and mixing ratio are higher on autumn days, although the authors mentioned that summer days are latent heat dominated (not incorrect, because the wind is stronger on autumn days, but confusing).
Table 1: I suggest including the information on HRRR too.
L316: 'the simulations were able to capture the CBL height well' ? It is followed by a discussion on the difference in the CBL height of 200 m.
L319; I think the difference in the initial stratification near the surface can be an important reason.
Fig. 4: I cannot understand the large difference between LES and HRRR profiles on autumn days when LES is simulated based on the HRRR boundary conditions. Does a close agreement in the domain averaged profiles (Fig. 5) mean a different distribution pattern? Then how can we explain a good agreement between LES and observation profiles in Fig. 4?
Fig. 8: Fig. 4 shows a good agreement in the mixing ratio between LES and observation. Isn't this contradictory with the large difference in the mixing ratio in the time series in Fig. 8?
Fig. 13, 15: Why not compare with observation data?
L520: I cannot understand why a homogeneous control experiment cannot be performed.
Citation: https://doi.org/10.5194/egusphere-2023-1721-RC1
RC2: 'Comment on egusphere-2023-1721', Anonymous Referee #2, 04 Jan 2024
Review of: Coupled large eddy simulations of land surface heterogeneity effect and diurnal evolution of late summer and early autumn atmospheric boundary layers during the CHEESEHEAD19 field campaign, by Paleri et al.
General Comments
The abstract implies using an advanced LES (PALM) simulation of the ABL, with near-surface fluxes represented by a canopy model and a land-surface model, to investigate and compare its evolution over two two-day periods corresponding to CHEESEHEAD19 IOPs, one with a convectively driven convective boundary layer (CBL) and one with a shear-driven CBL. While the abstract implies emphasis on horizontal structures and the influence of land-surface horizontal heterogeneity as well as shear vs buoyancy, emphasis is placed on relatively routine model-to-observation comparisons of vertical profiles and average fluxes, with more novel treatment of horizontal structures dealt with only superficially and briefly, and horizontal surface-flux variation never related to location.
I was excited to be able to review this paper given the topic, the CHEESEHEAD field campaign, and the respect I have for the two senior authors (which seems to include one of the lead author’s advisors), despite some minor problems in the first ten pages. However, once I got into the body of the paper, problems mounted, to the degree that I am forced to conclude that this paper needs a major overhaul before being considered for publication.
Should the lead author decide to give it another try, it would help to incorporate more novel results (perhaps merged with another submitted paper?), and address the concerns listed below. These are not necessarily in the order of importance. Furthermore, I strongly recommend the paper be reviewed by one of the other authors (preferably one of the senior authors) before resubmission.
Specific comments follow.
- Use of CDT. Not recommended. If it must be used, point out the time of solar noon, since it will be an hour later.
- Recommend use of the conventional terms “buoyancy-driven” and “shear-driven,” following Moeng and Sullivan (1994), rather than “free” vs “forced” convection, particularly in view of Fig. 12. (I would have used this figure much earlier when describing why we should expect the two days to differ!)
- Recommend a rewrite of Section 4.1, making more reference to Table 3, and being clear where Figure 4 is discussed. Include the source of the data and the consequential vertical extent in the caption. (I read this section several times)
- There are numerous places where computation of listed or discussed variables is not adequately explained. Particularly related to Table 3, but elsewhere as well.
- Labels way too small in many figures, and sometimes too dim in others.
- Observation discussion mentions those used (radiosondes, surface towers), but in other parts of the paper, aircraft data were mentioned, both from “flying through” the LES and actual observations. But aircraft data were never used. Either delete or present model and observational results from aircraft. Or state that aircraft data are not used. (for complete evaluation, aircraft data would have been useful!).
- L201-3, Table 1. I’ve not seen LES resolution change abruptly in the middle of a CBL. With all the care taken at the horizontal boundaries, how does this work? Are there references justifying this?
- Also, I was surprised that, with all the complexity of the CHEESEHEAD19 domain, soil properties were held constant, given that the model cost of using different soil types is negligible. Furthermore, I was surprised at the short spin-up time for the land surface/canopy models (L226).
- Most egregious is the uncritical acceptance of the observed mixing-ratio values in Figure 8, given the fact that they are unphysical (greater than the saturation mixing ratio) in Fig. 8d, and not consistent with the radiosonde values.
- Comparison of profiles to observations. Since you are evaluating the model results, you should use more than radiosonde observations. Aren’t there data from the tall tower? From aircraft? The rich CHEESEHEAD19 dataset should be fully exploited to make the best model evaluation.
- In evaluating the Obukhov length, the virtual temperature flux should be used rather than the temperature flux.
- I kept waiting for the novel results, related to the impact of buoyancy- vs shear-driven CBLs, expecting a treatment somewhat like Moeng and Sullivan (1994) but for a heterogeneous surface. But, while the values of the ratio –z_i/L were presented, the expected significance was never explained, and only wind-speed profiles were presented, rather than profiles of U and V, which are needed to see the full shear. Nor were there observed profiles. And convective structure? All that was presented were fields of LES z_i, despite guidance in the roll papers cited by these authors to look at horizontal structure within the CBL. Perhaps this explains why the authors think rolls are more likely on 23 August than 25 September (L483-4), contrary to conventional wisdom (even though I prefer to look at mid-CBL fields for rolls, my interpretation of Fig. 15 is that the structure is more 2-D for 25 September). Moreover, I saw no comparison to observations. (One could use radar reflectivity (from insects), or the aircraft data, or cloud patterns, if there were low clouds.)
- Did the authors consider the effects of terrain on CBL structure? The LES by Walko et al. (1992), for example, shows an example of terrain-influenced CBL convection. Normally, a contour map is presented along with a land-use map if terrain is at all significant.
Walko, R. L., W. R. Cotton, and R. A. Pielke, 1992: Large-eddy simulations of the effects of hilly terrain on the convective boundary layer. Bound.-Layer Meteor., 58, 133–150
- More generally, there are several problems with the writing and organization.
- The writing seems to contain contradictions, leading to confusion as to what was going on (several examples in Specific Comments.)
- Terms are sometimes introduced before they are explained.
- Important detail either missing or wrong: e.g., the number of levels in the canopy model doesn’t seem to extend to the top of the trees if the levels correspond to model grid points. (L235)
- The ‘discussion’ section at the end of the paper looks like a literature review rather than a discussion of the results. Also, it is not always clear what model data are horizontally averaged and what model data are specific to sites.
- The observations seem to be abandoned in the last part of the paper regarding the impact of .
One of the results that intrigued me was the small standard deviations for the 8 ensemble members. What does this mean? The interpretations vary from too-similar members to the significance of nonuniformity. This could relate to how things were computed, of course.
Given (1) an egregious error, (2) lack of clarity in computation and presentation, and (3) the lack of clear novel results, I vote against publication of the current manuscript without a major overhaul. Work on the first two is necessary for publication, but more results (or a more thorough discussion of the impacts of and heterogeneity) are needed for the paper to be sufficiently novel for publication.
Specific Comments
Introduction
Additional interesting papers about rolls that might be worth citing (minor). This is appropriate if the authors follow my recommendation to pay more attention to convective structure.
Banghoff, J.R., Sorber, J.D., Stensrud, D.J., Young, G.S. and Kumjian, M.R., 2020. A 10-year warm-season climatology of horizontal convective rolls and cellular convection in Central Oklahoma. Monthly Weather Review, 148(1), pp.21-42.
Stensrud, D.J., Young, G.S. and Kumjian, M.R., 2022. Wide horizontal convective rolls over land. Monthly Weather Review, 150(11), pp.2999-3010.
Also early work by Hardy and Ottersten using FMCW radar
Hardy, K. R., and H. Ottersten, 1969: Radar investigations of convective patterns in the clear atmosphere. J. Atmos. Sci., 26, 666–672,
L63. Not necessarily. Heterogeneity depends on the relationship of the scale of heterogeneity of the land surface and/or convective structure to the scale of the domain. Assuming “frozen turbulence” applies to a sensor on the ground, since it can take too long for the larger eddies to go by to be meaningfully measured. Aircraft can do a better job here.
L74. Suggest replacing these by actual numbers. These terms were first used in a modeling context (and mainly applicable given horizontal grid spacing of models at the time these scales were introduced); many readers today are not familiar with the terminology. Also, a 10 x 10 km domain does not necessarily capture the large eddies well enough, as has been demonstrated by studies using aircraft.
L80. Hooray! Eliminating cyclical boundary conditions helps a lot!
L86. Two consecutive days? Or 2 24-h days. From L134, it is the latter. Suggest inserting “consecutive” here.
L88. Re “free-convective” vs. “forced” convection: it would be better to use the standard definitions of “buoyancy-driven” and “shear-driven” (as done in Moeng and Sullivan 1994), since both buoyancy- and shear-related energy in effect force the boundary layer.
L105. What is the CHILD01 model? (perhaps used before it is defined).
Field Experiment data: I am assuming there was neither radar nor aircraft data. (NOTE: later contradicted. And they should have been used in evaluating the data – at least the profiles!)
L134. If you write “two consecutive days” earlier, then you can shorten to “The 44-h simulations were for August 22 and 23 for the August IOP and September 24-25 for the September IOP.” Better yet, specify the initial time of day as well. Then you wouldn’t have to repeat this at around L190.
L136-7. Buoyancy-driven vs shear-driven? Both are “forced convective,” and this is not conventional usage in my experience. See classical work of Moeng and Sullivan.
L188. Were there aircraft measurements? The caption to Fig. 1 mentions ‘virtual’ measurements. The observations section does not mention aircraft. Please clarify! Perhaps it is as simple as inserting “virtual” and explaining the meaning. I am assuming this means that you “flew” an aircraft through the innermost model domain.
L201-3, Table 1. I’ve not seen LES resolution change abruptly in the middle of a CBL. With all the care taken at the horizontal boundaries, how does this work? Are there references justifying this?
L204. Why even mention CHILD02? (Here it is implied by “We focus on 3D data from the CHILD01 model … ”.) Later CHILD02 IS used.
L217. Soil can be important… why keep it constant? Is that close to reality? Is there terrain?
L218. Suggest adding “covering a total depth of 2.93 m.” Then the following sentence about thickness makes more sense. Although it would make even more sense if you defined “thin,”
e.g.,
Having too thin a soil layer (< ? m) can lead to ….
L224. I am guessing that there would be few changes of land surface type, even in the larger domain, from the time of the database to 2019. Is that correct? (e.g., in areas with crop rotation, more current data might be needed).
L226. Isn’t the 44-h spinup time a bit short? E.g., see Chen et al. 2007, which is about another LSM, but one would think their properties in this regard would be similar. Perhaps this is because the LSM is already close to equilibrium when you start? See:
Chen, F., Manning, K.W., LeMone, M.A., Trier, S.B., Alfieri, J.G., Roberts, R., Tewari, M., Niyogi, D., Horst, T.W., Oncley, S.P. and Basara, J.B., 2007. Description and evaluation of the characteristics of the NCAR high-resolution land data assimilation system. Journal of Applied Meteorology and Climatology, 46(6), pp.694-713.
Table 2. Interesting that “cranberries” are common enough to get their own surface-type classification!
The double-counting comment is presented twice (at the bottom of the table, and in the text at L233). Is this a mistake?
L233-234. More importantly, assigning “short grass” to the LSM in the presence of trees doesn’t seem like a good idea. The LSM I’m most familiar with (Noah-MP) becomes a canopy model in the presence of trees, for example. Is this standard in the PALM LSM?
L235. Does the canopy model extend high enough? From Table 1, 5 layers is less than 10 m! Looking at Fig. 3, which shows trees extending to 25 m, wouldn’t more layers be needed for the summer runs as well given the grid spacing in table 1? Or are the layers defined independently in the canopy model? Please clarify.
L264. Middle and high clouds, to use meteorological nomenclature. Even some low clouds can lie completely above the CBL and thus interact with it only through radiative processes.
Exciting to have 3-D radiation! But I wonder how big an impact it has in this case.
L274-5. Do you really mean the “volume-averaged turbulent time series” were extracted at the tower sites? I can understand the data at and adjacent to the tower sites being extracted to represent the towers.
L276. Grid points corresponding to tower location plus those immediately to the north, south, east and west?
L279. The phrase, “emulating the IOP airborne campaign flights” implies there is also aircraft data. Were aircraft flying during the IOPs? This is not mentioned in the description of observations. Aircraft data should be mentioned – and its use would greatly improve evaluation of model wind profiles (also, L283).
L290. I assume that you simply averaged the local heights? “Calculated from these 2D fields” is a bit vague. And that the local heights were relative to ground level. Is that correct? Not sure how much terrain there is at this site.
Paragraph starting L293, Table 3. I am assuming that the soundings are compared to the statistics at the corresponding heights in Table 3. And then vertically averaged? Or what? It is surprising that CHILD2 is being used, given that it was said earlier the focus was on CHILD1. (Maybe the limited use of CHILD2 should be stated earlier).
If it is CHILD2, should note the altitude range in the caption, text, or both.
More confusing – the discussion of the differences between the soundings and the simulation (L313 or thereabouts) bears no resemblance to Table 3, which shows absolute differences (I am assuming this means the average absolute value of the difference between simulation and sounding), since a specific height interval is given … and the differences are … relative? Unsure of this. It looks like “mean differences” and “mean absolute differences” are mixed together. MAKE CLEAR WHETHER YOU ARE DISCUSSING TABLE 3 OR FIGURE 4.
L304. Include conversion of simulation time to clock time. But once I get to Table 3, you are using CDT! Again – please clarify or simply use CDT. (Maybe “simulation time” and “CDT” are the same?). The authors should be warned, however, that readers may assume that LST is used rather than LDT if they simply look at the figures, since LDT is rarely (ever?) used. If it is too difficult to change to LST, it needs to be clearly pointed out – and ON THE FIGURES that CDT is used, to keep readers from being misled. Not just the captions, because sometimes people grab figures off papers to give talks.
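As a rough check of the solar-noon offset raised in the comments on CDT (this assumes the CHEESEHEAD19 domain lies near the 90° W reference meridian of Central Standard Time and neglects the equation of time; the exact longitude should be taken from the manuscript):

$$ t_{\text{solar noon}} \;\approx\; 12{:}00\ \text{CST} \;+\; \frac{\lambda_{\text{site}} - 90^{\circ}\,\text{W}}{15^{\circ}\,\text{h}^{-1}} \;+\; 1\,\text{h (daylight saving)} \;\approx\; 13{:}00\ \text{CDT}. $$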
L350. Do model data correspond to the radiosonde locations? Or are they horizontal averages? Please specify in Fig. 6 caption. Add number of ensembles. I don’t recall ensembles being discussed for the thermodynamic profiles.
L354. Why?
Section 4.3 starts off much more clearly!
Except: L366. Need to be more specific here.
For the model, how are data processed? Since you have 12 locations and 8 ensembles, it’s not clear how standard deviations are computed.
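To make this point concrete, a minimal sketch (purely illustrative; the array name, shape and values are hypothetical, not taken from the manuscript) of two plausible conventions for the standard deviation that the authors should distinguish:

```python
import numpy as np

# Hypothetical array of a simulated half-hourly flux, shape (ensemble, site, time):
# 8 ensemble members x 12 tower sites x 48 half-hour intervals (placeholder values).
rng = np.random.default_rng(0)
H = 150.0 + 50.0 * rng.standard_normal((8, 12, 48))   # W m-2

# Convention A: average over sites first, then take the spread across ensemble members.
site_mean = H.mean(axis=1)                              # shape (8, 48)
sd_across_ensembles = site_mean.std(axis=0, ddof=1)

# Convention B: pool all 8 x 12 = 96 site/ensemble realisations and take their spread.
sd_pooled = H.reshape(-1, H.shape[-1]).std(axis=0, ddof=1)

# The two give different numbers; the manuscript should state which one is plotted.
```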
Around L380, Figure 8. The observed mixing-ratio values look seriously too high. Values of 20 g/kg are typical over the tropical ocean, not Wisconsin in September. From Fig. 8b, the SATURATION mixing ratio at 292.5 K is 14.3 g/kg! Based on this, I am guessing the discrepancies in August may be due to observation (calibration?) problems as well.
The authors should also have been suspicious, given the large differences related to the sounding mixing ratios.
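As a quick check of the arithmetic behind this concern, a minimal sketch using the Bolton approximation for saturation vapour pressure; the surface pressure of roughly 960 hPa is an assumption for northern Wisconsin, not a value taken from the manuscript:

```python
import numpy as np

def sat_mixing_ratio_gkg(T_K, p_hPa):
    """Saturation mixing ratio (g/kg) from air temperature (K) and pressure (hPa),
    using the Bolton (1980) approximation for saturation vapour pressure."""
    T_C = T_K - 273.15
    e_s = 6.112 * np.exp(17.67 * T_C / (T_C + 243.5))   # saturation vapour pressure, hPa
    w_s = 0.622 * e_s / (p_hPa - e_s)                    # saturation mixing ratio, kg/kg
    return 1.0e3 * w_s

# At ~292.5 K and an assumed surface pressure of ~960 hPa the saturation mixing
# ratio is roughly 15 g/kg, well below the ~20 g/kg values plotted in Fig. 8d.
print(sat_mixing_ratio_gkg(292.5, 960.0))
```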
L392. Something more specific than “same model level”. Perhaps ‘at 32 m, the tree-top level’, or ‘32 m, just above tree-top height’?
L398. Is this spike significant?
L424. (Just above Fig. 11). How is “CBL initiation” defined?
Suggest:
The 22 August simulations showed the model delays CBL initiation, as defined by …, by about an hour.
(More properly, the buoyancy flux should be used rather than H).
And have August text closer to the August Figure!
Section 4. No apparent discussion of Fig 11, which shows the Sept case. At least mention that the biases are similar, with simulations showing higher flux values. I’m guessing perhaps the discussion near Fig. 11 is of September. PLEASE CHECK AND CORRECT.
Figure 11. Isn’t some of the night-time data dominated by spikes? (Are any other variables consistent with the 200 W/m2 LE value in Fig 11c?)
L445. Formula for Obukhov length is wrong! Should use the virtual temperature flux. Not that it will affect your results much here, but physically it’s the buoyancy flux and not the heat flux that we are concerned about, an error which can be 100% over the tropical oceans.
Also, should use OBSERVED data as well. In fact, if I had to choose one set of data, I would use the observations.
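For reference, the standard textbook form of the Obukhov length the referee is pointing to, written with the virtual-temperature (buoyancy) flux rather than the sensible heat flux alone:

$$ L = -\frac{u_*^{3}\,\overline{\theta_v}}{\kappa\,g\,\left(\overline{w'\theta_v'}\right)_s}, \qquad \overline{w'\theta_v'} \;\approx\; \overline{w'\theta'} + 0.61\,\overline{\theta}\,\overline{w'q'}, $$

where $u_*$ is the friction velocity, $\kappa \approx 0.4$ the von Kármán constant and $q$ the specific humidity; the stability parameter discussed in the following comments is then $-z_i/L$.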
L450. Again, please use conventional terminology, as used in Moeng and Sullivan. Both shear- and convectively driven CBLs are “forced,” but by different mechanisms.
L450. Suggest replacing “statically” with “near”, and deleting the “free-convective, statically unstable.” All CBLs are statically unstable.
L455-465. Description more mathematical than physical. I.e., it is more reader-friendly (and physical) to describe how –z_i/L varies with time, and how it differs for the two days, rather than noting what their extrema are. For example, by noting that –z_i/L for 25 September is about 30 times that for 23 August. Why is that significant? Because strong winds (and smaller –z_i/L) are associated not only with more vertical shear of the horizontal wind (and more contribution of the shear terms to the CBL’s turbulence kinetic energy) but also with differences in convective structure, something first noticed (using wind speed) by Woodcock about 100 years ago through observations of gulls, Kuettner in the 1970s, and many of the authors you cite. For LES, Deardorff in the early 1970s was the first, and the first to use smaller –z_i/L.
Figure 13. You don’t need to show fluxes again, since they appear in an earlier figure.
L461 delete “forced” (the boundary layer is already “driven”)
L469. “shear driven” CBL from 09:00 …
Figure 14 should depict either U and V wind components or hodographs rather than wind speed. Much of the shear in the shear-driven PBL is directional. See Moeng and Sullivan (1994).
With such an excellent dataset, why aren’t you also showing the observations? If not just the radiosondes, from the tall tower as well? Don’t aircraft sample the wind as well? And better to show both wind components, since much of the shear in the shear-driven CBL is directional.
Section 5.2. “Horizontal structure and its evolution” would be a more precise title.
L475. Quasi 2-D locally. Could you be seeing gravity waves?
L478. Again, use terminology of Moeng and Sullivan. “Free-convective” strictly refers to no shear.
L479. “Spatial gradients”???? horizontal and vertical? Lots of detail here, but not telling me much. Any observations to support model conclusions?
L483-484. Looks just as 2-D as the August case to me. Again, though – should show structure within the CBL. AND having rolls on 23 August is LESS likely than on 25 September.
(NOTE re Figure 15 and related discussion: If you are looking for convective organization, you should look at horizontal fields or cross-sections within the CBL. The structure at z_i might be related to shear at the CBL top, which could be determined if you had profiles of U and V, which you don’t. Moeng and Sullivan (1994) and LeMone et al. (2010, Part II) both use model output WITHIN the CBL.)
To be specific, you should (a) look at large-eddy structure within the CBL, (b) include mean CBL wind direction in the discussion (partially done, but you need U and V in Fig. 14), and (c) recognize that the scale of the rolls (and convection) would be larger on 25 September, and compare this wavelength to that associated with the typical horizontal-to-vertical 3:1 often reported. (See Young et al. 2002 reference). Since the horizontal maps of structure are at z_i the field could reflect just gravity waves or gravity waves modulating the top of the CBL (Possibly as well as rolls). Flatter rolls are possible with gravity-wave-CBL interaction (Clark et al. 1986).
Young, G. S., D. A. R. Kristovich, M. R. Hjelmfelt, and R. C. Foster, 2002: Rolls, streets, waves, and more: A review of quasi-two-dimensional structures in the atmospheric boundary layer. Bull. Amer. Meteor. Soc., 83, 997–1001
Clark, T. L., T. Hauf, and J. P. Kuettner, 1986: Convectively forced internal gravity waves: Results from two-dimensional experiments. Quart. J. Roy. Meteor. Soc., 112, 899–926,
Can you present any OBSERVATIONAL evidence of convective structure? From nearby weather radar (which “sees” convective structure), aircraft data, or other papers being written about CHEESEHEAD19?
L494. Moist values are wrong because the observations are wrong.
L508. Over land. There are several simulations of PBL structure sampled in field programs over the ocean that predate simulations of CBLs over land, the first of which may have been Sommeria and LeMone 1976, one of two simulations I know of based on measurements taken to the north of Puerto Rico in December 1972, the other being Cuijpers et al. (1993).
Cuijpers, J.W.M. and Duynkerke, P.G., 1993. Large eddy simulation of trade wind cumulus clouds. Journal of the Atmospheric Sciences, 50(23), pp.3894-3908.
A second early simulation was by Nicholls et al. of a GATE case, again over the ocean,
Nicholls, S., LeMone, M.A. and Sommeria, G., 1982. The simulation of a fair weather marine boundary layer in GATE using a three‐dimensional model. Quarterly Journal of the Royal Meteorological Society, 108(455), pp.167-190.
Discussion of past simulations of field data. Does this really belong here? I’d be more interested in a discussion of the present field campaigns. This of course makes the ocean-based LES irrelevant.
Section 7
L592. Maybe, but the 32-m observed values of mixing ratio were very different (this because of some instrument problem, I’m sure).
L598-599. Nothing is mentioned about the surprising conclusion of less evident rolls when they should be more evident. (Not surprising, since the BL top is not the place to look). And nothing is mentioned about what was seen in the observations. Thus it is not clear that the LES reproduces the “evolution and characteristics” of the observations.
Citation: https://doi.org/10.5194/egusphere-2023-1721-RC2
RC3: 'Comment on egusphere-2023-1721', Anonymous Referee #3, 19 Jan 2024
Coupled large eddy simulations of land surface heterogeneity effects and diurnal evolution of late summer and early autumn atmospheric boundary layers during the CHEESEHEAD19 field campaign
S. Paleri, L. Wanner, M. Sühring, A. Desai and M. Mauder
This study presents a LES of 4 IOP days of the CHEESEHEAD19 campaign and evaluates these simulations by comparing them with observations. This article is far from publishable as it stands, for a variety of reasons: poorly defined study objectives, poor structure and organisation, numerous contradictions, lack of information/explanations, lack of justification of the scientific choices and discussion of the results, …
The most important comment I have is that I don't see any results worth publishing. If this paper were a first part about LES evaluation, with a second part about effects of heterogeneity, I would understand. But as it stands, this paper is about an LES whose results are not convincing.
All these factors make the paper difficult to read and understand. I think that the paper has not been sufficiently proofread and worked on by the authors, whose competence I do not doubt. I don't think it's the reviewers' role to list all the flaws in a paper when the co-authors haven't done the work themselves. So I won't make an exhaustive list of the changes needed, but I will illustrate the criticisms with a few examples.
Poorly defined article objectives:
1- The title suggests that the surface heterogeneity effects are simulated. No results in the simulation show the effects of the surface heterogeneity. The authors may have tried to address this point in the very last section (5.1), but the results and analysis are not at all convincing.
2- L92: At the end of the introduction, I understood that the objective of the article is written as follows: "Following through, we ask, can such a LES be used to evaluate mechanisms that generate surface-heterogeneity induced mesoscale circulations in the diurnal ABL? " . No mesoscale circulation induced by surface heterogeneity is shown in these simulations, so the question is not answered in the study.
Article organization and logic :
1- The field experiment is commented on without introducing Figure 1, which is the only very rough illustration of the experimental set-up.
2- Many discussions are based on information introduced later in the paper. Some examples below:
- L104: CDT introduced at L190
- L105: Child01 introduced at L170
3- Some sections need to be revised to improve the organization of the ideas. Examples:
- Section 2.2 starts with the EC towers (L107-110), then continues with the measurements of leaf phenology (L110), comes back to the EC towers (L111-113) and ends with drone-based lidar measurements.
- The introduction to section 4 is another example of a poorly organized paragraph, jumping from one idea to another only to return to the first.
4- The information about the LAD profile is in section 3.3.2, normally devoted to the Plant Canopy Model, whereas too little information about leaf phenology and LAD measurement by drone is given in section 2.2. What about moving L236 to L252 and the associated figures to section 2.2?
5- It seems to me that part 3.2 should be presented first, as it is necessary for understanding the following sections.
6- L187: why are the airborne data mentioned here when they are not introduced in section 2 and not used in the study? Also, virtual flight tracks are presented in section 3.3.2 about the Plant Canopy Model (!) and again in section 3.4 about the virtual observational infrastructure… of no use since these virtual observations are not used.
7- I don’t understand the usefulness of section 3.5. Could it be used as an introduction to section 4?
8- Figures 12 and 13 could be merged.
Missing information :
An article can refer to previous studies, but it must also be understandable on its own.
1- L98: before going through the different data used, a brief general presentation of the experimental set-up would be useful:
- What are the horizontal scales of the surface heterogeneity?
- What is the surface flux heterogeneity?
- Are all the EC towers on forest sites?
- …
These would be useful to fully understand the choices made in this LES study.
2- Section 3.1: some methods like “self-nesting” or “offline-nesting” are not defined at all, whereas this sentence (L174): “Employing both the offline-nesting and self-nesting modules lets us include the synoptic scale effects over the simulation domain and model the influence of a heterogeneous land surface and plant canopy over a wide range of scales.” suggests that the effect is large, as can be seen in the model-observation comparisons (section 4.3). These methods are neither defined nor justified.
3- Section 3.3.2: the Plant Canopy Model is not described: only 5 lines over the two pages of this section.
Justification of scientific choices:
1- It is written in the abstract that the runs have no cloud simulation (“The simulations were run without clouds which resulted in higher daytime sensible heat fluxes for some scenarios”). No explanation is given in the article for this very important setup choice. At L415, the model-observation difference in terms of sensible heat flux is then explained by the lack of clouds in the model. I think the authors should further explain this choice and also justify the value of a realistic simulation in which clouds are not represented.
2- Section 3.3.2 & Fig. 2: A leaf fall is defined for standard forest and wetland forest. Besides the fact that this is the first time in the paper that forest over wetland is discussed (and we don’t know the proportion), it seems that the leaf fall for standard and wetland forest is the same. The mean curves differ because the statistics over wetland are really poor. So why did the authors define two curves?
3- L295-296: Why do the authors use different data output frequencies for the August (30 minutes) and September (15 minutes) IOPs?
4- Section 5: very little is written to justify and explain why the authors want to compare the August and September IOPs in terms of stability and horizontal evolution. What do we learn from this?
5- The choice to assign forest to short grass to avoid double-counting of surface radiative effects remains a mystery, even if I do not doubt that this choice is the right one. This goes with the poor presentation of the PCM.
Insufficient explanations:
1- Line 263-264: “This helps us to include effects of the spatially heterogeneous plant canopy and high clouds on the simulated surface radiation and flux budgets.” I don’t understand why the use of HRRR data over the Parent model domain helps to include effects of the spatially heterogeneous plant canopy? It includes a horizontally heterogeneous surface energy balance between shaded and unshaded surfaces, but what is the link with the heterogeneous plant canopy?
2- L309-312: “Gehrke et al. (2021) discusses this issue, suggesting the role of the SGS model and radiation scheme in combination with the grid resolution as well as the role of the LSM’s surface energy balance parameterisation in combination with Monin-Obukhov Similarity Theory based computation of atmospheric fluxes at the first model grid point.” Concerning the important under-estimation of the simulated temperature close to the surface in the morning, the explanation given by Gehrke cited by the authors does not help a lot, since all the possible causes are listed there.
3- Concerning the bias between simulated and observed fluxes, nothing is said about the surface energy balance non-closure in the measurements, which, if it were considered here, could reduce the difference.
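For context, the non-closure referred to here is commonly quantified through the surface energy balance residual or closure ratio (standard definitions; the symbols are generic, not taken from the manuscript):

$$ R_n - G = H + LE + \mathrm{Res}, \qquad \mathrm{CR} = \frac{H + LE}{R_n - G}, $$

where $R_n$ is net radiation, $G$ the ground heat flux, and $H$ and $LE$ the eddy-covariance sensible and latent heat fluxes. Measured closure ratios are typically below one, so correcting the observed fluxes for non-closure would indeed move them toward the larger simulated values.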
Contradictions:
1- L285: “In this manuscript we focus on 3D data from the Child01 model for 23 August and 24 September simulations, when the model domain encompassed the whole of CBL”. In section 4.3, data from 22-23 August and 8-9 September from Child02 are analysed. A discussion would be useful on the effect of the ABL developing vertically and being encompassed or not in the Child01 domain.
-
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-1721', Anonymous Referee #1, 22 Dec 2023
Review of 'Coupled large eddy simulations of land surface heterogeneity effects ..' by Paleri et al.
The authors performed LES simulations for the realistic field campaign and compared the results with the observation data. The simulation methodology employs a comprehensive approach, incorporating nesting, LSM, and PCM, and appears to be done properly. On the other hand, the analysis of results is rather limited. Although the title and the abstract indicate an investigation into the effects of the land surface heterogeneity, no attempt is made to clarify these effects; no comparison with the case with the homogeneous surface, no investigation of secondary circulation, no investigation of the correlation between the ABL (e.g., zi) with the surface heterogeneity pattern. Notably, Fig. 15 has no relation to previous figures, and no effort is made to establish a connection with surface heterogeneity.
I think that the paper may be valuable for its ability to reproduce observation data using novel simulation techniques. However, for this purpose, the authors need to show how newly introduced simulation techniques produce improved simulation results more clearly. For example, the authors can show the present results from simulations with nesting are better than those without it (e.g., MR13). Most profiles compared the results from LES, HRRR, and observation, and it may be necessary to discuss how these profiles are related. For example, the mean values of LES profiles may be close to those of HRRR, but their vertical variations to those of observation.
Other comments
L1-4: This sentence should be in the instruction, not in the abstract.
L87: I suggest including a table comparing the mean meteorological conditions on summer and autumn days. The mean temperature and mixing ratio are higher on autumn days, although the authors mentioned that summer days are latent heat dominated (not incorrect, because the wind is stronger on autumn days, but confusing).
Table 1: I suggest including the information on HRRR too.
L316: 'the simulations were able to capture the CBL height well' ? It is followed by a discussion on the difference in the CBL height of 200 m.
L319; I think the difference in the initial stratification near the surface can be an important reason.
Fig. 4: I cannot understand the large difference between LES and HRRR profiles on autumn days when LES is simulated based on the HRRR boundary conditions. Does a close agreement in the domain averaged profiles (Fig. 5) mean a different distribution pattern? Then how can we explain a good agreement between LES and observation profiles in Fig. 4?
Fig. 8: Fig. 4 shows a good agreement in the mixing ratio between LES and observation. Isn't this contradictory with the large difference in the mixing ratio in the time series in Fig. 8?
Fig. 13, 15: Why not compare with observation data?
L520: I cannot understand why a homogeneous control experiment cannot be performed.
Citation: https://doi.org/10.5194/egusphere-2023-1721-RC1 -
RC2: 'Comment on egusphere-2023-1721', Anonymous Referee #2, 04 Jan 2024
Review of: Coupled large eddy simulations of land surface heterogeneity effect and diurnal evolution of late summer and early autumn atmospheric boundary layers during the CHEESEHEAD19 field campaign, by Paleri et al.
General Comments
The abstract implies using an advanced LES (PALM) simulation of the ABL, with near-surface fluxes represented by a canopy model and a land-surface model, to investigate and compare its evolution over two two-day periods corresponding to CHEESEHEAD19 IOPs, one with a convectively driven convective boundary layer (CBL) and one with a shear-driven CBL. While the abstract implies emphasis on horizontal structures and the influence of land-surface horizontal heterogeneity as well as shear vs buoyancy, emphasis is placed on relatively routine model-to-observation comparisons of vertical profiles and average fluxes, with more novel treatment of horizontal structures dealt with only superficially and briefly, and horizontal surface-flux variation never related to location.
I was excited to be able to review this paper given the topic, the CHEESEHEAD field campaign, and the respect I have for the two senior authors (which seems to include one of the lead author’s advisors), despite some minor problems in the first ten pages. However, once I got into the body of the paper, problems mounted, to the degree that I am forced to conclude that this paper needs a major overhaul before being considered for publication.
Should the lead author decide to give it another try, it would help to incorporate more novel results (perhaps merged with another submitted paper?), and address the concerns listed below. These are not necessarily in the order of importance. Furthermore, I strongly recommend the paper be reviewed by one of the other authors (preferably one of the senior authors) before resubmission.
Specific comments follow.
- Use of CDT. Not recommended. If must use, need to point out the time of solar noon, since it will be an hour later.
- Recommend use of the conventional terms “buoyancy-driven” and “shear-driven,” following Moeng and Sullivan (1994), rather than “free” vs “forced” convection, particularly in view of Fig. 12. (I would have used this figure much earlier when describing why we should expect the two days to differ!)
- Recommend a rewrite of Section 4.1, making more reference to Table 3, and being clear where Figure 4 is discussed. Include the source of the data and the consequential vertical extent in the caption. (I read this section several times)
- There are numerous places where computation of listed or discussed variables is not adequately explained. Particularly related to Table 3, but elsewhere as well.
- Labels way too small in many figures, and sometimes too dim in others.
- Observation discussion mentions those used (radiosondes, surface towers), but in other parts of the paper, aircraft data were mentioned, both from “flying through” the LES and actual observations. But aircraft data were never used. Either delete or present model and observational results from aircraft. Or state that aircraft data are not used. (for complete evaluation, aircraft data would have been useful!).
- L201-3, Table 1. I’ve not seen LES resolution change abruptly in the middle of a CBL. With all the care taken at the horizontal boundaries, how does this work? Are there references justifying this?
- Also, I was surprised that, with all the complexity of the CHEESEHEAD19 domain, that soil properties were held constant. This was surprising, given that the model cost of using different soil types is negligible. Furthermore, I was surprised at the short spin-up time for the land surface/canopy models (L226).
- Most egregious is the uncritical acceptance of the observed mixing-ratio values in Figure 8, given the fact that they are unphysical (greater than the saturation mixing ratio) in Fig. 8d, and not consistent with the radiosonde values.
- Comparison of profiles to observations. Since you are evaluating the model results, you should use more than radiosonde observations. Aren’t there data from the tall tower? From aircraft? The rich CHEESEHEAD19 dataset should be fully exploited to make the best model evaluation.
- In evaluating the Obukhov length, the virtual temperature flux should be used rather than the temperature flux.
- I kept waiting for the novel results, related to the impact of buoyancy- vs shear-driven CBLs, expecting a treatment somewhat like Moeng and Sullivan (1984) but for a heterogeneous surface. But, while the values of the ratio –z_i/L were presented, the expected significance was never explained, and only wind-speed profiles were presented, rather than profiles of U and V, which are needed to see the full shear. Nor were there observed profiles. And convective structure? All that was presented were fields of LES z_i despite guidance in the roll papers cited by these authors to look at horizontal structure within the CBL. Perhaps this explains why the authors think rolls are more likely on 23 August than 25 September (L483-4), contrary to conventional wisdom (Even though I prefer to look at mid-CBL fields for rolls, my interpretation of Fig. 15 is that the structure is more 2-D for 25 September). Moreover, I saw no comparison to observations. (One could use radar reflectivity (from insects), or the aircraft data, or cloud patterns, if there were low clouds).
- Did the authors consider the effects of terrain on CBL structure? The LES by Walko et al (1992) for example, shows an example of terrain influenced CBL convection. Normally, a contour map is presented along with a land-use map is terrain if it is at all significant.
Walko, R. L., W. R. Cotton, and R. A. Pielke, 1992: Large-eddy simulations of the effects of hilly terrain on the convective boundary layer. Bound.-Layer Meteor., 58, 133–150
- More generally, there are several problems with the writing and organization.
- The writing seems to contain contradictions, leading to confusion as to what was going on (several examples in Specific Comments.)
- Terms are sometimes introduced before they are explained.
- Important detail either missing or wrong: g., The number of levels in the canopy model doesn’t seem to extend to the top of the trees if the levels correspond to model grid points. (L235)
- The ‘discussion’ section at the end of the paper looks like a literature review rather than a discussion of the results. Also, it is not always clear what model data are horizontally averaged and what model data are specific to sites.
- The observations seem to be abandoned in the last part of the paper regarding the impact of .
One of the results that intrigued me was the small standard deviations for the 8 ensemble members. What does this mean? The interpretations vary from too-similar members to the significance of nonuniformity. This could relate to how things were computed, of course.
Given (1) an egregious error, (2) lack of clarity in computation and presentation, and (3) the lack of clear novel results, I vote against publication of the current manuscript without a major overhaul. Work on the first two, is necessary for publication, but more results (or more thorough discussion of the impacts of and heterogeneity) are needed for the paper to be sufficiently novel for publication.
Specific Comments
Introductio
Additional interesting papers about rolls that might be worth citing (minor). This is appropriate if the authors follow my recommendation to pay more attention to convective structure.
Banghoff, J.R., Sorber, J.D., Stensrud, D.J., Young, G.S. and Kumjian, M.R., 2020. A 10-year warm-season climatology of horizontal convective rolls and cellular convection in Central Oklahoma. Monthly Weather Review, 148(1), pp.21-42.
Stensrud, D.J., Young, G.S. and Kumjian, M.R., 2022. Wide horizontal convective rolls over land. Monthly Weather Review, 150(11), pp.2999-3010.
Also early work by Hardy and Ottersten using FMCW radar
Hardy, K. R., and H. Ottersten, 1969: Radar investigations of convective patterns in the clear atmosphere. J. Atmos. Sci., 26, 666–672,
L63. Not necessarily. Heterogeneity depends on the relationship of the scale of heterogeneity of the land surface and/or convective structure to the scale of the domain. Assuming “frozen turbulence” applies to a sensor on the ground, since it can take too long for the larger eddies to go by to be meaningfully measured. Aircraft can do a better job here.
L74. Suggest replacing these by actual numbers. These terms were first used in a modeling context (and mainly applicable given horizontal grid spacing of models at the time these scales were introduced); many readers today are not familiar with the terminology. Also, a 10 x 10 km domain does not necessarily capture the large eddies well enough, as has been demonstrated by studies using aircraft.
L80. Hooray! Eliminating cyclical boundary conditions helps a lot!
L86. Two consecutive days? Or 2 24-h days. From L134, it is the latter. Suggest inserting “consecutive” here.
L88. Re “free-convective”, vs. “forced” convection. It would be better to use the standard definitions of ‘buoyancy-driven’ and ‘shear-driven” (as done in Moeng and Sullivan 1984). Since both buoyancy and shear-related energy in effect force the boundary layer.
L105. What is the CHILD01 model? (perhaps used before it is defined).
Field Experiment data: I am assuming there was neither radar nor aircraft data. (NOTE: later contradicted. And they should have been used in evaluating the data – at least the profiles!)
L134. If you write “two consecutive days” earlier, than can shorten to “The 44-h simulations were for August 22 and 23 for the August IOP and September 24-25 for the September IOP. Better yet, specify the initial time of day as well. Then you wouldn’t have to repeat this at around L190.
L136-7. Buoyancy-driven vs shear-driven? Both are “forced convective,” and this is not conventional usage in my experience. See classical work of Moeng and Sullivan.
L188. Were there aircraft measurements? Caption to Fig. 1 mentions ‘virtual’ measurements. Observations section does not measure aircraft. Please clarify! Perhaps it is as simple as inserting “virtual” and explaining the meaning. I am assuming this means that you ”flew” an aircraft through the innermost model domain.
L201-3, Table 1. I’ve not seen LES resolution change abruptly in the middle of a CBL. With all the case taken at the horizontal boundaries, how does this work? Are there references justifying this?
L204. Why even mention CHILD02? (Here it is implied by, “We focus on 3D data from the CHILD01 model … “. Later CHILD02 IS used.
L217. Soil can be important… why keep it constant? Is that close to reality? Is there terrain?
L218, Suggest, adding, “covering a total depth of 2.93 m.” Then the following sentence about thickness makes more sense. Although it would make even more sense if you defined “thin,”
e.g.,
Having too thin a soil layer (< ? m) can lead to ….
L224. I am guessing that there would be few changes of land surface type even in the larger domain from the time of the database to the 2019. Is that correct? (e.g., in areas with crop rotation, more current data might be needed).
L226. Isn’t the 44-h spinup time a bit short? E.g., see Chen et al. 2007, which is about another LSM, but one would think their properties in this regard would be similar. Perhaps this is because the LSM is already close to equilibrium when you start? See:
Chen, F., Manning, K.W., LeMone, M.A., Trier, S.B., Alfieri, J.G., Roberts, R., Tewari, M., Niyogi, D., Horst, T.., Oncley, S.P. and Basara, J.B., 2007. Description and evaluation of the characteristics of the NCAR high-resolution land data assimilation system. Journal of applied Meteorology and Climatology, 46(6), pp.694-713.
Table 2. Interesting that “cranberries’ are common enough to get their own surface-type classification!
Double-counting comment is double-presented. (At the bottom of the table, and in the text at L233) Is this a mistake?.
L233-234. More importantly, assigning “short grass” to the LSM in the presence of trees doesn’t seem like a good idea. The LSM I’m most familiar with (Noah-MP) becomes a canopy model in the presence of trees, for example. Is this standard in the PALM LSM?
L235. Does the canopy model extend high enough? From Table 1, 5 layers is less than 10 m! Looking at Fig. 3, which shows trees extending to 25 m, wouldn’t more layers be needed for the summer runs as well given the grid spacing in table 1? Or are the layers defined independently in the canopy model? Please clarify.
L264. Middle and high clouds, to use meteorological nomenclature. Even some low clouds can lie completely above the CBL and thus interact with it only through radiative processes.
Exciting to have 3-D radiation! But I wonder how big an impact it has in this case.
L274-5. Do you really mean the “volume-averaged turbulent time series” were extracted at the tower sites? I can understand the data at and adjacent to the tower sites being extracted to represent the towers.
L276. Grid points corresponding to tower location plus those immediately to the north, south, east and west?
L279. The phrase, “emulating the IOP airborne campaign flights” implies there is also aircraft data. Were aircraft flying during the IOPs? This is not mentioned in the description of observations. Aircraft data should be mentioned – and its use would greatly improve evaluation of model wind profiles (also, L283).
L290. Assume that you simply averaged the local heights? “Calculated from these 2D fields” is a bit vague. And that the local height were relative to ground level. Is that correct? Not sure how much terrain there is at this site.
Paragraph starting L293, Table 3. I am assuming that the soundings are compared to the statistics at the corresponding heights in Table 3. And then vertically averaged? Or what? It is surprising that CHILD2 is being used, given that it was said earlier the focus was on CHILD1. (Maybe the limited use of CHILD2 should be stated earlier).
If it is CHILD2, should note the altitude range in the caption, text, or both.
More confusing – the discussion of the differences between the soundings and the simulation (L313 are thereabouts) bear no resemblance to Table 3, which shows absolute differences (I am assuming this means the average absolute value of the difference between simulation and sounding), since a specific height interval is given … and the differences are … relative ? Unsure of this. It looks like “mean differences” and “mean absolute differences” are mixed together. MAKE CLEAR WHETHER YOU ARE DISCUSSING TABLE 3 OR FIGURE 4.
L304. Include conversion of simulation time to clock time. But once I get to Table 3, you are using CDT! Again – please clarify or simply use CDT. (Maybe “simulation time” and “CDT” are the same?). The authors should be warned, however, that readers may assume that LST is used rather than LDT if they simply look at the figures, since LDT is rarely (ever?) used. If it is too difficult to change to LST, it needs to be clearly pointed out – and ON THE FIGURES that CDT is used, to keep readers from being misled. Not just the captions, because sometimes people grab figures off papers to give talks.
L350. Do model data correspond to the radiosonde locations? Or are they horizontal averages? Please specify in Fig. 6 caption. Add number of ensembles. I don’t recall ensembles being discussed for the thermodynamic profiles.
L354. Why?
Section 4.3 starts off much more clearly!
Except: L366. Need to be more specific here.
For the model, how are data processed? Since you have 12 locations and 8 ensembles, it’s not clear how standard deviations are computed.
Around L380, Figure 8. The observed mixing-ratio values look seriously too high. Values of 20 g/kg are typical over the tropical ocean, not Wisconsin in September. From Fig. 8b, the SATURATION mixing ratio at 292.5 K is 14.3 g/kg! Based on this, I am guessing the discrepancies in August may be due to observation (calibration?) problems as well.
The authors should also have been suspicious, given the large differences related to the sounding mixing ratios.
L392. Something more specific than “same model level”. Perhaps ‘at 32 m, the tree-top level, or ’32 m, just above tree-top height’?
L398. Is this spike significant?
L424. (Just above Fig. 11). How is “CBL initiation” defined?
Suggest:
The 22 August simulations showed the model delays CBL initiation, as defined by …,.by about an hour,
(More properly, the buoyancy flux should be used rather than H).
And have the August text closer to the August figure!
Section 4. No apparent discussion of Fig 11, which shows the Sept case. At least mention that the biases are similar, with simulations showing higher flux values. I’m guessing perhaps the discussion near Fig. 11 is of September. PLEASE CHECK AND CORRECT.
Figure 11. Isn’t some of the night-time data dominated by spikes? (Are any other variables consistent with the 200 W/m2 LE value in Fig 11c?)
L445. Formula for Obukhov length is wrong! Should use the virtual temperature flux. Not that it will affect your results much here, but physically it’s the buoyancy flux and not the heat flux that we are concerned about, an error which can be 100% over the tropical oceans.
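For reference, a minimal sketch of the conventional definition using the virtual potential temperature (buoyancy) flux; the example values are illustrative, not taken from the paper:

    # Obukhov length with the kinematic buoyancy flux w'theta_v' rather than w'theta' alone
    def obukhov_length(u_star, theta_v, w_theta, w_q, kappa=0.4, g=9.81):
        """L = -u*^3 * theta_v / (kappa * g * w'theta_v'), with w'theta_v' ~= w'theta' + 0.61 * theta_v * w'q'."""
        w_theta_v = w_theta + 0.61 * theta_v * w_q  # buoyancy flux, K m s-1
        return -u_star**3 * theta_v / (kappa * g * w_theta_v)

    # Illustrative midday values: u* = 0.4 m/s, theta_v = 295 K, w'theta' = 0.15 K m/s, w'q' = 5e-5 (kg/kg) m/s
    print(obukhov_length(0.4, 295.0, 0.15, 5e-5))  # about -30 m (strongly convective)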
Also, should use OBSERVED data as well. In fact, if I had to choose one set of data, I would use the observations.
L450. Again, please use conventional terminology, as used in Moeng and Sullivan. Both shear- and convectively driven CBLs are “forced,” but by different mechanisms.
L450. Suggest replacing “statically” with “near”, and deleting “free-convective, statically unstable.” All CBLs are statically unstable.
455-465. Description more mathematical than physical. I.e., it is more reader-friendly (and physical) to describe how –z_i/L varies with time, and how it differs for the two days, rather than noting only what their extrema are. For example, by noting that –z_i/L for 25 September is about 30 times that for 23 August. Why is that significant? Because strong winds (and smaller –z_i/L) are associated not only with more vertical shear of the horizontal wind (and a larger contribution of the shear terms to the CBL’s turbulence kinetic energy) but also with differences in convective structure, something first noticed (using wind speed) by Woodcock about 100 years ago through observations of gulls, by Kuettner in the 1970s, and by many of the authors you cite. For LES, Deardorff in the early 1970s was the first, and the first to use smaller –z_i/L.
Figure 13. You don’t need to show fluxes again, since they appear in an earlier figure.
L461. Delete “forced” (the boundary layer is already “driven”).
L469. “shear driven” CBL from 09:00 …
Figure 14 should depict either the U and V wind components or hodographs rather than wind speed. Much of the shear in the shear-driven PBL is directional. See Moeng and Sullivan (1994).
With such an excellent dataset, why aren’t you also showing the observations – if not just the radiosondes, then from the tall tower as well? Don’t aircraft sample the wind as well? And better to show both wind components, since much of the shear in the shear-driven CBL is directional.
Section 5.2. “Horizontal structure and its evolution” would be a more precise title.
L475. Quasi 2-D locally. Could you be seeing gravity waves?
L478. Again, use terminology of Moeng and Sullivan. “Free-convective” strictly refers to no shear.
L479. “Spatial gradients”???? horizontal and vertical? Lots of detail here, but not telling me much. Any observations to support model conclusions?
L483-484. Looks just as 2-D as the August case to me. Again, though – should show structure within the CBL. AND having rolls on 23 August is LESS likely than on 25 September.
(NOTE re Figure 15 and related discussion: If you are looking for convective organization, you should look at horizontal fields or horizontal cross-sections within the CBL. The structure at z_i might be related to shear at the CBL top, which could be determined if you had profiles of U and V, which you don’t. Moeng and Sullivan (1994) and LeMone et al. (2010, Part II) both use model output WITHIN the CBL.)
To be specific, you should (a) look at large-eddy structure within the CBL, (b) include the mean CBL wind direction in the discussion (partially done, but you need U and V in Fig. 14), and (c) recognize that the scale of the rolls (and convection) would be larger on 25 September, and compare this wavelength to the typical horizontal-to-vertical aspect ratio of 3:1 often reported (see the Young et al. 2002 reference, and the short scaling sketch after the references below). Since the horizontal maps of structure are at z_i, the field could reflect just gravity waves or gravity waves modulating the top of the CBL (possibly as well as rolls). Flatter rolls are possible with gravity-wave–CBL interaction (Clark et al. 1986).
Young, G. S., D. A. R. Kristovich, M. R. Hjelmfelt, and R. C. Foster, 2002: Rolls, streets, waves, and more: A review of quasi-two-dimensional structures in the atmospheric boundary layer. Bull. Amer. Meteor. Soc., 83, 997–1001.
Clark, T. L., T. Hauf, and J. P. Kuettner, 1986: Convectively forced internal gravity waves: Results from two-dimensional experiments. Quart. J. Roy. Meteor. Soc., 112, 899–926.
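The short scaling sketch referenced above for point (c); the CBL depth here is a hypothetical placeholder, and the simulated z_i for each day should be substituted:

    # Expected roll/convection wavelength from the often-reported ~3:1 horizontal-to-vertical aspect ratio
    def expected_wavelength_km(zi_m, aspect_ratio=3.0):
        return aspect_ratio * zi_m / 1000.0

    print(expected_wavelength_km(1200.0))  # e.g. z_i = 1.2 km (hypothetical) gives a ~3.6 km wavelength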
Can you present any OBSERVATIONAL evidence of convective structure? From nearby weather radar (which “sees” convective structure), aircraft data, or other papers being written about CHEESEHEAD19?
L494. Moist values are wrong because the observations are wrong.
L508. Over land. There are several simulations of PBL structure sampled in field programs over the ocean that predate simulations of CBLs over land. The first may have been Sommeria and LeMone 1976, one of two simulations I know of based on measurements taken to the north of Puerto Rico in December 1972, the other being Cuijpers et al. (1993).
Cuijpers, J.W.M. and Duynkerke, P.G., 1993. Large eddy simulation of trade wind cumulus clouds. Journal of the Atmospheric Sciences, 50(23), pp.3894-3908.
A second early simulation was by Nicholls et al. of a GATE case, again over the ocean:
Nicholls, S., LeMone, M.A. and Sommeria, G., 1982. The simulation of a fair weather marine boundary layer in GATE using a three‐dimensional model. Quarterly Journal of the Royal Meteorological Society, 108(455), pp.167-190.
Discussion of past simulations of field data. Does this really belong here? I’d be more interested in a discussion of the present field campaigns. This of course makes the ocean-based LES irrelevant.
Section 7
L592. Maybe, but the 32-m observed values of mixing ratio were very different (this is because of some instrument problem, I’m sure).
L598-599. Nothing is mentioned about the surprising conclusion of less evident rolls when they should be more evident. (Not surprising, since the BL top is not the place to look). And nothing is mentioned about what was seen in the observations. Thus it is not clear that the LES reproduces the “evolution and characteristics” of the observations.
Citation: https://doi.org/10.5194/egusphere-2023-1721-RC2
RC3: 'Comment on egusphere-2023-1721', Anonymous Referee #3, 19 Jan 2024
Coupled large eddy simulations of land surface heterogeneity effects and diurnal evolution of late summer and early autumn atmospheric boundary layers during the CHEESEHEAD19 field campaign
S. Paleri, L. Wanner, M. Sühring, A. Desai and M. Mauder
This study presents a LES of 4 IOP days of the CHEESEHEAD19 campaign and evaluates these simulations by comparing them with observations. This article is far from publishable as it stands, for a variety of reasons: poorly defined study objectives, poor structure and organisation, numerous contradictions, lack of information/explanations, lack of justification of the scientific choices and discussion of the results, …
The most important comment I have is that I don't see any results worth publishing. If this paper were a first part about LES evaluation, with a second part about the effects of heterogeneity, I would understand. But as it stands, this paper is about an LES whose results are not convincing.
All these factors make the paper difficult to read and understand. I think that the paper has not been sufficiently proofread and worked on by the authors, whose competence I do not doubt. I don't think it's the reviewers' role to list all the flaws in a paper when the co-authors haven't done the work themselves. So I won't make an exhaustive list of the changes needed, but I will illustrate the criticisms with a few examples.
Poorly defined article objectives:
1- The title suggests that the surface heterogeneity effects are simulated. No results in the simulation show the effects of the surface heterogeneity. The authors may have tried to address this point in the very last section (5.1), but the results and analysis are not at all convincing.
2- L92: At the end of the introduction, I understood that the objective of the article is written as follows: "Following through, we ask, can such a LES be used to evaluate mechanisms that generate surface-heterogeneity induced mesoscale circulations in the diurnal ABL? " . No mesoscale circulation induced by surface heterogeneity is shown in these simulations, so the question is not answered in the study.
Article organization and logic:
1- The field experiment is commented on without introducing Figure 1, which is the only very rough illustration of the experimental set-up.
2- Many discussions are based on information introduced later in the paper. Some examples below:
- L104: CDT introduced at L190
- L105: Child01 introduced at L170
3- Some sections need to be revised to improve the organization of the ideas. Examples:
- Section 2.2 starts with the EC towers (L107-110), then continues with the measurements of leaf phenology (L110), comes back to the EC towers (L111-113), and ends with drone-based lidar measurements.
- The introduction to section 4 is another example of a poorly organized paragraph, jumping from one idea to another only to return to the first.
4- The information about the LAD profile is in section 3.3.2, normally devoted to the Plant Canopy Model, whereas too little information about leaf phenology and the LAD measurement by drone is given in section 2.2. What about moving L236 to L252 and the associated figures to section 2.2?
5- It seems to me that part 3.2 should be presented first, as it is necessary for understanding the following sections.
6- L187: Why are the airborne data mentioned here when they are not introduced in section 2 and not used in the study? Also, virtual flight tracks are presented in section 3.3.2 about the Plant Canopy Model (!) and again in section 3.4 about the virtual observational infrastructure… to no purpose, since these virtual observations are not used.
7- I don’t understand the usefulness of section 3.5. Could it be used as an introduction to section 4?
8- Figures 12 and 13 could be merged.
Missing information:
An article can refer to previous studies, but it must also be understandable on its own.
1- L98: Before going through the different data used, a brief general presentation of the experimental set-up would be useful.
- What are the horizontal scales of the surface heterogeneity?
- What is the surface flux heterogeneity?
- Are all the EC towers on forest sites?
- …
These would be useful to fully understand the choices made in this LES study.
2- Section 3.1: Some methods, like “self-nesting” or “offline-nesting”, are not defined at all, whereas this sentence (L174): “Employing both the offline-nesting and self-nesting modules lets us include the synoptic scale effects over the simulation domain and model the influence of a heterogeneous land surface and plant canopy over a wide range of scales.” suggests that their effect is large, as can indeed be seen in the model-observation comparisons (section 4.3). These methods are neither defined nor justified.
3- Section 3.3.2: The Plant Canopy Model is barely described: 5 lines over the two pages of this section.
Justification of scientific choices:
1- It is written in the abstract that the runs have no cloud simulation (“The simulations were run without clouds which resulted in higher daytime sensible heat fluxes for some scenarios”). No explanation is given in the article for this very important set-up choice. At L415, the model-observation difference in terms of sensible heat flux is then explained by the lack of clouds in the model. I think the authors should further explain this choice and also justify the value of a realistic simulation in which clouds are not represented.
2- Section 3.3.2 & Fig. 2: A leaf-fall curve is defined for standard forest and for wetland forest. Besides the fact that this is the first time in the paper that forest over wetland is discussed (and we don’t know its proportion), it seems that the leaf fall for standard and wetland forest is the same; the mean curves differ only because the statistics over wetland are really poor. So why did the authors define two curves?
3- L295-296: Why do the authors use different data output frequencies for the August (30 minutes) and September (15 minutes) IOPs?
4- Section 5: Very little is written to justify and explain why the authors want to compare the August and September IOPs in terms of stability and horizontal evolution. What do we learn from this?
5- The choice to assign forest to short grass to avoid double-counting of surface radiative effects remains a mystery, even if I do not doubt that this choice is the right one. This goes with the poor presentation of the PCM.
Insufficient explanations:
1- Lines 263-264: “This helps us to include effects of the spatially heterogeneous plant canopy and high clouds on the simulated surface radiation and flux budgets.” I don’t understand why the use of HRRR data over the parent model domain helps to include effects of the spatially heterogeneous plant canopy. It includes a horizontally heterogeneous surface energy balance between shaded and unshaded surfaces, but what is the link with the heterogeneous plant canopy?
2- L309-312: “Gehrke et al. (2021) discusses this issue, suggesting the role of the SGS model and radiation scheme in combination with the grid resolution as well as the role of the LSM’s surface energy balance parameterisation in combination with Monin-Obukhov Similarity Theory based computation of atmospheric fluxes at the first model grid point.” Concerning the substantial underestimation of the simulated temperature close to the surface in the morning, the explanation from Gehrke et al. cited by the authors does not help much, since all the possible causes are listed there.
3- Concerning the bias between simulated and observed fluxes, nothing is said about the surface energy balance non-closure in the measurements, which, if it were considered here, could reduce the difference.
Contradictions:
1- L285: “In this manuscript we focus on 3D data from the Child01 model for 23 August and 24 September simulations, when the model domain encompassed the whole of CBL”. In section 4.3, data from 22-23 August and 8-9 September from Child02 are analysed. A discussion would be useful on the effect of the vertically developing ABL being, or not being, encompassed within the Child01 domain.