This work is distributed under the Creative Commons Attribution 4.0 License.
Zooming In: SCREAM at 100 m Using Regional Refinement over the San Francisco Bay Area
Abstract. Pushing global climate models to large-eddy simulation (LES) scales over complex terrain has remained a major challenge. This study presents the first known implementation of a global model—SCREAM (Simple Cloud-Resolving E3SM Atmosphere Model)—at 100 m horizontal resolution using a regionally refined mesh (RRM) over the San Francisco Bay Area. Two hindcast simulations were conducted to test performance under both strong synoptic forcing and weak, boundary-layer-driven conditions. We demonstrate that SCREAM can stably run at LES scales while realistically capturing topography, surface heterogeneity, and coastal processes. The 100 m SCREAM-RRM substantially improves near-surface wind speed, temperature, humidity, and pressure biases compared to the baseline 3.25 km simulation, and better reproduces fine-scale wind oscillations and boundary-layer structures. These advances leverage SCREAM's scale-aware SHOC turbulence parameterization, which transitions smoothly across scales without tuning. Performance tests show that while CPU-only simulations remain costly, GPU acceleration with SCREAMv1 on NERSC's Perlmutter system enables two-day hindcasts to complete in under two wall-clock days. Our results open the door to LES-scale studies of orographic flows, boundary-layer turbulence, and coastal clouds within a fully comprehensive global modeling framework.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-2223', Anonymous Referee #1, 27 Jun 2025
Review of "SCREAM at 100 m Using Regional Refinement over the San Francisco Bay Area" by Jishi Zhang et al.
This manuscript describes a unique configuration of a global model run with regional refinement down to 100 m. It is generally well written and appropriate for GMD. The manuscript devotes considerable detail to the software engineering and configuration of the model up front, followed by a description of simulation performance at the back. I think much of the software discussion could usefully be made a supplement or appendix to improve the readability of the manuscript and make it a good description of the model simulations. This should be publishable in GMD with some minor revisions, as I note below.
General Points:
1. I appreciate the step by step description in the first sections of how the model is configured, but I think this is too detailed for the main text and could be put in an appendix.
2. There are some inconsistencies in the figure labeling, numbering and referencing that need to be corrected as noted below.
3. It would be nice to make a few more comments about turbulence across scales. It is hinted at in a few places (with some contradictions that need to be clarified as noted below), but clarification would be good. Specifically (as the text notes): the conventional wisdom says at 100m you need 3D turbulence, but you are using a unified bulk closure with SHOC and this 'seems to work'. That's great, but can you show a figure of turbulence or KE or something that does a bit more to convince a skeptic about it? That would be a great addition.
Specific Comments:
Page 4, L124: so except where noted, this is SCREAMv0 FORTRAN code? Would be good to be explicit about this.
Page 5, L136: this is not a ‘nest’ however right? Please be clear because ‘level’ might imply there are multiple grids underneath.
Page 6, L149: same refined grid correct?
Page 7, L186: so what is the actual resolution of the topography you can resolve? It seems like you go from 500 m → 800 m and then down to 100 m? Is there more information, or is this smoothed at 800 m? I assume higher-resolution topography than 500 m is available for Northern California?
Page 8, L203: I appreciate the step by step description, but I think this is too detailed for the main text and could be put in an appendix. The source of the topo data should be identified, then the method can be a page in an appendix.
Page 8, L214: It is unclear what the difference is between your two 100 m test runs. How was the simulation without the 'steep topographic gradients' specified? Also, this is perhaps a bit too much information; maybe just note that smoothing is important. But doesn't it affect the height of the topography?
Page 10, L239: so what timestep does an LES model with 100 m resolution typically use? This seems VERY short (I'm guessing the LES is more like 1 s). So what does that say about the quality of the dynamics or the utility of this configuration? It seems like you could get quite a speedup (e.g. 20x) if you got the timestep to 0.5 s.
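For context, the reviewer's 20x estimate can be checked with simple CFL arithmetic. The sketch below is illustrative only: the peak wind speed, Courant number, and the assumption of an explicit acoustic step are my assumptions, not values from the manuscript.

```python
# Rough CFL-limited timestep estimates for a 100 m grid (illustrative values).
def cfl_dt(dx, speed, courant=0.5):
    """Largest stable timestep [s] for a given signal speed and Courant number."""
    return courant * dx / speed

dx = 100.0          # horizontal grid spacing [m]
u_max = 50.0        # assumed peak advective wind speed [m/s]
c_sound = 340.0     # speed of sound [m/s]; limits schemes with explicit acoustics

dt_advective = cfl_dt(dx, u_max)    # advective limit, as in anelastic LES codes
dt_acoustic = cfl_dt(dx, c_sound)   # acoustic limit, if sound waves are explicit

print(f"advective-limited dt: {dt_advective:.2f} s")
print(f"acoustic-limited dt:  {dt_acoustic:.2f} s")
```

Under these assumptions the advective limit is on the order of 1 s, consistent with the reviewer's guess, while explicit acoustics would force a much shorter step.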
Page 10, L242: This paragraph states both “one simulated hour per wall clock hour” (i.e. 1SDPD) then “0.16 SDPD”. Please clarify.
Page 12, L269: The 100m and 3.25km are two different simulations right? What are the number of grid cells and the timestep in the 3.25km simulation? I assume this is just the 100m grid without the additional ‘level’ or refinement?
Page 15, L324: what is the timestep of the land model on this grid? And also while I am thinking about it, what is the timestep of the land model when run interactively?
Page 17, L367: I assume the soundings vary in time and space as they go up as well (different model grid boxes with height)? Or does this not matter.
Page 18, L368: are you using a fixed pressure for each level? Shouldn't it vary with surface pressure rather than using just the reference pressure? I assume you have a 3D pressure field from the model. The errors for the storm case (surface pressure well below 1000 hPa) would be considerable in the lower troposphere.
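The interpolation the reviewer suggests, using the model's own pressure column rather than a fixed reference profile, might look like the following minimal sketch; the column values are hypothetical and only illustrate a surface pressure well below 1000 hPa, as in the storm case.

```python
import numpy as np

# Interpolate one model column to fixed target pressure levels using the
# model's 3D pressure field, not a reference profile (hypothetical values).
def to_pressure_levels(field, p_model, p_target):
    """Log-pressure interpolation of one column; pressures in Pa."""
    order = np.argsort(p_model)      # np.interp needs ascending coordinates
    return np.interp(np.log(p_target), np.log(p_model[order]), field[order])

# A column whose surface pressure (980 hPa) sits well below 1000 hPa:
# anchoring levels to a 1000 hPa reference would extrapolate near the surface.
p_col = np.array([98000., 85000., 70000., 50000., 30000.])   # Pa
t_col = np.array([288., 280., 270., 252., 228.])             # K
print(to_pressure_levels(t_col, p_col, np.array([92500., 60000.])))
```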
Page 20, L396: what vertical level are you referring to for the Venturi effect.
Page 20, L399: again, what level? Where is this to be seen on Figure 7? Might need panel labels (A-L).
Page 24, L437: For fig 11, is the timeseries coarse because it’s a single observation? If so, then maybe showing the variation or average around the time of the observation from the model would be useful. E.g. If it is a point measurement, the model temporal variability could be within that envelope.
Page 26, L448: where is wind speed shown? Fig 11 and 12 are temperature and surface pressure.
Page 28, L471: reduces the wind bias from the 3km simulation? (Please be explicit).
Page 28, L474: is the improvement specifically around orography?
Page 28, L476: but increased turbulence (TKE) noted above would increase mixing and damp accumulation of cooler air masses.
Page 28, L478: reference figure 13? Or similar for this case.
Page 29, L487: Fig 14: is this a correlation over space? Can you put any variability on it? Say by correlating in space at different times?
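One way to put variability on a single spatial correlation, as the reviewer asks, is to correlate model and observations across stations separately at each time and report the spread. A minimal sketch with synthetic station data (all values made up):

```python
import numpy as np

# Spatial correlation between model and observed station values, computed
# per time step to give a variability envelope around one map correlation.
rng = np.random.default_rng(0)
n_times, n_stations = 8, 40
obs = rng.normal(size=(n_times, n_stations))                 # synthetic obs
model = obs + 0.5 * rng.normal(size=(n_times, n_stations))   # correlated model

r_per_time = np.array([np.corrcoef(model[t], obs[t])[0, 1]
                       for t in range(n_times)])
print(f"spatial r: mean={r_per_time.mean():.2f}, "
      f"range=[{r_per_time.min():.2f}, {r_per_time.max():.2f}]")
```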
Page 29, L488: Do you mean Figure 15 here? If not, you refer to figure 16 before figure 15.
Page 31, L510: but these discrepancies would be expected right? If you averaged the radiosonde over the SCREAM or ERA5 levels you probably would not see the features right?
Page 32, L525: is it really turbulence or just that the larger scale pattern is easier to initialize correctly and has less forcing than the other cases?
Page 32, L531: can you show this partitioning?
Page 32, L533: Your statement about avoiding the turbulence gray zone is contradicted by the statement below that turbulence should be modeled in 3D. What are the potential errors in this assumption?
Page 33, L554: You state that SHOC can smoothly transition without tuning. Is there a way to show this? KE spectra at different resolutions? It would be nice to show.
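A KE-spectrum diagnostic of the kind requested could be computed along a model transect roughly as follows. The velocity series here is synthetic red noise (which gives a spectral slope near -2, versus -5/3 for an inertial range); only the 100 m spacing comes from the manuscript.

```python
import numpy as np

# 1-D kinetic-energy spectrum from a velocity transect: the diagnostic that
# could show SHOC transitioning smoothly across resolutions.
def ke_spectrum(u, dx):
    """Return wavenumbers [1/m] and KE spectral energy from a 1-D series."""
    n = u.size
    uhat = np.fft.rfft(u - u.mean())
    k = np.fft.rfftfreq(n, d=dx)
    E = (np.abs(uhat) ** 2) / n
    return k[1:], E[1:]              # drop the mean (k = 0) component

rng = np.random.default_rng(1)
u = np.cumsum(rng.normal(size=4096))  # red-noise proxy for a turbulent transect
k, E = ke_spectrum(u, dx=100.0)
slope = np.polyfit(np.log(k), np.log(E), 1)[0]
print(f"spectral slope ≈ {slope:.1f}")
```

Overlaying such spectra from the 3.25 km and 100 m runs would directly illustrate the claimed tuning-free transition.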
Page 33, L557: It’s probably better to say that you ignore the turbulence gray zone problems and it seems to work in these cases. That’s an interesting point (and useful). Again, would be nice to show a figure that illustrates this.
Citation: https://doi.org/10.5194/egusphere-2025-2223-RC1 -
RC2: 'Comment on egusphere-2025-2223', Anonymous Referee #2, 22 Jul 2025
Review of "Zooming In: SCREAM at 100 m Using Regional Refinement over the San Francisco Bay Area" by Jishi Zhang, Peter Bogenschutz, Mark Taylor, and Philip Cameron-Smith.
The study presents the SCREAM/E3SM global model run with regional refinement down to 100 m horizontal resolution over the San Francisco Bay Area in the US for two selected weather forcing periods.
Overall, the manuscript is written in much detail, which is appreciated, but it feels lengthy. I strongly suggest moving the technical sections of the manuscript to either a supplement or an appendix. That space could be used to discuss more of the capabilities of the model. The use of varying grid resolution for refining over regions of interest is quite challenging and, as the authors noted, important. Demonstrating that the model can deliver measurable skill gains relative to the baseline run at 3.25 km while remaining numerically stable is an advancement for the modeling community. To my knowledge, this is the first LES-scale configuration within a global model framework.
Having said that, some scientific and technical issues need further treatment before the manuscript is publishable, especially the generalization of the model performance based on two case studies. I understand the computational cost associated with each case; however, to accept the SCREAM model for routine use in the LES community, it would be better to have either a small ensemble, another synoptically driven flow (perhaps Diablo winds!), or a different region simulated, to answer a few questions such as: How does the model perform for mid- or high-latitudes? What would be the sensitivity to the initial and lateral boundary conditions? I therefore recommend a major revision.
Specific comments:
- Sec 2.1: For a horizontal resolution of 100 m, the 30 m near surface layer thickness seems high. Are you sure you are resolving the surface shear with such an aspect ratio?
- Sec 2.3: The topography generation was discussed in much detail, however, no information was provided whether the extra smoothing has any effect on the terrain induced flows.
- Sec 2.4: What is the justification for increasing the hyperviscosity coefficient?
- Figure 5: Replace the GOES LWP map with a zoomed-in version so that it is easy to qualitatively compare against the model plots.
- Even though the observations are sparse, the selection of the Storm2008 case warrants a quick comparison between the model simulated precipitation amounts and the observed ones. Capturing the extreme precip amounts has been a challenge for many fine resolution models. Showing that SCREAM at 100m does a reasonable job would make the discussion more robust.
- Figure 6: Some station names are overlapped, making it difficult to identify them.
- Figures 10 and 11 could be converted to tables.
- Figure 13 shows a significant difference in the mixed-layer profiles at Oakland. Especially at 23 PST on 2008-01-05, the dew-point agreement improves considerably from the 3.25 km to the 100 m run. This is a notable change and needs a comment or two in the manuscript. Also, it would be better to compare the boundary-layer height estimates against the soundings to see whether the LES is capturing all the turbulent motions.
- Lack of turbulence characterization in the study. Given how much of the manuscript relies on the LES capability, the turbulence metrics such as velocity power spectra, resolved to sub-grid turbulent kinetic energy, flux-gradient relations are expected. LES models are often judged by these metrics, and providing at least one such metric would convince readers about the SCREAM model LES capabilities. Authors have hinted that improved performance could be linked to capturing turbulent mixing (L525) but did not provide any supporting evidence.
- As noted in the manuscript, despite improvements, the model consistently underestimates the surface pressure at the majority of stations. Where does this bias come from? Is it the model dynamics or the initial conditions? The authors should include some discussion of this.
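The resolved-versus-subgrid TKE partition that both referees request could be diagnosed along these lines; the velocity arrays and the subgrid TKE value below are placeholders, since in practice the latter would come from SHOC's prognostic TKE.

```python
import numpy as np

# Resolved vs. subgrid TKE partition, a standard LES-quality metric.
def tke_resolved(u, v, w):
    """Resolved TKE [m2/s2] from velocity fluctuations about the domain mean."""
    up, vp, wp = (x - x.mean() for x in (u, v, w))
    return 0.5 * (up**2 + vp**2 + wp**2).mean()

rng = np.random.default_rng(2)
u, v, w = (rng.normal(scale=s, size=10_000) for s in (1.0, 0.8, 0.5))

tke_res = tke_resolved(u, v, w)
tke_sgs = 0.05                       # placeholder for the parameterized TKE
ratio = tke_res / (tke_res + tke_sgs)
print(f"resolved fraction of TKE: {ratio:.2f}")
```

A common rule of thumb is that a well-resolved LES should carry most (often quoted as ~80% or more) of the TKE on the resolved scales; reporting this fraction for the 100 m run would directly address the comment.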
Citation: https://doi.org/10.5194/egusphere-2025-2223-RC2
Data sets
Jishi Zhang: Code and Model Data for SCREAM 100 m San Francisco Bay Area Regionally Refined Model (version 0.0), https://doi.org/10.5281/zenodo.15288872
Model code and software
Jishi Zhang: Code and Model Data for SCREAM 100 m San Francisco Bay Area Regionally Refined Model (version 0.0), https://doi.org/10.5281/zenodo.15288872