This work is distributed under the Creative Commons Attribution 4.0 License.
A dynamic approach to three-dimensional radiative transfer in numerical weather prediction models: the dynamic TenStream solver v1.0
Abstract. The increasing resolution of numerical weather prediction (NWP) models makes three-dimensional (3D) radiative effects more and more important. However, 3D radiative transfer solvers are still computationally expensive, largely preventing their use in operational weather forecasting. To address this issue, Jakub and Mayer (2015) developed the TenStream solver. It extends the well-established two-stream method to three dimensions by using ten instead of two streams to describe the transport of radiative energy through Earth's atmosphere. Building upon this method, this paper presents the dynamic TenStream solver, which provides a further acceleration of the original TenStream solver. Compared to traditional solvers, this speed-up is achieved by utilizing two main concepts: First, radiation is not calculated from scratch every time the model is called. Instead, a time-stepping scheme is introduced that updates the radiation field based on the result from the previous radiation time step. Second, the model is based on incomplete solves, performing just the first few steps of an iterative scheme towards convergence every time it is called. At its core, the model thereby only uses the ingoing fluxes of a grid box to update its outgoing fluxes. Combined, these two approaches put radiative transfer much closer to the way advection is handled in the dynamical core of an NWP model, as both use previously calculated results to update their variables and thereby only require access to the neighboring values of an individual model grid box, facilitating model parallelization. To demonstrate the feasibility of this new solver, we apply it to a precomputed shallow cumulus cloud time series and test its performance both in terms of speed and accuracy. In terms of speed, our new solver is shown to be about three times slower than a traditional 1D δ-Eddington solver, but noticeably faster than currently available 3D solvers.
To evaluate the accuracy of our new solver, we compare its results, as well as calculations carried out by a 1D δ-Eddington solver and the original TenStream solver, to benchmark calculations performed with the 3D Monte Carlo solver MYSTIC. We demonstrate that our new solver is able to calculate heating rates and net surface irradiances very close to those obtained by the original TenStream solver, thus offering a noticeable improvement compared to the 1D δ-Eddington results even when operated at lower calling frequencies. At these lower calling frequencies, the incomplete solves in the dynamic TenStream solver lead to the build-up of a bias with time, which becomes larger the lower the calling frequency is. However, this increase in bias flattens out after a while and remains smaller than the heating rate bias introduced by the 1D δ-Eddington solver at any point in time. Most importantly, our new solver is shown to produce significantly better results when compared to 1D δ-Eddington solves carried out with a similar computational demand.
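The two acceleration concepts described in the abstract (warm-starting from the previous radiation time step, and performing only a few incomplete Gauss-Seidel-style sweeps per call) can be illustrated with a deliberately minimal sketch. The following Python toy is our illustration, not the TenStream code, and all coefficients are made up: a downwelling and an upwelling flux profile are coupled by a layer reflectance and a surface albedo, and two sweeps warm-started from a slightly outdated previous solution land much closer to the converged field than two sweeps from scratch.

```python
def sweep(fd, fu, t, r, albedo):
    """One incomplete-solve step: a downward then an upward Gauss-Seidel sweep.

    Each outgoing flux is updated from the freshest available ingoing fluxes
    of its neighbours instead of re-solving the full linear system.
    """
    n = len(fd)
    for k in range(n - 1):                    # downward sweep
        fd[k + 1] = t * fd[k] + r * fu[k + 1]
    fu[n - 1] = albedo * fd[n - 1]            # reflection at the surface
    for k in range(n - 2, -1, -1):            # upward sweep
        fu[k] = t * fu[k + 1] + r * fd[k]


def incomplete_solve(t, r, albedo, n, sweeps, fd=None, fu=None):
    """Run a fixed, small number of sweeps from a given first guess."""
    fd = [0.0] * n if fd is None else list(fd)
    fu = [0.0] * n if fu is None else list(fu)
    fd[0] = 1.0                               # normalized TOA boundary condition
    for _ in range(sweeps):
        sweep(fd, fu, t, r, albedo)
    return fd, fu


n, t, r, a = 20, 0.9, 0.05, 0.3               # made-up layer/surface coefficients
fd_ref, fu_ref = incomplete_solve(t, r, a, n, sweeps=200)  # converged reference

# "Cold start": two sweeps from scratch, i.e. radiation computed from zero.
fd_c, fu_c = incomplete_solve(t, r, a, n, sweeps=2)

# "Warm start": two sweeps from a slightly perturbed previous solution,
# standing in for the field of the last radiation time step.
fd_w, fu_w = incomplete_solve(t, r, a, n, sweeps=2,
                              fd=[1.02 * f for f in fd_ref],
                              fu=[1.02 * f for f in fu_ref])

def max_err(fd, fu):
    return max(abs(p - q) for p, q in zip(fd + fu, fd_ref + fu_ref))

err_cold = max_err(fd_c, fu_c)
err_warm = max_err(fd_w, fu_w)
# err_warm is far smaller than err_cold: the previous field is a good first guess
```

In the actual solver the update couples ten streams in three dimensions, but the mechanism is the same: the previous radiation time step supplies a first guess good enough that a couple of sweeps suffice.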
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-2129', Anonymous Referee #1, 11 Nov 2023
This paper describes a method for 3D radiative transfer that could be computationally affordable enough to be used in high-resolution models. The idea of treating radiation more akin to dynamics is intriguing and as far as I know novel. The results presented are state-of-the-art in terms of speed-accuracy tradeoff (at least for 3D solvers) and potentially very significant for the advancement of NWP models, which are already configured at resolutions where 3D radiative effects are notable yet are currently ignored in all operational models.
My major comments are provided below and relate mainly to the computational aspects, which deserve more attention. Some of my questions may be adequate to address in the review and not in the paper, as it's already long (and concerned mainly with demonstrating the feasibility of the method - which it does excellently!), but a few clarifying sentences and providing absolute runtimes and/or measures of floating point operations in the paper would go a long way in informing the reader how fast dynamic TenStream potentially is, and whether it could be a real contender to operational radiation schemes outside of LES. Besides this, I think the paper would really benefit if the authors tried to make it more concise by avoiding repetition and removing unnecessary words and sentences. The results shown are relevant but they are sometimes described in a very wordy manner. Finally, the code does not seem to be actually available to download at the current time, which I understand is against GMD policy.
Other major comments:
1a. In general it's a bit difficult to fully understand the method (although Figure 3 does a good job at illustrating it), especially when it comes to its implementation in code and its parallelism. The future tense used in L198-204 implies that the parallelism is not yet implemented. My understanding of dynamic TenStream would be something like this for a simplified 1D case:
! Downwelling flux; boundary condition
fd(1) = incsol
fd(2:nlev) = fd_prev_timestep(2:nlev)
! Gauss-Seidel incomplete solves, not parallelizable
do jiter = 1, niter
  ! Vectorization or other parallelism, array notation
  fd(2:nlev) = T(1:nlev-1)*fd(1:nlev-1)
end do
This would correspond to the radiative flows in individual grid boxes being computed concurrently i.e. in parallel within a single step of Fig 3, is this right?
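As an aside, the sketch above can be restated as runnable Python (our restatement with made-up values for incsol and T, not the authors' code). It also illustrates the distinction the question hinges on: the array-notation update evaluates its right-hand side with the old field (Jacobi-like) and is trivially parallel, whereas a sequential sweep reuses freshly updated values (Gauss-Seidel) and propagates the boundary condition downward in a single pass, at the price of a loop-carried dependence.

```python
import numpy as np

nlev = 10
incsol = 1360.0                      # hypothetical TOA flux in W m^-2
T = np.full(nlev - 1, 0.97)          # hypothetical layer transmittances

# Array-notation update as in the sketch above: the right-hand side is
# evaluated with the *old* fd, so this is Jacobi-like and needs nlev-1
# repeats before the boundary condition has reached the bottom level.
fd_jac = np.zeros(nlev)
fd_jac[0] = incsol
for _ in range(nlev - 1):
    fd_jac[1:] = T * fd_jac[:-1]

# Sequential Gauss-Seidel sweep: each level uses the freshly updated flux
# of the level above, so for direct radiation one downward pass suffices,
# but the loop cannot be vectorized along the sweep direction.
fd_gs = np.zeros(nlev)
fd_gs[0] = incsol
for k in range(nlev - 1):
    fd_gs[k + 1] = T[k] * fd_gs[k]
```

Both variants converge to the same cumulative-transmittance profile; a good first guess from the previous time step is what makes stopping after very few repeats acceptable.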
1b. How should the reader interpret the reported speed numbers in terms of effective speed against operational radiation schemes? Is the 1D delta-Eddington reference based on efficient, vectorized code? It is unclear how efficient dynamic TenStream is or could be compared to widely used two-stream codes such as ecRad, which expresses parallelism across g-points, or the RTE+RRTMGP scheme which vectorizes the column dimension instead. Comparison to other schemes could be greatly facilitated by reporting absolute runtimes, or you could run one of the aforementioned schemes. Potential lack of parallelism and optimization in its current stage can be stressed explicitly and of course, even if dynamic TenStream is currently much slower than operational schemes then it's not a bad result considering full 3D solvers have until now been many orders of magnitude more expensive. Finally, it could be very useful to report the number of floating point operations (whether absolute or relative to delta-Eddington) but may require a library such as GPTL to estimate, and is perhaps not necessary if the other aspects are clarified.
2. Can you discuss whether you see dynamic TenStream to be a potentially viable scheme for global or regional NWP models as they approach kilometer scale resolution? And on cost again: as these models currently use a very coarse radiation time step compared to the ones reported in the paper, such as 15 minutes (AROME 2.5 km regional model) or 1 hour (IFS, but 9 km so not yet km-scale), does this mean that dynamic TenStream would in fact incur a much bigger cost increase for such models than those given in Table 1, or does the coarser spatial resolution compared to LES mean that dynamic TenStream's convergence would still be adequate with relatively coarse radiation time steps?
Minor comments:
Section 2.1. For the direct radiation, what is the advantage of having 3 streams in the independent x, y, z directions rather than two streams to/from the direction of the sun?
L114: Does TenStream's use of an external linear algebra library mean that its implementation is computationally efficient and exploits parallelism but dynamic TenStream currently does not, and if so, can the speed-up reported in Table 1 be improved further in the future?
L114: Does PETSc run on GPUs? Do you think GPU acceleration is promising for (dynamic) tenStream?
L272. Has TenStream been evaluated across a wider range of solar zenith angles and is its performance sensitive to it?
L474-495. Interesting, what is the reason for TenStream having a worse surface irradiance bias than delta-Eddington?
L540-544. This is an example of probably unnecessary detail and wordiness (4 lines of text to introduce a plot similar to one already shown).
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC1
RC2: 'Reply on RC1', Anonymous Referee #1, 13 Nov 2023
I have now received the software code from the editor (it seems there was some mistake), but my comment can be considered complete.
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC2
AC3: 'Reply on RC1', Richard Maier, 19 Jan 2024
RC3: 'Comment on egusphere-2023-2129', Anonymous Referee #2, 15 Nov 2023
This is a very interesting paper on speeding up three-dimensional (3D) radiative transfer calculations toward potential use in numerical weather prediction (NWP).
I am very impressed by the paper. It is an important topic, as 3D radiative transfer will require attention as NWP models move to higher resolution.
The methodological advances are carefully designed and effective. I like that the basic ideas are simple and clever and intuitive (e.g., using time-stepping to update the radiative field, and using incomplete solves), while careful attention to details is also crucial to the success of the method (e.g., in the details of the Gauss-Seidel iterations).
The comparisons in the paper are also thorough and include comparisons to a 1D delta-Eddington solver, a 3D Monte Carlo solver, and the original TenStream solver. It is very valuable to have each one of these comparisons, since they span a range of options for speed and accuracy.
The limitations of different methods are also discussed. For instance, the new method is slightly slower than 1D delta-Eddington, and not as accurate as 3D Monte Carlo when operated at lower calling frequencies. I appreciate the attention given to these limitations.
It is a very good paper in all aspects: comprehensive, careful, well-written. I appreciated the schematic illustrations, which are helpful for clarifying technical details and main ideas.
I think the paper could be accepted in its current form, but I will mention one specific comment that the authors may wish to address.
Specific comment:
The title mentions NWP as the aim. Then the paper presents results for hectometer-scale grid spacings of large-eddy simulations. On the other hand, I would imagine that NWP will be operating at kilometer-scale horizontal grid spacings for quite some time into the future. If that is the case, then will a major modification of your methods be required in order to work effectively with kilometer-scale horizontal grid spacings, where propagation of radiation in horizontal directions is not well-resolved? I would think so.
If you agree that major modification of your methods will be required in order to work effectively with kilometer-scale horizontal grid spacings, then I would suggest a change to the title (and also possibly some changes in the Introduction section and Summary and outlook section). For instance, in the title, possibly change 'A dynamic approach to' to 'A dynamic approach toward', or change 'in NWP' to 'in LES' or 'in hectometer-scale NWP'. Then you could save the NWP emphasis for a later paper when you can address the difficulties that will arise in using dynamic TenStream on actual NWP models with kilometer-scale grid spacing.
This was the only issue I want to mention, and I otherwise was pleased and impressed by the careful comparisons and discussions of limitations.
Technical correction:
Line 505: "In here" should be just "Here"
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC3
AC4: 'Reply on RC3', Richard Maier, 19 Jan 2024
CEC1: 'Comment on egusphere-2023-2129', Juan Antonio Añel, 19 Nov 2023
Dear authors,
After checking your manuscript, it has come to our attention that it does not comply with our Code and Data Policy.
https://www.geoscientific-model-development.net/policies/code_and_data_policy.html
You have archived your code in a web page (libRadtran.org) that does not comply with our trustable permanent archival policy. Therefore, you have to publish your code in one of the appropriate repositories according to our policy. In this way, you must reply to this comment with the link to the repository used in your manuscript, with its DOI. The reply and the repository should be available as soon as possible and before the Discussions stage is closed. Also, you must include in a potentially reviewed version of your manuscript the modified 'Code and Data Availability' section and the DOI of the code.
Please note that if you fail to comply with this request, we will have to reject your manuscript for publication. Actually, your manuscript should not have been accepted in Discussions, given this lack of compliance with our policy.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2023-2129-CEC1
CC2: 'Reply on CEC1', Bernhard Mayer, 06 Dec 2023
Dear Executive Editor,
our webpage (libRadtran.org) has been online for more than 20 years. All model versions are available, including the first public libRadtran version 0.03 from 1997. I assume this exceeds the lifetime of most "official archives" and was therefore accepted by the handling editor. We were actually referring to your policy "Alternatively, for established models, there may be an existing means of accessing the code through a particular system", as stated at https://www.geoscientific-model-development.net/about/manuscript_types.html
A 25-year-old model with typically one new version per year certainly qualifies as an "established model".
Sincerely, Bernhard Mayer.
Citation: https://doi.org/10.5194/egusphere-2023-2129-CC2
CEC2: 'Reply on CC2', Juan Antonio Añel, 06 Dec 2023
Dear Prof. Mayer,
Unfortunately, your reply does not address the issue that I have noted. We cannot accept the libRadtran.org webpage as a repository. It does not comply with the requirements of trustable repositories. These acceptable repositories are very limited and meet several stringent conditions.
Therefore, I must insist that you upload your code to one of the long-term, trusted repositories accepted according to our policy. If you do not, we will have to reject your manuscript for publication because it lacks compliance with the journal's policy.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2023-2129-CEC2
AC1: 'Reply on CEC1', Richard Maier, 08 Dec 2023
Dear Executive Editor,
we have uploaded libRadtran version 2.0.5.1 including the new dynamic TenStream solver to Zenodo: https://zenodo.org/records/10288179 (DOI: 10.5281/zenodo.10288179). For the potentially reviewed version, we would change the “Code and Data Availability” section of the paper as follows:
“The newly developed dynamic TenStream solver presented in this paper was developed as part of the libRadtran library for radiative transfer (Emde et al., 2016) and can be accessed via Mayer et al. (2023). Its user manual can be found in the "doc" folder of the library. The shallow cumulus cloud time series used to evaluate the performance of the new solver has been published by Jakub and Gregor (2022), with the modifications and methods applied to it to reproduce the results of Sect. 4 described in Sect. 3 of this paper.”
where Mayer et al. (2023) refers to:
Mayer, B., Emde, C., Gasteiger, J., Kylling, A., Jakub, F., and Maier, R.: libRadtran – library for radiative transfer – version 2.0.5.1, https://doi.org/10.5281/ZENODO.10288179, Zenodo [code], 2023.
Kind regards,
Richard Maier
Citation: https://doi.org/10.5194/egusphere-2023-2129-AC1
RC4: 'Comment on egusphere-2023-2129', Anonymous Referee #3, 20 Nov 2023
* General comments:
This is a welcome update of the TenStream 3D radiative transfer (RT) code that already fills a major gap in LES modeling capability, namely, to perform 3D RT broadband radiation budget estimation for Large-Eddy Simulation (LES) models. LES is now routinely used in cloud-scale process modeling to address some of predictive climate science's most stubborn issues, such as cloud feedbacks and aerosol-cloud interactions.
However, in spite of generating fully 3D (i.e., vertically-developed) clouds driven by convective dynamics, the RT parameterizations used in LES are still too often heritage codes from Global Climate Models (GCMs) where nothing less than ~50 to 100 km in scale is resolved, hence all clouds and many cloud systems are sub-grid. A typical aspect ratio for a GCM cloudy column is therefore on the order of 1-to-10; thus, some form of 1D RT that captures the internal variability of the clouds (e.g., McICA) is justified since little radiation will be leaked through the horizontal boundaries anyway. In sharp contrast, a cloudy column in an LES has the opposite aspect ratio: say, 5 km by 50 m, hence about 100-to-1. Even cloud-resolving models (CRMs), say, at 5 km by 0.5 km are 10-to-1. NWP models are heading into that kind of spatial resolution as well. So there is plenty of opportunity for net horizontal fluxes to develop across grid-cell facets, starting with direct shadowing of neighboring cells in the anti-solar direction. The TenStream model is purposefully designed to account for this 3D RT in terms of radiation energetics, hence fluxes, not radiances, as required for computing heating rate profiles and net fluxes through the top and bottom boundaries.
The new _dynamic_ TenStream model is designed to address the issue of computational efficiency that is in the way of the general acceptance of TenStream in the LES community for operational implementation. Specifically, it brings CPU time allocation down to ~3x the baseline cost of 1D RT, and does so by cutting a few corners, which could carry a cost in accuracy. Therefore, dynamic TenStream is benchmarked for accuracy against the original TenStream, as well as 1D RT (delta-Eddington) and full 3D RT (MYSTIC). Its accuracy is on par with the original TenStream, which is already a vast improvement in accuracy over radiation budget estimation using standard 1D RT.
The paper is well written and illustrated. It should be published by GMD after a minor revision that addresses the following issues.
* Specific comments:
(1) Careful attention is paid to the heating-rate profile and surface irradiance/flux. However, it seems to me that the outgoing TOA flux is also important. Maybe TenStream enforces radiant energy conservation in such a way that the TOA flux is as accurate as the rest, but that isn't obvious to this reviewer. At a minimum, some kind of statement on TOA flux accuracy is in order.
(2) The temporal down-sampling and the incomplete solves naturally cause the new model to drift away from the original counterpart. Would it not be beneficial to occasionally "reset" this drift to zero by calling the original TenStream? Of course there is a whole study to perform about when to do this operationally, without the benchmark information at hand.
(3) Although it should have been done when documenting the original TenStream model, it would be good to look into the past to find models with similar mathematical structure in terms of radical angular simplification compared to standard 3D RT solvers, more precisely with improved efficiency in mind. Can I suggest a few?
- an original "6-flux" model, applied to homogeneous plane-parallel media (but with potential for heterogeneous media):
Chu, C. M. and Churchill, S. W., 1955. Numerical solution of problems in multiple scattering of electromagnetic radiation. The Journal of Physical Chemistry, 59(9), pp. 855-863.
- a discrete-angle RT formalism predicated on regular tessellations of 2D and 3D spaces, seeking the minimal number of directions to capture 3D RT effects:
Lovejoy, S., Davis, A., Gabriel, P., Schertzer, D. and Austin, G. L., 1990. Discrete angle radiative transfer: 1. Scaling and similarity, universality and diffusion. Journal of Geophysical Research: Atmospheres, 95(D8), pp. 11699-11715.
- a 2D (4-stream) RT model in a deterministic fractal medium, emphasizing numerical implementation (successive over-relaxation scheme):
Davis, A., Gabriel, P., Lovejoy, S., Schertzer, D. and Austin, G. L., 1990. Discrete angle radiative transfer: 3. Numerical results and meteorological applications. Journal of Geophysical Research: Atmospheres, 95(D8), pp. 11729-11742.
- the same 2D (4-stream) RT model but in a random multifractal medium, emphasizing numerical implementation (Monte Carlo scheme):
Davis, A. B., Lovejoy, S. and Schertzer, D., 1991, November. Discrete-angle radiative transfer in a multifractal medium. In Wave Propagation and Scattering in Varied Media II (Vol. 1558, pp. 37-59). SPIE.
- vastly faster solution of the 4-stream model using sparse matrix inversion:
Lovejoy, S., Watson, B. P., Grosdidier, Y. and Schertzer, D., 2009. Scattering in thick multifractal clouds, Part II: Multiple scattering. Physica A: Statistical Mechanics and its Applications, 388(18), pp. 3711-3727.
* Technical corrections:
Title: The application to NWP models is both inspirational and aspirational. Here, however, the authors only get as far as LES, or CRM (100 m grid spacing). A more accurate title is in order.
l. 99: i.e., e.g., (need commas, I think)
l. 100: "n1" --> what is the "1" for?
l. 104: first "out" --> not italics
Fig. 3: For SZA near 45 deg, one could use a diagonal sweep through the grid? Same for ~45 deg in azimuth? Admittedly more tricky to code, but it would follow more closely the propagation of direct sunlight. No?
(schematic below should be rendered with an equal-size font)

 _____ _____ _____ _____ ___ etc.
|     |     |     |     |
|  1  |  2  |  6  |  7  |  15
|     |     |     |     |
 _____ _____ _____ _____ ___
|     |     |     |     |
|  3  |  5  |  8  |  14 |
|     |     |     |     |
 _____ _____ _____ _____ ___
|     |     |     |     |
|  4  |  9  |  13 |     |
|     |     |     |     |
etc.

l. 190: "this direction" -->? horizontal scan
S. 3.1 (beginning): specify domain size in cells _and_ km
l. 262: specify domain height (in km too)
Eqs. (6)-(7): why not look at TOA fluxes as well?
l. 546: My first encounter with the notion of thermal "shadows". Is there a reference in the literature?
l. 575: Clarify "feedback effect". Are the LES dynamics driven by a 3D RT model? Or is this a purely (instantaneous) 3D RT effect? BTW, what radiation scheme was used in the LES runs? It should be specified in Section 3.1 (I'm assuming a standard 1D RT model, but may be wrong).
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC4
AC2: 'Reply on RC4', Richard Maier, 19 Jan 2024
RC5: 'Comment on egusphere-2023-2129', Anonymous Referee #4, 27 Nov 2023
Summary
This paper describes an updated version of the TenStream solver, which can be used to solve radiation in high-resolution numerical models such as atmospheric Large-Eddy Models. This new "dynamic" version represents an improvement in terms of computational speed compared to the original TenStream. It relies on the same radiative transfer model, but its resolution is accelerated using two fundamental ideas. The first one is that previously computed radiation fields can be used as a first guess in the numerical resolution of the linear system corresponding to the TenStream model, which is referred to as a "dynamic" approach or "time-stepping" scheme because of the similarity with the resolution of advection in the dynamical core of atmospheric models. The second idea is that using an iterative method, namely the Gauss-Seidel method, to solve the linear system starting from this first guess offers the possibility to stop the calculation after a few iterations, using the resulting field even if it has not converged toward the solution. This is referred to as an "incomplete solve". After exposing these ideas and describing their implementation in the dynamic TenStream solver, the authors examine the errors introduced by the fact that infrequent calls to radiation will lead to starting from a "bad" first guess, increasing the error associated with incomplete solves compared to more frequent calls for the same number of Gauss-Seidel iterations, as well as errors introduced by the fact that the solves are incomplete, by comparing their results with those predicted by the full TenStream solver given the same input fields. Their conclusions are that the dynamic TenStream is significantly faster than the original TenStream, while being mostly as accurate even using as few as 2 Gauss-Seidel iterations at each radiation call.
General comments
I find the paper of great interest. It reports important advances in the field of 3D radiation modeling and its numerical resolution, working towards replacing overly simplified and strongly biased 1D radiation models by their 3D counterparts. I find the manuscript very clear and well organized, and the demonstration of the capabilities of the dynamic TenStream solver convincing. I appreciated the detailed explanations on the models and evaluation methods. I found the part where the results are discussed a little less satisfying, but I understand that much more work might be needed to really understand the biases of the different models and that it probably falls out of the scope of the present study.
In the following I list some questions and suggestions that I think would make the manuscript even clearer. They are given in a chronological manner rather than per importance. I trust the authors' judgement in the relevance of my suggestions and questions and would recommend publication even if not all my comments are addressed in the revised version.
Specific comments
- Mostly in the Abstract and Introduction but also elsewhere in the paper: the distinction between sub-grid and inter-column "3D effects" is not clear enough and I am afraid it might be confusing for a non-expert reader. For instance in the Introduction L.30-33, it is mentioned that NWP models still use 1D ICA RT schemes, by which I think you mean "solve radiation independently in each model column". Immediately after this statement comes "such as the McICA", which is indeed a 1D RT solver, but here the ICA refers to the neglect of *subgrid* 3D effects (that is, between stochastically generated 1D profiles or "subcolumns"). Later on, you describe SPARTACUS, which is of a very different nature from the TenStream and NCA models, and only there the distinction between inter-column and sub-grid 3D effects is mentioned. I suggest you clarify from the beginning of the Introduction that this distinction exists and that your work addresses the resolution of inter-column horizontal transport. I also feel this distinction is lacking when you write that the 3D effects are becoming more important as the horizontal resolution of NWP models increases. I would rather say that the partition between subgrid and inter-column 3D effects depends on the host model horizontal resolution and that, as we go toward higher resolution, it becomes more important to solve horizontal transfers between columns and less so at the subgrid scale.
- One condition for the Dynamic TenStream to work is that the radiation field does not change too much between two radiation calls, so that the field used as first guess is already close enough to the solution that only a few iterations of the Gauss-Seidel algorithm are needed. It made me wonder if the radiation field was advected with the rest of the atmospheric fields so that it still matched an advected cloud field and the largest errors were mostly limited to cloud birth and death between two radiation time steps?
- How are the thermal sources handled by the Gauss-Seidel method? I imagine they are calculated at the beginning of the iterations and somehow part of the first guess but could you explain how it works exactly? Maybe comment on the fact that B is absent from eq. (2)?
- In Fig. 3, I don't understand how the fluxes entering the domain at the borders would systematically be "updated right from the beginning"? From what I understand, if the BC are periodic for instance, then the incoming flux at the left-side wall would be updated only after the outgoing fluxes at the right-side wall have been calculated? In a parallelized Dynamic TenStream, the fluxes at the subdomain boundaries would only be updated at the end of the calculation as mentioned at L.201 and hence the incoming fluxes at the borders used at a given time would be the ones from the calculations at the previous radiation call?
- L.154-156, solving for a clear-sky situation does not automatically imply that there is no horizontal variability in the model; e.g. specific humidity or surface albedo could still vary on the horizontal. In which case, shouldn't the spin-up be performed on the entire model grid? Would that still be manageable? Wouldn't it be cheaper to use the classical TenStream solver for initiating the Dynamic TenStream? At L.288, it is said that the classical TenStream is not used for initialization to avoid relying on the PETSc library; could you elaborate a little more on that, and maybe mention it when the spin-up is first discussed in Sec. 2.2.2?
- L.254, "our solver does not yet take sub-grid scale cloud variability into account": any idea how you would do that? This is probably of great importance for NWP and without it the TenStream solver(s) might be restricted to LES where grid boxes might be considered homogeneous?
- L.257 "to avoid problems with artificially low LWC at cloud edges [...]" were you able to quantify the error in the radiative field induced by smoothing the cloud field vs. by subsampling it at a coarser resolution? Or could you cite a study demonstrating that one is better than the other?
- L.426-428, I disagree with "the newly developed solver is able to almost perfectly reproduce the results of the original TenStream solver whenever called". Looking at Fig. 6b, after a few time steps it seems that the Dynamic TenStream for dtrad=30s line is always above the TenStream lines. Similarly, I disagree with "our new solver even performs better than the delta-Eddington solver at a calling frequency of 10 s when it is operated at a calling frequency of 30 s" at L.430-431. Looking at Fig.6b again, it seems that the errors associated with the Dynamic TenStream for dtrad=30s become larger than those associated with the delta-Eddington for dtrad=10s after around 8200 s.
- Looking at Fig. 7b, it is interesting that the dynamic TenStream solver bias in the thermal partially compensates the original TenStream bias and it might not be for good reasons e.g. the original TenStream is not diffusive enough in the thermal and the incomplete solving in the dynamic approach adds numerical diffusion making the solution closer to the reference but for unphysical reasons?
- In Fig. 9a, it is also interesting that the mean bias is larger in the TenStream solvers than in the delta-Eddington. I think this might be very dependent on the solar zenith angle: 3D effects on the mean surface fluxes go from positive to negative as the sun goes from zenith to horizon and are usually close to zero for angles between 40 and 50 degrees from zenith in cumulus cloud fields (depending on cloud and surface properties). This is because the overestimation by 1D models of direct flux reaching the surface compensates the underestimation of diffuse almost perfectly at these angles. This solar angle dependence would not explain Fig. 9b though, but here the TenStream and delta-Eddington errors are of the same magnitude albeit of opposite sign.
- Even if the TenStream solvers clearly perform radically better than delta-Eddington, it is difficult to imagine how the remaining errors with respect to MYSTIC might affect the simulation once it is used online. Do you have any insights on that, from the literature maybe? For instance it is not obvious to me if it would be preferable to have the right mean flux but with the wrong spatial structure, or the opposite?
- I find it a little frustrating that all simulations have been performed with two Gauss-Seidel iterations. No information on convergence speed is provided in the paper whereas from what I understand of the method there is a tradeoff to be found between frequency of radiation call and number of iterations of the Gauss-Seidel method?
- L.559 I disagree with "almost perfectly". This formulation is not great anyway, as something that is not "entirely" perfect is by definition imperfect.
- L.570 I disagree with "full three-dimensional radiative transport" as it is far from being full considering the limited number of streams and other remaining approximations.
Technical corrections
- First paragraph of Introduction, I would also mention the importance of surface fluxes and not just heating rates.
- L.39-40, add "in the solar spectral range"?
- 2.1 title: I think you describe more than the TenStream "solver"; you describe the underlying radiative transfer "model". Would it be fair to say that this same model can either be solved as in the original TenStream solver, or as in the Dynamic TenStream?
- L.88 I was bothered by the use of "transmittance" here, as the a-coefficient also accounts for incoming scattering, and I thought that transmittance was defined as the complement of extinction along a given line of sight; but I might be wrong.
- In Fig. 3, it took me some time to understand that horizontal arrows between horizontally adjacent grid-boxes, as well as one of the two vertical downwelling arrows between vertically adjacent grid-boxes, represent direct solar radiation propagation. It might be worth it to mention it in the caption or to distinguish them somehow or maybe remove them from the schematics?
- In Fig.5, consider using a more contrasted color palette for the various circles?
- Figs. 6-9 are impossible to read for color-blind people.
- L.448 "the dynamic TenStream solver overestimates thermal heating rates" is not very clear here, do you mean overestimates their magnitude knowing that they are negative (i.e. they are more negative than the classical TenStream)?
- L.548 I was bothered by the use of "emission" here as I think it might be confusing; consider sticking to "flux" or "irradiance"?
- Page 27, why not use the more precise term of quadrature point instead of bands?
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC5 - AC5: 'Reply on RC5', Richard Maier, 19 Jan 2024
-
CC1: 'Comment on egusphere-2023-2129: some suggestions to share after a group discussion', Chiel van Heerwaarden, 30 Nov 2023
I am writing this comment on behalf of our research group that works on understanding interactions between clouds, radiation, and the land surface. One of our main research topics is developing and using large-eddy simulations with coupled 3D radiation, and for that reason, we studied this paper together with great interest. Let me start by congratulating the authors on their paper. The Munich group has pioneered the coupling of large-eddy simulations with 3D radiation with their TenStream solver, and this is a very interesting further development of that method. Based on our group discussion, we would like to share two suggestions that could help in improving the paper.
Suggestion 1: comparison to alternatives to n-stream methods
It would be nice if the authors could extend their introduction by adding some discussion on alternative methods to the TenStream solver. We believe that in recent years, there has been significant progress in ray tracing of large-eddy simulation fields of cloudy boundary layers, with the papers of Najda Villefranque and colleagues (JAMES, 2019) and Jake Gristey and colleagues (JAS, 2020, GRL 2020) as prominent examples. Also, in our group, we developed a GPU ray tracer, which we coupled to our large-eddy simulation code to study the evolution of shallow cumulus clouds (Veerman et al., 2022, GRL), inspired by earlier work by the Munich group. Then, the recent work of Du and Stechmann (JCP, 2023) on spectral element modeling looks rather promising as well, although coupling with cloud-resolving models remains future work there. To conclude, a more elaborate comparison of n-stream solvers to ray tracing and spectral-element methods could help the reader understand why the authors believe their method is the way to bring 3D radiation to operational weather prediction models.
Suggestion 2: discussion on memory usage of the solver
The authors present a very extensive performance analysis of their method, which shows that they can deliver an excellent speed up with respect to the original TenStream solver. This in itself is a great result, and the way this is achieved – keeping the fluxes in memory – is clever, because it removes the need for global communication and for a linear system solver. The description omits, however, a discussion on the most impactful consequence of keeping fluxes in memory, namely memory usage. We did a back-of-the-envelope calculation: if every flux (10 diffuse, 3 direct, 10 thermal) for every quadrature point needs to be kept in memory, and one uses a set of 54 (SW) and 67 (LW) quadrature points, then the dynamic TenStream solver requires (10+3) * 54 + 10 * 67 = 1372 permanent three-dimensional fields for the solver. While the authors discuss in the final sections the benefit of smaller quadrature-point sets, the exact memory footprint of the dynamic solver with respect to the original TenStream is not discussed. We believe this number is very relevant if the ultimate aim is to include this solver in an operational weather model.
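This back-of-the-envelope estimate can be written out as a short sketch. The stream and quadrature-point counts are the ones quoted above; the grid dimensions and single precision are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope memory estimate for the dynamic TenStream solver.
def n_persistent_fields(n_sw_quad=54, n_lw_quad=67,
                        n_diffuse=10, n_direct=3, n_thermal=10):
    """Number of permanent 3D flux fields kept between radiation calls."""
    return (n_diffuse + n_direct) * n_sw_quad + n_thermal * n_lw_quad

fields = n_persistent_fields()      # (10+3)*54 + 10*67 = 1372 fields
nx, ny, nz = 256, 256, 100          # assumed grid size, for illustration only
gib = fields * nx * ny * nz * 4 / 2**30   # 4 bytes per value (single precision)
print(fields)                       # 1372
print(round(gib, 1))                # ~33.5 GiB for this example grid
```

Scaling the last two lines to an actual model grid and precision would give the concrete footprint the comment asks for.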
Citation: https://doi.org/10.5194/egusphere-2023-2129-CC1 - AC6: 'Reply on CC1', Richard Maier, 19 Jan 2024
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-2129', Anonymous Referee #1, 11 Nov 2023
This paper describes a method for 3D radiative transfer that could be computationally affordable enough to be used in high-resolution models. The idea of treating radiation more akin to dynamics is intriguing and as far as I know novel. The results presented are state-of-the-art in terms of speed-accuracy tradeoff (at least for 3D solvers) and potentially very significant for the advancement of NWP models, which are already configured at resolutions where 3D radiative effects are notable yet are currently ignored in all operational models.
My major comments are provided below and relate mainly to the computational aspects, which deserve more attention. Some of my questions may be adequate to address in the review and not in the paper, as it's already long (and concerned mainly with demonstrating the feasibility of the method - which it does excellently!), but a few clarifying sentences and providing absolute runtimes and/or measures of floating point operations in the paper would go a long way in informing the reader how fast dynamic TenStream potentially is, and whether it could be a real contender to operational radiation schemes outside of LES. Besides this, I think the paper would really benefit if the authors tried to make it more concise by avoiding repetition and removing unnecessary words and sentences. The results shown are relevant but they are sometimes described in a very wordy manner. Finally, the code does not seem to be actually available to download at the current time, which I understand is against GMD policy.
Other major comments:
1a. In general it's a bit difficult to fully understand the method (although Figure 3 does a good job at illustrating it), especially when it comes to its implementation in code and its parallelism. The future tense used in L198-204 implies that the parallelism is not yet implemented. My understanding of dynamic TenStream would be something like this for a simplified 1D case:
! Downwelling flux; boundary condition at the top
fd(1) = incsol
! First guess: fluxes from the previous radiation time step
fd(2:nlev) = fd_prev_timestep(2:nlev)
! Gauss-Seidel incomplete solves, not parallelizable across iterations
do jiter = 1, niter
  ! Vectorization or other parallelism, array notation
  fd(2:nlev) = T(1:nlev-1) * fd(1:nlev-1)
end do
This would correspond to the radiative flows in individual grid boxes being computed concurrently i.e. in parallel within a single step of Fig 3, is this right?
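A runnable version of this 1D sketch, in Python for concreteness (the names incomplete_solve, fd_prev, T, incsol, niter mirror the pseudocode above and are my own, not taken from the actual TenStream code):

```python
import numpy as np

def incomplete_solve(fd_prev, T, incsol, niter=2):
    """A few Gauss-Seidel sweeps for a purely downwelling 1D flux,
    warm-started from the previous radiation time step's result."""
    fd = fd_prev.copy()
    fd[0] = incsol                        # boundary condition at the top
    for _ in range(niter):                # incomplete solve: fixed, small niter
        for k in range(1, fd.size):       # sequential sweep; each update
            fd[k] = T[k - 1] * fd[k - 1]  # reuses freshly computed values
    return fd

T = np.full(4, 0.9)                       # layer transmittances (assumed)
fd = incomplete_solve(np.zeros(5), T, incsol=1.0, niter=1)
print(fd)                                 # [1. 0.9 0.81 0.729 0.6561]
```

Note that in this pure-transmission cascade a single sequential sweep already converges, since information only flows downward; it is the scattering coupling between streams and directions that makes the number of sweeps, and the quality of the first guess, matter in the full solver. Whether the inner update can use array notation (a Jacobi-style update, as in the pseudocode above) or must stay sequential (true Gauss-Seidel) is exactly the parallelism question raised here.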
1b. How should the reader interpret the reported speed numbers in terms of effective speed against operational radiation schemes? Is the 1D delta-Eddington reference based on efficient, vectorized code? It is unclear how efficient dynamic TenStream is or could be compared to widely used two-stream codes such as ecRad, which expresses parallelism across g-points, or the RTE+RRTMGP scheme, which vectorizes the column dimension instead. Comparison to other schemes could be greatly facilitated by reporting absolute runtimes, or you could run one of the aforementioned schemes. The potential lack of parallelism and optimization at its current stage can be stressed explicitly, and of course, even if dynamic TenStream is currently much slower than operational schemes, that is not a bad result considering that full 3D solvers have until now been many orders of magnitude more expensive. Finally, it could be very useful to report the number of floating point operations (whether absolute or relative to delta-Eddington), but this may require a library such as GPTL to estimate, and is perhaps not necessary if the other aspects are clarified.
2. Can you discuss whether you see dynamic TenStream to be a potentially viable scheme for global or regional NWP models as they approach kilometer-scale resolution? And on cost again: as these models currently use a very coarse radiation time step compared to the ones reported in the paper, such as 15 minutes (AROME 2.5 km regional model) or 1 hour (IFS, but 9 km so not yet km-scale), does this mean that dynamic TenStream would in fact incur a much bigger cost increase for such models than those given in Table 1, or does the coarser spatial resolution compared to LES mean that dynamic TenStream's convergence would still be adequate with relatively coarse radiation time steps?
Minor comments:
Section 2.1. For the direct radiation, what is the advantage of having 3 streams in the independent x,y,z directions rather than two streams to/from the direction of the sun?
L114: Does TenStream's use of an external linear algebra library mean that its implementation is computationally efficient and exploits parallelism, but dynamic TenStream currently does not? If so, can the speed-up reported in Table 1 be improved further in the future?
L114: Does PETSc run on GPUs? Do you think GPU acceleration is promising for (dynamic) TenStream?
L272. Has TenStream been evaluated across a wider range of solar zenith angles and is its performance sensitive to it?
L474-495. Interesting - what is the reason for TenStream having a worse surface irradiance bias than delta-Eddington?
L540-544. This is an example of probably unnecessary detail and wordiness (4 lines of text to introduce a plot similar to one already shown).
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC1 -
RC2: 'Reply on RC1', Anonymous Referee #1, 13 Nov 2023
I have now received the software code from the editor (seems there was some mistake), but my comment can be considered complete.
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC2 - AC3: 'Reply on RC1', Richard Maier, 19 Jan 2024
-
RC3: 'Comment on egusphere-2023-2129', Anonymous Referee #2, 15 Nov 2023
This is a very interesting paper on speeding up three-dimensional (3D) radiative transfer calculations toward potential use in numerical weather prediction (NWP).
I am very impressed by the paper. It is an important topic, as 3D radiative transfer will require attention as NWP models move to higher resolution.
The methodological advances are carefully designed and effective. I like that the basic ideas are simple and clever and intuitive (e.g., using time-stepping to update the radiative field, and using incomplete solves), while careful attention to details is also crucial to the success of the method (e.g., in the details of the Gauss-Seidel iterations).
The comparisons in the paper are also thorough and include comparisons to a 1D delta-Eddington solver, a 3D Monte Carlo solver, and the original TenStream solver. It is very valuable to have each one of these comparisons, since they span a range of options for speed and accuracy.
The limitations of different methods are also discussed. For instance, the new method is slightly slower than 1D delta-Eddington, and not as accurate as 3D Monte Carlo when operated at lower calling frequencies. I appreciate the attention given to these limitations.
It is a very good paper in all aspects: comprehensive, careful, well-written. I appreciated the schematic illustrations, which are helpful for clarifying technical details and main ideas.
I think the paper could be accepted in its current form, but I will mention one specific comment that the authors may wish to address.
Specific comment:
The title mentions NWP as the aim. Then the paper presents results for hectometer-scale grid spacings of large-eddy simulations. On the other hand, I would imagine that NWP will be operating at kilometer-scale horizontal grid spacings for quite some time into the future. If that is the case, then will a major modification of your methods be required in order to work effectively with kilometer-scale horizontal grid spacings, where propagation of radiation in horizontal directions is not well-resolved? I would think so.
If you agree that major modification of your methods will be required in order to work effectively with kilometer-scale horizontal grid spacings, then I would suggest a change to the title (and also possibly some changes in the Introduction section and Summary and outlook section). For instance, in the title, possibly change 'A dynamic approach to' to 'A dynamic approach toward', or change 'in NWP' to 'in LES' or 'in hectometer-scale NWP'. Then you could save the NWP emphasis for a later paper when you can address the difficulties that will arise in using dynamic TenStream on actual NWP models with kilometer-scale grid spacing.
This was the only issue I want to mention, and I otherwise was pleased and impressed by the careful comparisons and discussions of limitations.
Technical correction:
Line 505: "In here" should be just "Here"
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC3 - AC4: 'Reply on RC3', Richard Maier, 19 Jan 2024
-
CEC1: 'Comment on egusphere-2023-2129', Juan Antonio Añel, 19 Nov 2023
Dear authors,
After checking your manuscript, it has come to our attention that it does not comply with our Code and Data Policy.
https://www.geoscientific-model-development.net/policies/code_and_data_policy.html
You have archived your code in a web page (libtradtran.org) that does not comply with our trustable permanent archival policy. Therefore, you have to publish your code in one of the appropriate repositories according to our policy. In this way, you must reply to this comment with the link to the repository used in your manuscript, with its DOI. The reply and the repository should be available as soon as possible and before the Discussions stage is closed. Also, you must include in a potentially reviewed version of your manuscript the modified 'Code and Data Availability' section and the DOI of the code.
Please note that if you fail to comply with this request, we will have to reject your manuscript for publication. Actually, your manuscript should not have been accepted in Discussions, given this lack of compliance with our policy.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2023-2129-CEC1 -
CC2: 'Reply on CEC1', Bernhard Mayer, 06 Dec 2023
Dear Executive Editor,
our webpage (libRadtran.org) has been online for more than 20 years. All model versions are available, including the first public libRadtran version 0.03 from 1997. I assume this exceeds the lifetime of most "official archives" and was therefore accepted by the handling editor. We were actually referring to your policy "Alternatively, for established models, there may be an existing means of accessing the code through a particular system", as stated at https://www.geoscientific-model-development.net/about/manuscript_types.html
A 25-year-old model with typically one new version per year certainly qualifies as an "established model".
Sincerely, Bernhard Mayer.
Citation: https://doi.org/10.5194/egusphere-2023-2129-CC2 -
CEC2: 'Reply on CC2', Juan Antonio Añel, 06 Dec 2023
Dear Prof. Maier,
Unfortunately, your reply does not address the issue that I have noted. We cannot accept the libRadtran.org webpage as a repository. It does not comply with the requirements of trustable repositories. These acceptable repositories are very limited and meet several stringent conditions.
Therefore, I must insist that you upload your code to one of the long-term, trusted repositories accepted according to our policy. If you do not, we will have to reject your manuscript for publication because it lacks compliance with the journal's policy.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2023-2129-CEC2
-
AC1: 'Reply on CEC1', Richard Maier, 08 Dec 2023
Dear Executive Editor,
we have uploaded libRadtran version 2.0.5.1 including the new dynamic TenStream solver to Zenodo: https://zenodo.org/records/10288179 (DOI: 10.5281/zenodo.10288179). For the potentially reviewed version, we would change the “Code and Data Availability” section of the paper as follows:
“The newly developed dynamic TenStream solver presented in this paper was developed as part of the libRadtran library for radiative transfer (Emde et al., 2016) and can be accessed via Mayer et al. (2023). Its user manual can be found in the "doc" folder of the library. The shallow cumulus cloud time series used to evaluate the performance of the new solver has been published by Jakub and Gregor (2022), with the modifications and methods applied to it to reproduce the results of Sect. 4 described in Sect. 3 of this paper.”
where Mayer et al. (2023) refers to:
Mayer, B., Emde, C., Gasteiger, J., Kylling, A., Jakub, F., and Maier, R.: libRadtran – library for radiative transfer – version 2.0.5.1, https://doi.org/10.5281/ZENODO.10288179, Zenodo [code], 2023.
Kind regards,
Richard Maier
Citation: https://doi.org/10.5194/egusphere-2023-2129-AC1
-
RC4: 'Comment on egusphere-2023-2129', Anonymous Referee #3, 20 Nov 2023
* General comments:
This is a welcome update of the TenStream 3D radiative transfer (RT) code that already fills a major gap in LES modeling capability, namely, to perform 3D RT broadband radiation budget estimation for Large-Eddy Simulation (LES) models. LES is now routinely used in cloud-scale process modeling to address some of predictive climate science's most stubborn issues, such as cloud feedbacks and aerosol-cloud interactions.
However, in spite of generating fully 3D (i.e., vertically-developed) clouds driven by convective dynamics, the RT parameterizations used in LES are still too often heritage codes from Global Climate Models (GCMs) where nothing less than ~50 to 100 km in scale is resolved, hence all clouds and many cloud systems. A typical aspect ratio for a GCM cloudy column is therefore on the order of 1-to-10, thus, some form of 1D RT that captures the internal variability of the clouds (e.g., McICA) is justified since little radiation will be leaked through the horizontal boundaries anyway. In sharp contrast, a cloudy column in an LES has the opposite aspect ratio: say, 5 km by 50 m, hence about 100-to-1. Even cloud-resolving models (CRMs), say, at 5 km by 0.5 km are 10-to-1. NWP models are heading into that kind of spatial resolution as well. So there is plenty of opportunity for net horizontal fluxes to develop across grid-cell facets, starting with direct shadowing of neighboring cells in the anti-solar direction. The TenStream model is purposefully designed to account for this 3D RT in terms of radiation energetics, hence fluxes, not radiances, as required for computing heating rates profiles and net fluxes through the top and bottom boundaries.
The new _dynamic_ TenStream model is designed to address the issue of computational efficiency that is in the way of the general acceptance of TenStream in the LES community for operational implementation. Specifically, it brings CPU time allocation down to ~3x the baseline cost of 1D RT, and does so by cutting a few corners, which could carry a cost in accuracy. Therefore, dynamic TenStream is benchmarked for accuracy against the original TenStream, as well as 1D RT (delta-Eddington) and full 3D RT (MYSTIC). Its accuracy is on par with the original TenStream, which is already a vast improvement in accuracy for radiation budget estimation using standard 1D RT.
The paper is well written and illustrated. It should be published by GMD after a minor revision that addresses the following issues.
* Specific comments:
(1) Careful attention is paid to the heating-rate profile and surface irradiance/flux. However, it seems to me that the outgoing TOA flux is also important. Maybe TenStream enforces radiant energy conservation in such a way that the TOA flux is as accurate as the rest, but that isn't obvious to this reviewer. At a minimum, some kind of statement on TOA flux accuracy is in order.
(2) The temporal down-sampling and the incomplete solves naturally cause the new model to drift away from its original counterpart. Would it not be beneficial to occasionally "reset" this drift to zero by calling the original TenStream? Of course there is a whole study to perform about when to do this operationally, without the benchmark information at hand.
(3) Although it should have been done when documenting the original TenStream model, it would be good to look into the past to find models with similar mathematical structure in terms of radical angular simplification compared to standard 3D RT solvers, more precisely with improved efficiency in mind. Can I suggest a few?
- an original "6-flux" model, applied to homogeneous plane-parallel media (but with potential for heterogeneous media):
Chu, C.M. and Churchill, S.W., 1955. Numerical solution of problems in multiple scattering of electromagnetic radiation. The Journal of Physical Chemistry, 59(9), pp.855-863.
- a discrete-angle RT formalism predicated on regular tessellations of 2D and 3D spaces, seeking the minimal number of directions to capture 3D RT effects:
Lovejoy, S., Davis, A., Gabriel, P., Schertzer, D. and Austin, G.L., 1990. Discrete angle radiative transfer: 1. Scaling and similarity, universality and diffusion. Journal of Geophysical Research: Atmospheres, 95(D8), pp.11699-11715.
- a 2D (4-stream) RT model in a deterministic fractal medium, emphasizing numerical implementation (successive over-relaxation scheme):
Davis, A., Gabriel, P., Lovejoy, S., Schertzer, D. and Austin, G.L., 1990. Discrete angle radiative transfer: 3. Numerical results and meteorological applications. Journal of Geophysical Research: Atmospheres, 95(D8), pp.11729-11742.
- the same 2D (4-stream) RT model but in a random multifractal medium, emphasizing numerical implementation (Monte Carlo scheme):
Davis, A.B., Lovejoy, S. and Schertzer, D., 1991, November. Discrete-angle radiative transfer in a multifractal medium. In Wave Propagation and Scattering in Varied Media II (Vol. 1558, pp. 37-59). SPIE.
- vastly faster solution of the 4-stream model using sparse matrix inversion:
Lovejoy, S., Watson, B.P., Grosdidier, Y. and Schertzer, D., 2009. Scattering in thick multifractal clouds, Part II: Multiple scattering. Physica A: Statistical Mechanics and its Applications, 388(18), pp.3711-3727.
* Technical corrections:
Title: The application to NWP models is both inspirational and aspirational. Here, however, the authors only get as far as LES, or CRM (100 m grid spacing). A more accurate title is in order.
l. 99: i.e., e.g., (need commas, I think)
l. 100: "n1" --> what is the "1" for?
l. 104: first "out" --> not italics
Fig. 3: For SZA near 45 deg, one could use a diagonal sweep through the grid? Same for ~45 deg in azimuth? Admittedly more tricky to code, but it would follow more closely the propagation of direct sunlight. No?
(schematic below should be rendered with an equal-size font)
 _____ _____ _____ _____ ___ etc.
|     |     |     |     |
|  1  |  2  |  6  |  7  |  15
|     |     |     |     |
 _____ _____ _____ _____ ___
|     |     |     |     |
|  3  |  5  |  8  |  14 |
|     |     |     |     |
 _____ _____ _____ _____ ___
|     |     |     |     |
|  4  |  9  |  13 |     |
|     |     |     |     |
etc.
l. 190: "this direction" -->? horizontal scan
S. 3.1 (beginning): specify domain size in cells _and_ km
l. 262: specify domain height (in km too)
Eqs. (6)-(7): why not look at TOA fluxes as well?
l. 546: My first encounter with the notion of thermal "shadows". Is there a reference in the literature?
l. 575: Clarify "feedback effect". Are the LES dynamics driven by a 3D RT model? Or is this a purely (instantaneous) 3D RT effect? BTW, what radiation scheme was used in the LES runs? This should be specified in Section 3.1 (I'm assuming a standard 1D RT model, but may be wrong).
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC4 - AC2: 'Reply on RC4', Richard Maier, 19 Jan 2024
-
RC5: 'Comment on egusphere-2023-2129', Anonymous Referee #4, 27 Nov 2023
Summary
This paper describes an updated version of the TenStream solver, which can be used to solve radiation in high-resolution numerical models such as atmospheric Large-Eddy Models. This new "dynamic" version represents an improvement in terms of computational speed compared to the original TenStream. It relies on the same radiative transfer model, but its resolution is accelerated using two fundamental ideas. The first one is that previously computed radiation fields can be used as a first guess in the numerical resolution of the linear system corresponding to the TenStream model, which is referred to as a "dynamic" approach or "time-stepping" scheme because of the similarity with the resolution of advection in the dynamical core of atmospheric models. The second idea is that using an iterative method, namely the Gauss-Seidel method, to solve the linear system starting from this first guess offers the possibility to stop the calculation after a few iterations, using the resulting field even if it has not converged toward the solution. This is referred to as "incomplete solve". After exposing these ideas and describing their implementation in the dynamic TenStream solver, the authors examine the errors introduced by the fact that infrequent calls to radiation will lead to starting from a "bad" first guess, increasing the error associated with incomplete solves compared to more frequent calls, for the same number of Gauss-Seidel iterations; as well as the errors introduced by the fact that the solves are incomplete, by comparing their results with those predicted by the full TenStream solver given the same input fields. Their conclusions are that the dynamic TenStream is significantly faster than the original TenStream, while being mostly as accurate even using as few as 2 Gauss-Seidel iterations at each radiation call.
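The interplay of these two ideas can be illustrated on a generic linear system (a toy example with made-up numbers, not the TenStream equations): when the right-hand side evolves slowly, warm-starting from the previous solution leaves a much smaller error after the same fixed, small number of Gauss-Seidel sweeps than solving from scratch.

```python
import numpy as np

def gauss_seidel(A, b, x0, n_sweeps=2):
    """A fixed number of Gauss-Seidel sweeps: an 'incomplete solve'."""
    x = x0.copy()
    for _ in range(n_sweeps):
        for i in range(len(b)):            # in-place updates (Gauss-Seidel)
            s = A[i] @ x - A[i, i] * x[i]  # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x

rng = np.random.default_rng(0)
n = 20
A = 4.0 * np.eye(n) + 0.1 * rng.random((n, n))   # diagonally dominant system
b_old = rng.random(n)
b_new = b_old + 0.01 * rng.random(n)             # a slowly evolving "scene"
x_new = np.linalg.solve(A, b_new)                # exact reference solution

cold = gauss_seidel(A, b_new, np.zeros(n))                # from scratch
warm = gauss_seidel(A, b_new, np.linalg.solve(A, b_old))  # previous solution
print(np.linalg.norm(warm - x_new) < np.linalg.norm(cold - x_new))  # True
```

Both runs do the same amount of work (two sweeps); the warm start simply begins much closer to the answer, which is the tradeoff between calling frequency and iteration count discussed below.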
General comments
I find the paper of great interest. It reports important advances in the field of 3D radiation modeling and its numerical resolution, working towards replacing overly simplified and strongly biased 1D radiation models by their 3D counterparts. I find the manuscript very clear and well organized, and the demonstration of the capabilities of the dynamic TenStream solver convincing. I appreciated the detailed explanations on the models and evaluation methods. I found the part where the results are discussed a little less satisfying, but I understand that much more work might be needed to really understand the biases of the different models and that it probably falls out of scope of the present study.
In the following I list some questions and suggestions that I think would make the manuscript even clearer. They are given in a chronological manner rather than per importance. I trust the authors' judgement in the relevance of my suggestions and questions and would recommend publication even if not all my comments are addressed in the revised version.
Specific comments
- Mostly in the Abstract and Introduction but also elsewhere in the paper: the distinction between sub-grid and inter-column "3D effects" is not clear enough and I am afraid it might be confusing for a non-expert reader. For instance in the Introduction L.30-33, it is mentioned that NWP models still use 1D ICA RT schemes, by which I think you mean "solve radiation independently in each model column". Immediately after this statement comes "such as the McICA", which is indeed a 1D RT solver, but here the ICA refers to the neglect of *subgrid* 3D effects (that is, between stochastically generated 1D profiles or "subcolumns"). Later on, you describe SPARTACUS, which is of a very different nature from the TenStream and NCA models, and only there the distinction between inter-column and sub-grid 3D effects is mentioned. I suggest you clarify from the beginning of the Introduction that this distinction exists and that your work relates to the resolution of inter-column horizontal transport. I also feel this distinction is lacking when you write that the 3D effects are becoming more important as the horizontal resolution of NWP models increases. I would rather say that the partition between subgrid and inter-column 3D effects depends on the host model horizontal resolution and that, as we go toward higher resolution, it becomes more important to solve horizontal transfers between columns and less so at the subgrid scale.
- One condition for the Dynamic TenStream to work is that the radiation field does not change too much between two radiation calls, so that the field used as first guess is already close enough to the solution that only a few iterations of the Gauss-Seidel algorithm are needed. It made me wonder if the radiation field was advected with the rest of the atmospheric fields so that it still matched an advected cloud field and the largest errors were mostly limited to cloud birth and death between two radiation time steps?
- How are the thermal sources handled by the Gauss-Seidel method? I imagine they are calculated at the beginning of the iterations and somehow part of the first guess but could you explain how it works exactly? Maybe comment on the fact that B is absent from eq. (2)?
- In Fig. 3, I don't understand how the fluxes entering the domain at the borders would systematically be "updated right from the beginning"? From what I understand, if the BC are periodic for instance, then the incoming flux at the left-side wall would be updated only after the outgoing fluxes at the right-side wall have been calculated? In a parallelized Dynamic TenStream, the fluxes at the subdomain boundaries would only be updated at the end of the calculation as mentioned at L.201 and hence the incoming fluxes at the borders used at a given time would be the ones from the calculations at the previous radiation call?
- L.154-156, solving for a clear-sky situation does not automatically imply that there is no horizontal variability in the model, e.g. specific humidity or surface albedo could still vary on the horizontal. In which case, shouldn't the spin-up be performed on the entire model grid? Would that still be manageable? Wouldn't it be cheaper to use the classical TenStream solver for initiating the Dynamic TenStream? At L.288, it is said that the classical TenStream is not used for initialization to avoid relying on PETSc library, could you elaborate a little more on that, and maybe mention it when the spin-up is first discussed in Sec. 2.2.2?
- L.254, "our solver does not yet take sub-grid scale cloud variability into account": any idea how you would do that? This is probably of great importance for NWP, and without it the TenStream solver(s) might be restricted to LES, where grid boxes might be considered homogeneous?
- L.257, "to avoid problems with artificially low LWC at cloud edges [...]": were you able to quantify the error in the radiative field induced by smoothing the cloud field vs. by subsampling it at a coarser resolution? Or could you cite a study demonstrating that one is better than the other?
- L.426-428, I disagree with "the newly developed solver is able to almost perfectly reproduce the results of the original TenStream solver whenever called". Looking at Fig. 6b, after a few time steps it seems that the Dynamic TenStream for dtrad=30s line is always above the TenStream lines. Similarly, I disagree with "our new solver even performs better than the delta-Eddington solver at a calling frequency of 10 s when it is operated at a calling frequency of 30 s" at L.430-431. Looking at Fig.6b again, it seems that the errors associated with the Dynamic TenStream for dtrad=30s become larger than those associated with the delta-Eddington for dtrad=10s after around 8200 s.
- Looking at Fig. 7b, it is interesting that the dynamic TenStream solver's bias in the thermal partially compensates the original TenStream bias, and it might not be for good reasons: e.g., the original TenStream may not be diffusive enough in the thermal, and the incomplete solving in the dynamic approach may add numerical diffusion, making the solution closer to the reference for unphysical reasons?
- In Fig. 9a, it is also interesting that the mean bias is larger in the TenStream solvers than in the delta-Eddington. I think this might be very dependent on the solar zenith angle: 3D effects on the mean surface fluxes go from positive to negative as the sun goes from zenith to horizon and are usually close to zero for angles between 40 and 50 degrees from zenith in cumulus cloud fields (depending on cloud and surface properties). This is because the overestimation by 1D models of direct flux reaching the surface compensates the underestimation of diffuse almost perfectly at these angles. This solar angle dependence would not explain Fig. 9b though, but here the TenStream and delta-Eddington errors are of the same magnitude albeit of opposite sign.
- Even if the TenStream solvers clearly perform radically better than delta-Eddington, it is difficult to imagine how the remaining errors with respect to MYSTIC might affect the simulation once it is used online. Do you have any insights on that, from the literature maybe? For instance it is not obvious to me if it would be preferable to have the right mean flux but with the wrong spatial structure, or the opposite?
- I find it a little frustrating that all simulations have been performed with two Gauss-Seidel iterations. No information on convergence speed is provided in the paper, whereas from what I understand of the method there is a tradeoff to be found between the frequency of radiation calls and the number of iterations of the Gauss-Seidel method?
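The warm-start tradeoff raised in this comment can be sketched generically: the closer two consecutive calls are, the fewer Gauss-Seidel sweeps are needed to reach a given residual. The small diagonally dominant system below merely stands in for the coupled flux equations; it is an illustrative assumption, not the TenStream system itself.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=1000):
    """Plain Gauss-Seidel; returns the solution and the sweeps used."""
    x = x0.astype(float).copy()
    n = len(b)
    for it in range(1, max_iter + 1):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]     # updated value reused immediately
        if np.linalg.norm(A @ x - b) < tol:
            return x, it
    return x, max_iter

n = 20
rng = np.random.default_rng(0)
A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant -> convergent
b_old = rng.random(n)
b_new = b_old + 0.01 * rng.random(n)     # slightly changed "scene"

x_old, it_cold = gauss_seidel(A, b_old, np.zeros(n))   # cold start from zero
_, it_warm = gauss_seidel(A, b_new, x_old)             # warm start: previous solution
# it_warm < it_cold: a small change between calls leaves only a small residual
# to iterate away, which is why a fixed, small sweep count can suffice when
# the radiation time step is short enough.
```

This is exactly the knob the comment points at: one can trade a shorter radiation time step (smaller change between calls) against more sweeps per call.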
- L.559 I disagree with "almost perfectly". This formulation is not great anyway, as something that is not "entirely" perfect is by definition imperfect.
- L.570 I disagree with "full three-dimensional radiative transport" as it is far from being full considering the limited number of streams and other remaining approximations.
Technical corrections
- First paragraph of Introduction, I would also mention the importance of surface fluxes and not just heating rates.
- L.39-40, add "in the solar spectral range"?
- 2.1 title: I think you describe more than the TenStream "solver"; you describe the underlying radiative transfer "model". Would it be fair to say that this same model can either be solved as in the original TenStream solver, or as in the Dynamic TenStream?
- L.88, I was bothered by the use of "transmittance" here, as the a-coefficient also accounts for incoming scattering, and I thought that transmittance was defined as the complement of extinction along a given line of sight; but I might be wrong.
- In Fig. 3, it took me some time to understand that the horizontal arrows between horizontally adjacent grid boxes, as well as one of the two vertical downwelling arrows between vertically adjacent grid boxes, represent the propagation of direct solar radiation. It might be worth mentioning this in the caption, distinguishing these arrows somehow, or removing them from the schematic?
- In Fig.5, consider using a more contrasted color palette for the various circles?
- Figs. 6-9 are impossible to read for color-blind people.
- L.448, "the dynamic TenStream solver overestimates thermal heating rates" is not very clear here: do you mean that it overestimates their magnitude, knowing that they are negative (i.e. they are more negative than with the classical TenStream)?
- L.548 I was bothered by the use of "emission" here as I think it might be confusing; consider sticking to "flux" or "irradiance"?
- Page 27, why not use the more precise term of quadrature point instead of bands?
Citation: https://doi.org/10.5194/egusphere-2023-2129-RC5 - AC5: 'Reply on RC5', Richard Maier, 19 Jan 2024
CC1: 'Comment on egusphere-2023-2129: some suggestions to share after a group discussion', Chiel van Heerwaarden, 30 Nov 2023
I am writing this comment on behalf of our research group, which works on understanding interactions between clouds, radiation, and the land surface. One of our main research topics is developing and using large-eddy simulations with coupled 3D radiation, and for that reason we studied this paper together with great interest. Let me start by congratulating the authors on their paper. The Munich group has pioneered the coupling of large-eddy simulations with 3D radiation with their TenStream solver, and this work is a very interesting further development of that method. Based on our group discussion, we would like to share two suggestions that could help in improving the paper.
Suggestion 1: comparison to alternatives to n-stream methods
It would be nice if the authors could extend their introduction by adding some discussion on alternative methods to the TenStream solver. We believe that in recent years, there has been significant progress in ray tracing of large-eddy simulation fields of cloudy boundary layers, with the papers of Najda Villefranque and colleagues (JAMES, 2019) and Jake Gristey and colleagues (JAS, 2020; GRL, 2020) as prominent examples. Also, in our group, we developed a GPU ray tracer, which we coupled to our large-eddy simulation code to study the evolution of shallow cumulus clouds (Veerman et al., 2022, GRL), inspired by earlier work by the Munich group. Then, the recent work of Du and Stechmann (JCP, 2023) on spectral-element modeling looks rather promising as well, although coupling with cloud-resolving models remains future work there. To conclude, a more elaborate comparison of n-stream solvers to ray-tracing and spectral-element methods could help the reader understand why the authors believe their method is the way to bring 3D radiation to operational weather prediction models.
Suggestion 2: discussion on memory usage of the solver
The authors present a very extensive performance analysis of their method, which shows that they can deliver an excellent speed-up with respect to the original TenStream solver. This in itself is a great result, and the way this is achieved – keeping the fluxes in memory – is clever, because it removes the need for global communication and for a linear system solver. The description omits, however, a discussion of the most impactful consequence of keeping fluxes in memory, namely memory usage. We did a back-of-the-envelope calculation: if every flux (10 diffuse, 3 direct, 10 thermal) for every quadrature point needs to be kept in memory, and one uses a set of 54 (SW) and 67 (LW) quadrature points, then the dynamic TenStream solver requires (10+3) * 54 + 10 * 67 = 1372 permanent three-dimensional fields for the solver. While the authors discuss in the final sections the benefit of smaller quadrature-point sets, the exact memory footprint of the dynamic solver with respect to the original TenStream is not discussed. We believe this number is very relevant if the ultimate aim is to include this solver in an operational weather model.
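The field count in this back-of-the-envelope calculation can be checked, and turned into an actual byte count, in a few lines. The stream and quadrature-point counts are those quoted in the comment; the grid size and precision below are purely illustrative assumptions added to give a sense of scale.

```python
# Field count from the comment: 10 diffuse + 3 direct solar streams over
# 54 shortwave quadrature points, plus 10 thermal streams over 67 longwave
# quadrature points, each needing a permanent 3D field.
diffuse_streams = 10
direct_streams = 3
thermal_streams = 10
solar_gpts = 54
thermal_gpts = 67

n_fields = (diffuse_streams + direct_streams) * solar_gpts \
    + thermal_streams * thermal_gpts   # -> 1372 permanent 3D fields

# Illustrative (assumed) grid and precision, just to translate fields to bytes:
nx, ny, nz = 256, 256, 100
bytes_per_value = 4  # single precision
memory_gib = n_fields * nx * ny * nz * bytes_per_value / 2**30
# ~33.5 GiB on this assumed 256 x 256 x 100 grid in single precision
```

Even for this moderate LES-sized grid, the permanent storage runs to tens of GiB, which illustrates why the memory footprint deserves explicit discussion.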
Citation: https://doi.org/10.5194/egusphere-2023-2129-CC1 - AC6: 'Reply on CC1', Richard Maier, 19 Jan 2024
Richard Maier
Fabian Jakub
Claudia Emde
Mihail Manev
Aiko Voigt
Bernhard Mayer
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.