This work is distributed under the Creative Commons Attribution 4.0 License.
Identification of ice-over-water multilayer clouds using multispectral satellite data in an artificial neural network
Abstract. An artificial neural network (ANN) algorithm, employing several Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) channels, the retrieved cloud phase and total cloud visible optical depth, and temperature and humidity vertical profiles, is trained to detect multilayer (ML) ice-over-water cloud systems identified by matched 2008 CloudSat and CALIPSO (CC) data. The trained ML ANN (MLANN) was applied to 2009 MODIS data, resulting in combined ML and single-layer detection accuracies of 87 % (89 %) and 86 % (89 %) for snow-free (snow-covered) regions during the day and night, respectively. Overall, it detects 55 % and ~30 % of the CC ML clouds over snow-free and snow-covered surfaces, respectively, and has a relatively low false alarm rate. The net gain in accuracy, which is the difference between the true and false ML fractions, is 7.5 % and ~2.0 % over snow-free and snow/ice-covered surfaces, respectively. Overall, the MLANN is more accurate than most currently available methods. When corrected for the viewing zenith angle (VZA) dependence of each parameter, the ML fraction detected is relatively invariant across the swath. Compared to the CC ML variability, the MLANN is robust seasonally and interannually, and produces similar distribution patterns over the globe, except in the polar regions. Additional research is needed to conclusively evaluate the VZA dependence and further improve the MLANN accuracy. This approach should greatly improve the monitoring of cloud vertical structure using operational passive sensors.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-2804', Anonymous Referee #1, 11 Jan 2024
The authors have submitted an interesting manuscript that presents the principle and construction of an artificial neural network (ANN) based algorithm and its results. The objective is the identification of the multilayer character of ice-over-water clouds from MODIS/Aqua measurements. The ANN is trained with information from the active sensors of the A-Train. While several approaches and attempts to retrieve such information about multilayer clouds from passive sensors exist, and they are listed in the manuscript and used for comparison, multilayer qualification is still a quite innovative retrieval target.
The authors argue for the importance of obtaining these single-layer (SL) and multilayer (ML) qualifications in order to improve radiative budget estimates and to assimilate cloud information with improved confidence in different applications.
Data and methodology are clearly presented, as are the results. The analyses of the results are interesting: recall of detected ML cloud as a function of total and upper-layer COD, probability distributions of true and false SL and ML clouds as a function of upper-layer COD, attempts to correct and validate the use of the neural network as a function of viewing angle in order to exploit it over the entire MODIS swath, presentation of case studies, demonstration of the performance and value of the approach with inferred climatologies of ML cloud occurrence, and operational considerations.
Overall, the performance of the algorithm appears robust and consistent, and, even if the comparison is not so straightforward, the performance scores seem higher than those of other algorithms. The statistical bias of the algorithm appears to lie between -5 and -10 % (ML occurrence), a bias that disappears when studying monthly anomalies, consistent with the bias being latitude independent and, presumably (not shown), time independent. Two very valuable aspects of this algorithm are that it performs during both day and night and also above snow-covered surfaces.
This work and its results fit well in the scope of EGUsphere, topic Atmospheric sciences and AMT.
As a reviewer, I call here for minor revisions of this manuscript.
Some omissions introduce a lack of clarity.
First, the neural network algorithm appears here a little bit as a black box. That is understandable, as it is, in a way, a black box, but some descriptions could have been phrased differently.
The use, twice, of ‘artificial intelligence’ to describe its operation contributes to this shortcut. The ANN works here thanks to an adequate choice of inputs, outputs, and training; it is not by itself intelligent.
The authors do not give many technical details about how they designed the NN for their usage, letting the reader find them in previous studies such as Minnis et al. (2016, 2019) and Sun-Mack et al. (2017). This technique appears to be a continuation of the work of Sun-Mack et al. (2017) and Minnis et al. (2019), and the current manuscript refers numerous times to these two previous communications. The first is a conference paper, which is not necessarily a problem, but the communication is not very long or detailed. More problematically, the second, Minnis et al. (2019), is not given on the references page of the paper. One must therefore accept everything said about this reference (and there are several passages, e.g., lines 93 to 98) without the possibility of reading it. The reference should be given.
Finally, the (too?) short description of the physical basis behind the usage of the ANN leads to some lack of clarity:
- concerning the constraint used for the training (top of page 8), the rationale is not given: why no temperature inversion between 273 and 253 K? The rationale should certainly be added in order to better understand the applied filtering and the definition of what is here selected as multilayer cloud situations; that would help in appreciating the difference between the presented results and those of the other references (MYD06 C6, POLDER, ...).
- It is said on line 325 that ‘The results represent a significant improvement over the previous formulation. Much of the increased accuracy is due to ...’, but the rationale for this could be clearer.
- On line 332: ‘improvements arise from using additional input parameters, including [IR and NIR] channels’. Also on line 333: ‘vertical profile of relative humidity’. Was there no way to argue or illustrate these added values more clearly, if not demonstrate them? (Why, and by how much, does the vertical profile of relative humidity add useful information? A permutation test, as sketched below, would be one way to show it.)
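For illustration, one way to quantify such an added value is permutation importance: shuffle one input across the samples and measure the drop in skill. The sketch below uses scikit-learn and synthetic stand-in data; nothing here reflects the authors' actual pipeline or software.

```python
# Hypothetical sketch of permutation importance for an ANN input, e.g. the
# relative-humidity profile. Data and feature indices are illustrative only.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))      # stand-in for MODIS channels + RH profile
y = (X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(60,), max_iter=300).fit(X, y)

# Shuffling a column breaks its relation to the label; the resulting drop in
# accuracy measures how much that input contributes to the network's skill.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"input {i}: accuracy drop {drop:.3f}")
```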
Minor questions:
- a basic question: why is the choice of 50-70 neurons for the hidden layer a good choice?
- Concerning the choice of defining a cloudy pixel as ML when the MLANN output probability is higher than 0.5: what is the consequence of this threshold choice? Have you thought about instead defining as output a probability between 0 and 1, as in Desmons et al. (2017), with a binary threshold defined so as to maximize the algorithm's accuracy (see the sketch after this list)?
- a distinction is made between snow-free and snow-covered surfaces, which makes sense. Would it not have been interesting to present performance with an ocean/land distinction as well? (That might have shown better performance during the night over land.)
- on line 270: if CoS is called Single-layer Confidence, why not, for consistency, define PR on line 268 as Multilayer Confidence (CoM)?
- on line 270: isn't the definition or equation wrong? Shouldn't it instead be the false SL rate, or SS/(SS+SM)? The values given in the tables seem correct, but the definition is certainly wrong here.
- on line 511: why is it ‘not shown’?
- From the results of Figure 5: would there be interest in plotting the difference (night minus day) to show the capacity of the MLANN to capture this difference?
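Such a threshold optimization is straightforward to sketch; the snippet below sweeps candidate thresholds over synthetic probabilities and labels (hypothetical stand-ins for MLANN outputs and CC truth), keeping the threshold that maximizes accuracy.

```python
# Minimal sketch: pick the binary threshold on the output probability that
# maximizes accuracy, rather than assuming 0.5. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
is_ml = rng.random(10000) < 0.2                      # ~80:20 SL:ML imbalance
p_ml = np.clip(0.4 * is_ml + 0.6 * rng.random(10000), 0.0, 1.0)

thresholds = np.linspace(0.05, 0.95, 91)
accuracies = [np.mean((p_ml > t) == is_ml) for t in thresholds]
best = thresholds[int(np.argmax(accuracies))]
print(f"best threshold {best:.2f}, accuracy {max(accuracies):.3f}")
```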
Minor comments or typos:
- line 88: does the reference to Minnis et al. (2023) exist? It should be given.
- line 108: Venetsanopoulos
- line 120: the equation with the equal sign should not be cut in two
- line 315: Table 3 instead of Table 2
- sentence on lines 368 and 369: are there really two conditions on \tau_{CM}?
- line 444: Desmons instead of Desmond
- line 446: Marchant et al. (2020) instead of (2017)?
- on line 573: Fig. 14 instead of Fig. 16
- line 613: decreases from 87 %?
- line 693: Sourdeval instead of Sourdevall
- on the legend of Fig. 3: the acronyms SF and SC could be spelled out
- in Table 3, column 9, some totals are wrong:
94.4 instead of 69.5 on line 1
5.7 instead of 30.5 on line 2
90.9 instead of 71.3 on line 5
9.1 instead of 28.7 on line 6
Citation: https://doi.org/10.5194/egusphere-2023-2804-RC1 - AC1: 'Reply on RC1', S.S. Sun-Mack, 11 Feb 2024
-
RC2: 'Comment on egusphere-2023-2804', Anonymous Referee #2, 16 Jan 2024
The submitted manuscript describes research towards an operational retrieval of vertical cloud structure from space-borne imaging spectroradiometers. It continues development, by the same research team, of the MLANN algorithm: the present iteration 1) further subsets the algorithm (which already separately models day and night observations) to separately model observations from snow-or-ice-covered and snow-or-ice-free surfaces, and 2) uses updated labeled datasets. For the first time, the authors also consider the application of the MLANN (which is trained on near-nadir viewing angles) to full swaths (i.e. including far-from-nadir viewing angles).
The manuscript is lengthy and often poorly organized, making a comprehensive list of revisions, sufficient to merit publication, too time-consuming to generate. I've listed several of my key concerns with the manuscript below, along with several examples of unnecessary additions to the manuscript's "cognitive load" which, if reduced, would allow this reviewer to complete a comprehensive list of recommendations.
-
The discussion begins with an unsupported statement: "These results represent a significant improvement over the previous MLANN formulation." It's important that a revised manuscript include evidence supporting this conclusion. If this iteration of the MLANN has not shown improvements, then an entirely different manuscript crafted around a negative result is needed. While I'm sure that is not the case, where is the comparison? Section 5.2 and Table 5 have neglected that critical comparison.
-
Sections 2 and 3 are not laid out in a way that helps the reader. ANNs are data-driven, so presenting the model before the data demands the reader maintain abstractions "x" and "y" until the next section. I recommend reversing the presentation, so the reader knows the inputs and outputs. The "Input and Output Layers" subsection of 3 includes aspects of methodology (data splits, number of models trained) and feature definition (cloud layer classification) that don't have anything to do with the subsection heading.
-
The accuracy metric featured foremost in the abstract and conclusion is sensitive to imbalance in the test data, which is nearly 80:20 in this manuscript. The null model for an 80:20 split already has 68% accuracy (a model guessing classes in proportion to their frequencies scores 0.8² + 0.2² = 0.68, not the 50% naively assumed for a binary classification problem). You might consider using instead the Matthews correlation coefficient. You definitely do not want to compare, as in Table 5, the accuracy of models with different degrees of imbalance in the test data.
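To make the point concrete, the sketch below computes the proportional-guessing null accuracy and contrasts accuracy with the Matthews correlation coefficient on a hypothetical confusion matrix (counts chosen only to resemble the abstract's 87 % accuracy and 55 % detection rate, not taken from the manuscript).

```python
# Accuracy vs. Matthews correlation coefficient (MCC) under ~80:20 imbalance.
# All counts below are hypothetical.
import math

p = 0.8                                    # majority (single-layer) fraction
print(p**2 + (1 - p)**2)                   # 0.68: proportional-guessing null

def mcc(tp, fn, fp, tn):
    """MCC from confusion-matrix counts; 0 when any margin is empty."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

tp, fn, fp, tn = 110, 90, 40, 760          # 1000 pixels, 200 truly multilayer
print((tp + tn) / 1000)                    # accuracy 0.87, looks strong
print(round(mcc(tp, fn, fp, tn), 2))       # MCC ~0.56, a more sober picture
```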
-
Terminology and acronyms are abundant and sometimes more confusing than helpful. One example: does "multi-layer" in MLANN refer to neural network layers or cloud layers? MLNN is a widely used acronym for neural networks with multiple layers of nodes, so most readers will associate the ML in MLANN with nodes not clouds. A second example: the MS, SM, MM, SS acronyms for the confusion matrix are unnecessary departures from the true positive (TP), false positive (FP), etc. terminology widely used for binary classification problems. You could eliminate a whole Table by ditching MS, SM, MM, and SS in favor of conventional terminology. A third example: what is the purpose of inventing "net gain of accuracy" when recall (of the multi-layer class) already quantifies MM with respect to MS and is both familiar and normalized? A fourth example: CoS is a new term and acronym for precision of the single-layer class. (At least, it is if I'm correct that the "1 - " in equation 6 is a mistake.) Every new term or acronym, especially ones close to but deviating from familiar ones, increases the cognitive load the reader must maintain while interpreting the study.
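For reference, the acronym translation the reviewer requests amounts to two lines, shown below with hypothetical counts; the letter-order convention (first letter = MLANN class, second = CC truth) is an assumption for illustration, since the manuscript's definition is not reproduced here.

```python
# Hedged mapping of the manuscript's confusion-matrix acronyms onto standard
# binary-classification terms, with multilayer (ML) as the positive class.
# Assumed convention: first letter = MLANN classification, second = CC truth,
# so SM = missed ML cloud (FN) and MS = false ML detection (FP).
MM, SM, MS, SS = 110, 90, 40, 760          # hypothetical counts
TP, FN, FP, TN = MM, SM, MS, SS

recall = TP / (TP + FN)                    # fraction of CC ML clouds detected
precision = TP / (TP + FP)                 # detected-ML cases that are real
print(f"recall {recall:.2f}, precision {precision:.2f}")
```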
-
The presentation of the algorithm spends too long reviewing general theory and barely touches useful specifics. I appreciate the brevity allowed by references to prior papers documenting the MLANN, although I did not take the time to read them too. Nevertheless, specifics that either differ, are essential for reproducibility, or are simply quick to convey ought to be included (e.g. the exact network shape for each model, the loss function). The description also has to be self-consistent, which it is not in 1) describing how "testing" and "validation" data are (both?) used to terminate training, and 2) describing y as a probability in the text but as any real number in Figure 1. One thing the cited papers may explain, but I would like to (also) see here, is the reasoning or optimization that 1) limited the ANN to a single hidden layer, and 2) allowed collinear variables as inputs (the brightness temperatures along with brightness temperature differences).
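As an illustration of how compactly such specifics can be conveyed, the sketch below pins down one possible configuration with scikit-learn; the library, the 60-node layer, and the validation split are all assumptions for illustration, not the authors' documented setup.

```python
# Hypothetical, fully specified single-hidden-layer classifier: 60 sigmoid
# nodes (within the manuscript's 50-70 range), log-loss, sigmoid output read
# as P(ML), training stopped when a held-out validation score stops improving.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 10))            # synthetic stand-in inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # synthetic ML/SL labels

model = MLPClassifier(
    hidden_layer_sizes=(60,),              # one hidden layer of 60 nodes
    activation="logistic",                 # sigmoid activation
    early_stopping=True,                   # terminate on validation score...
    validation_fraction=0.15,              # ...held out from the training data
    max_iter=500,
).fit(X, y)

p_ml = model.predict_proba(X)[:, 1]        # a probability in [0, 1]
```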
-
The optimization appears to take no precautions against local minima or overfitting. Widely used software for neural network optimization uses some form of stochastic gradient descent, whereas this study uses Levenberg-Marquardt. How are local minima avoided? The large number of parameters in neural network optimization can lead to overfitting, but the authors do not indicate any steps taken to avoid overfitting, or indicate that none was observed.
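A generic precaution, sketched below, is to restart training from several random initializations and keep the run with the best held-out score; scikit-learn stands in for the trainer here (the manuscript's software is not named), and nothing below is the authors' procedure.

```python
# Illustrative multi-restart guard against local minima, scored on a held-out
# validation split (which also flags overfitting). Synthetic data throughout.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

best_model, best_score = None, -np.inf
for seed in range(5):                          # five random restarts
    m = MLPClassifier(hidden_layer_sizes=(60,), max_iter=300,
                      random_state=seed).fit(X_tr, y_tr)
    s = m.score(X_val, y_val)                  # held-out validation accuracy
    if s > best_score:
        best_model, best_score = m, s
print(f"best validation accuracy {best_score:.3f}")
```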
-
The function "g" in Figure 1 is a shifted and scaled sigmoid activation function, but neither the shifting nor scaling will have any effect given the free weights and biases. This unnecessary "tweak" is another addition to the cognitive load that makes reading this manuscript a real slog.
-
The research ought to be reproducible. Normally I would say "more easily reproducible", but this work has not reached the bar of reproducible at all. The software and data ought to be provided, preferably coupled with a clear pipeline for training and evaluating the model. Software the authors didn't write must be cited (what software evaluated and optimized the neural network?), and any software the authors wrote ought to be included as a supplement. If a pipeline includes downloading and processing of raw data from some CERES Ordering Tool API, then the link-to-the-data provided in the manuscript is sufficient; if it does not, the processed data ought to be published.
-
Minnis et al. 2019 is not in the references.
-
The subscript on f in Figure 1 is not needed. In fact, it is a misdirection, because the subscript suggests the existence of hidden layers that do not exist. The nodes currently labelled f_1 should be labelled u_1, u_2, u_3, etc. What is the difference in Figure 1 between x_1, x_2, x_3, etc. and the first layer of nodes with no label?
-
Lines 207-209 form an incomplete sentence.
-
The studies included in table 5 do not use the same test data, so comparing the metrics is okay but not entirely quantitative. Indicating the "best" result with a bold-face font pushes the comparison too far.
-
The similarity of the CC and MLANN cloud fractions when stratified over space (Fig 14) or time (Fig 15) could be quantified with a correlation coefficient.
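A minimal version of the suggested check, with hypothetical arrays standing in for the gridded ML-cloud-fraction maps of Figs. 14 and 15:

```python
# Pearson correlation between CC and MLANN ML-fraction fields over valid
# grid cells. The maps below are synthetic stand-ins with a ~-7% bias.
import numpy as np

rng = np.random.default_rng(4)
cc_frac = rng.random((180, 360))                           # CC ML fraction map
mlann_frac = cc_frac - 0.07 + rng.normal(0, 0.05, cc_frac.shape)

valid = np.isfinite(cc_frac) & np.isfinite(mlann_frac)    # mask missing cells
r = np.corrcoef(cc_frac[valid], mlann_frac[valid])[0, 1]
print(f"spatial correlation r = {r:.2f}")
```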
Citation: https://doi.org/10.5194/egusphere-2023-2804-RC2 - AC2: 'Reply on RC2', S.S. Sun-Mack, 11 Feb 2024
Peer review completion
A journal article based on this preprint has been published by Sunny Sun-Mack, Patrick Minnis, Yan Chen, Gang Hong, and William L. Smith Jr.