This preprint is distributed under the Creative Commons Attribution 4.0 License.
A deep learning approach to increase the value of satellite data for PM2.5 monitoring in China
Abstract. Limitations in the current capability of monitoring PM2.5 adversely impact air quality management and health risk assessment of PM2.5 exposure. Ground-based monitoring networks are commonly established to measure PM2.5 concentrations in highly populated regions and protected areas such as national parks, yet large gaps remain in spatial coverage. Satellite-derived aerosol optical properties serve to complement the missing spatial information of ground-based monitoring networks, but such retrievals are hampered under cloudy or hazy conditions and during nighttime. Here we strive to overcome the long-standing restriction that surface PM2.5 cannot be constrained with satellite remote sensing under cloudy/hazy conditions or during nighttime. We introduce a deep spatiotemporal neural network (ST-NN) and demonstrate that it can artfully fill these observational gaps. We use sensitivity analysis and visualization techniques to open the neural-network black box and quantitatively discuss the potential impact of the input variables on the target variable. This technique provides ground-level PM2.5 concentrations with high spatial resolution (0.01°) and 24-hour temporal coverage. Better-constrained spatiotemporal distributions of PM2.5 concentrations will help improve health effects studies, atmospheric emission estimates, and predictions of air quality.
Status: closed
CC1: 'Comment on egusphere-2022-578', Hua Lin, 23 Aug 2022
This study reports obtaining near-ground PM2.5 concentrations from satellite observations via a deep learning method. The demonstration and verification in the article are sufficient for me, but I still have several questions about this research:
1. The filtering method applied to the CNEMC dataset should be introduced in the manuscript. Besides, I wonder how the filtered data affect the results.
2. What factors affect the training efficiency of the ST-NN model? Do different learning rates have a significant effect on training efficiency? How do you choose the learning rate?
3. Do different optimizers (SGD, Adam) affect the results?
I find some typos in the Supplement:
Table. S10 spot and leisure services -> sport and leisure services
Table. S13 Rate (%) -> Rate
Citation: https://doi.org/10.5194/egusphere-2022-578-CC1
AC1: 'Reply on CC1', Bo Li, 25 Aug 2022
Thank you very much for your attention.
1.
Data were filtered according to the following conditions:
1. Null values.
2. Values that remain constant for more than 6 hours.
3. The series were transformed into z-scores, and points in the transformed time series meeting one of the following three conditions were rejected and removed from the original time series (1, 2):
(1) an absolute z-score larger than 4 (|z_t| > 4),
(2) an increment from the previous value larger than 9 (z_t − z_{t−1} > 9),
(3) a ratio of the value to its centred moving average of order 3 (MA3) larger than 2 (z_t / MA3(z_t) > 2).
Reference:
M. A. Barrero, J. A. Orza, M. Cabello, L. Canton: Categorisation of air quality monitoring stations by evaluation of PM10 variability, Sci. Total Environ., 524-525, 225-236, 2015.
First, we applied a broad filtering scheme in which we removed only null data and values that remained constant for more than 6 hours.
Later, for comparison, we implemented a stricter screening scheme in which data meeting any of the three conditions above were eliminated (note that the stricter screening was used only in this comparison test; elsewhere the relaxed screening scheme was applied). The proportion of abnormal data was about 15%. The results are shown in Tables S10 and S11.
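For illustration only, here is a minimal pandas sketch of such a screening. It assumes an hourly, single-station PM2.5 series and a global z-score normalisation; the function name and structure are stand-ins and may differ from the exact implementation used in the study.

```python
import pandas as pd

def screen_pm25(series: pd.Series) -> pd.Series:
    """Stricter screening sketch: drop nulls, >6 h constant runs, and
    z-score outliers following the three conditions listed above."""
    s = series.dropna()

    # Remove runs where the value stays constant for more than 6 hours
    # (assumes an hourly time step).
    run_id = (s != s.shift()).cumsum()
    run_len = s.groupby(run_id).transform("size")
    s = s[run_len <= 6]

    # Transform to z-scores and flag the three outlier conditions.
    z = (s - s.mean()) / s.std()
    ma3 = z.rolling(window=3, center=True).mean()  # centred moving average of order 3
    bad = (z.abs() > 4) | ((z - z.shift(1)) > 9) | ((z / ma3) > 2)
    return s[~bad]

# Example with a hypothetical hourly station series:
# clean = screen_pm25(station_df["pm25"])
```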
2.
The learning rate, the optimization function, and the batch size all affect the training efficiency of the model. The learning rate has a significant impact on training efficiency. A larger batch size occupies more memory but speeds up training. Considering our equipment, we selected a batch size of 4.
A larger learning rate trains faster at the beginning, but as the model is optimized further, training becomes more difficult and the learning rate must be reduced. The learning rate is set as in equation (2).
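Since equation (2) is not reproduced in this reply, the sketch below only illustrates the general idea of starting with a larger learning rate and reducing it as training progresses; the base rate, decay factor, and step size are illustrative assumptions, not the authors' settings.

```python
def learning_rate(epoch: int, base_lr: float = 1e-2,
                  decay: float = 0.5, step: int = 10) -> float:
    """Step-decay schedule: start large, then halve the rate every `step` epochs.
    All values here are illustrative, not the schedule of equation (2)."""
    return base_lr * decay ** (epoch // step)

# The rate drops from 0.01 to 0.005 at epoch 10, 0.0025 at epoch 20, and so on.
print([learning_rate(e) for e in (0, 10, 20, 30)])
```

In PyTorch, the same "multiply by a factor every N epochs" behaviour is provided by torch.optim.lr_scheduler.StepLR.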
3.
Different optimizers have little effect on the final result and mainly affect the training efficiency. We find that SGD trains faster.
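As a quick way to check such an optimizer comparison on one's own hardware, the toy PyTorch benchmark below times SGD and Adam on a synthetic regression problem; the model, data, and hyperparameters are stand-ins, not the ST-NN.

```python
import time
import torch

# Synthetic regression data standing in for the real training set.
x = torch.randn(1024, 16)
y = x.sum(dim=1, keepdim=True)

def train(optimizer_cls, **kwargs):
    """Train a small linear model and return the final loss and wall time."""
    model = torch.nn.Linear(16, 1)
    opt = optimizer_cls(model.parameters(), **kwargs)
    loss_fn = torch.nn.MSELoss()
    start = time.time()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item(), time.time() - start

for name, cls, kwargs in [("SGD", torch.optim.SGD, {"lr": 1e-2, "momentum": 0.9}),
                          ("Adam", torch.optim.Adam, {"lr": 1e-3})]:
    final_loss, seconds = train(cls, **kwargs)
    print(f"{name}: final loss {final_loss:.4f}, wall time {seconds:.2f} s")
```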
Citation: https://doi.org/10.5194/egusphere-2022-578-AC1
RC1: 'Comment on egusphere-2022-578', Anonymous Referee #1, 01 Sep 2022
Major Comments:
The introduction lacks background on why and how PM2.5 is created, removed from the atmosphere, or where it lingers in the atmosphere. This needs to be expanded upon along with its connection to AOD. Further, it needs to be discussed why AOD may not be an appropriate proxy for PM2.5 at times and how this has been a limiting factor in the past to using AOD to effectively monitor PM2.5 from space.
The setup of the model, data, and testing of the model is difficult to understand. The methodology portion needs major restructuring to understand what data was used, at what scale, what meteorological inputs were used, whether they were all on the same grid size or somehow kept at their native resolutions, what time steps were included for each, how MODIS AOD at a single time step is integrated into the series of AOD from Himawari-8, etc.
Further, the section on the model configuration is extremely muddled. I do not understand how k-means was used, what a contingency table is, how the sensitivity analysis fits into the data/model configuration, and why only 10% of data is used as a test case instead of the standard 20% test, 20% validate, and 60% train. It seems as if they only use 10% to test their model, and given their numbers it wouldn't be surprising if that meant the model was overfit and not enough variability in the test samples existed to find that. They state that they did cross validation but the accuracy is never shown. The number of samples is never stated, nor are the resolution or the exact inputs of the model clearly stated. The authors claim that they are predicting PM2.5 on an hourly timescale, but it is never clearly stated if that is what they actually trained their model to do. They use sensitivity analysis to test what inputs to use in their model, then somehow also use that analysis to verify their model.
Minor comments:
Line 55: A map would be useful to understand where the locations are since I assume these were used as the true labels in training/testing.
Line 73: What limits these past studies from being able to fill gaps in time?
Line 88: Why is AOD only available for 33% of China?
Line 91: What causes haze in rural areas?
Line 97: Spell out what ST-NN stands for.
Line 115: Define WRF as an acronym as it is used later.
Line 118: A table would be useful of all the inputs since the meteorological inputs are never clearly stated.
Line 135: I do not understand why the Pearson correlation is used to find the "contained dimension of time"?
Line 136: What is CNEMC and why/how is a Chi-squared test used?
Line 143: Why is a k-means used? What does the discreteness of variables mean in this context? What and why is a contingency table used?
Line 147: Where are these layers used? Why? How do they affect model performance?
Line 161: How many samples? Are they all put into a common grid? What is the time resolution?
Section 2.4 Sensitivity Analysis: What do the levels mean? What does this section mean?
Line 231: What is 26 µg m-3 in terms of percentage? Or how does it compare to usual values? Is that significant?
Line 234: Why is the model good at nighttime prediction?
Line 240: What does it mean the data are influenced by meteorological and aerosol data at .05?
Line 264: What are the AOD conditions in cloudy scenes? How does the model predict without AOD if it is one of the main predictands?
Line 306: Have any other studies ever used a NN or RF to predict PM2.5?
Line 316: How do your inputs compare to past studies?
Line 323: How good of a proxy is AOD for PM2.5 or PM10?
Citation: https://doi.org/10.5194/egusphere-2022-578-RC1
AC2: 'Reply on RC1', Bo Li, 21 Nov 2022
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2022/egusphere-2022-578/egusphere-2022-578-AC2-supplement.pdf
RC2: 'Comment on egusphere-2022-578', Anonymous Referee #2, 30 Sep 2022
General comments
This work concerns the development of a deep learning model to estimate PM2.5 concentrations from satellite data, also under cloudy conditions. Although the topic addressed is interesting and quite original, the manuscript is not in good shape for publication. I share most of the comments from Anonymous Referee #1 on the lack of clarity of some sections, especially as regards the methods and the many missing details. In the Introduction section, a lack of knowledge of the main physicochemical processes affecting PM production and removal seems to emerge. This is not negligible, since these processes can be connected with the discussion of the results obtained. Indeed, the discussion section right now seems more like a sort of celebration of the results achieved, while a clear discussion of the results obtained, including reasons (processes?) and references, is totally missing. Clear linkages between the different variables that are obtained from ground-based stations (PM concentrations, in µg m-3) and from satellites (AOT/AOD data) are also completely missing. Again, this is very useful for understanding the differences between the use and meaning of the variables, and why it is in any case not trivial to use satellite data for estimating PM concentrations at the ground.
Specific comments
Line 30: It is not clear which attempts you are referring to. Please rephrase.
Line 31: “constrained” does not seem the most appropriate term in this context. Rephrase.
Line 32: Add “In this work,” before “We introduce”.
Lines 33-34: Absolutely unclear what you mean by “We use sensitivity analysis and visualization technology to open the neural black box data model”: rephrase.
Lines 35-37: Details on the accuracy and errors of the method need also to be given here.
Line 40: What do you mean by “impediments to human health”? Revise.
Lines 41-42: Please explain better, quite obscure the meaning of this sentence.
Lines 44-45: Not totally true, considering that PM2.5 (and PM10) precursors are in some cases not regulated (consider for instance ammonium deriving from ammonia), or limitations are due to the fact that the precursors themselves are pollutants/toxic. Also, secondary aerosols are transported over long ranges. So, this sentence contains many technical faults which need to be addressed and better clarified, as the formation of secondary aerosols is key and one of the main reasons why we have difficulties in meeting PM concentration limit values worldwide.
Lines 46-47: The reason for this long residence time is well known and must be added.
Lines 47-51: Not well explained, needs revision.
Line 69-70: Unclear for people not using those techniques.
Lines 75-76: It would be interesting to know the reason of such low temporal resolution.
Line 78. Rephrase “…Yu et al., 2017). In addition, errors in this method can derive from the uncertainties…”
Line 80: Change “offered” to “obtained”. What do you mean by “hourly predictions of daytime PM2.5”?
Lines 87-89: Statistical analyses on what? And why this result?
Line 90: Add “often impacted by” after “regions” (remove “with”).
Line 93: Change "elements" to "variables". And explain better whether "geographical information" is just position or also other details such as topography, land use, or other.
Lines 93-95: This type of statement is not appropriate for the Introduction section. Better to replace it with information on the article structure.
Lines 97-99: Please add more details on the regions (e.g., better description, maps). This sentence is not that useful in the current version. Also you can introduce the different subsections that the reader will encounter in this section.
Lines 101-104: Please provide references on the data sources (e.g., websites).
Line 108: Until now, you have always referred to AOD. Please explain what is aerosol optical thickness (AOT) and its relation with AOD.
Lines 107-110: References needed.
Line 119: Which meteorological fields? Of which variables?
Lines 121-122: Validation against what? Please provide a discussion on the validation results.
Line 122: Here and throughout the text, I don’t understand the need to have a point between “Figure” and the figure number.
Lines 127-128: Not clear.
Lines 134-135: Not clear.
Lines 135-137: How did you classify the severity of pollution?
Lines 125-159: This section is very unclear. Several methods are listed, but I cannot understand how most of them were used.
Lines 161-163: So you used just 10% of data for testing: isn’t this test period too short?
Lines 169-170: References to the libraries are needed.
Lines 161-170: Also this section is quite unclear, without references and details that can help the reader to repeat the process if needed.
Lines 172-204: The aim of this section is quite obscure.
Lines 209-210: Well, not really, as the methodology section lacks many details for instance on the kind of meteorological data used.
Line 211: Change “that” to “when”.
Line 216: What about the R2 or R value?
Lines 225: Change “pleasant” to “good”.
Lines 222-226: What about the other parameters (RMSE, MAE, …)?
Lines 210-212: If you do not have sampling sites and satellite retrievals, how can you train the model and how can you test (I mean, which data can you use as “measured PM2.5” in Figure 1)?
Line 227: Not all aerosols survive that long in the atmosphere! And also, the residence time is essentially driven by wet deposition processes. I assume that during the monsoon period, aerosols do not last long in the atmosphere.
Lines 231-232: This value is not that low.
Line 232: Please avoid the use of these non scientific terms (“delightful”).
Lines 242-247: Please discuss also other metrics apart from R2.
Lines 281-282: Wet removal is not connected with the presence of clouds but with precipitation.
Lines 311-315: This seems a sort of repetition of the Introduction.
Lines 306-361: The Discussion section seems just like a list of advantages of the method, rather than a true discussion of the results. For instance, I failed to understand which variables finally enter the model, and why the other variables probably do not affect PM. Also, the reasons for limitations and issues are discussed in little detail. References against which to compare the results are given, but references on how to interpret the results are not given. Finally, a conclusion section is missing.
Citation: https://doi.org/10.5194/egusphere-2022-578-RC2
AC3: 'Reply on RC2', Bo Li, 21 Nov 2022
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2022/egusphere-2022-578/egusphere-2022-578-AC3-supplement.pdf
Data sets
POI: the Resource and Environment Science Data Center, https://www.resdc.cn/data.aspx?DATAID=330
GDP: the Resource and Environment Science Data Center, https://www.resdc.cn/data.aspx?DATAID=252
Population: the Resource and Environment Science Data Center, https://www.resdc.cn/data.aspx?DATAID=251
MODIS land cover type: Mark Friedl, https://doi.org/10.5067/MODIS/MCD12C1.006
DEM: the Resource and Environment Science Data Center, https://www.resdc.cn/data.aspx?DATAID=123
MODIS aerosol optical depth: Rob Levy and Christina Hsu, https://doi.org/10.5067/MODIS/MOD04_3K.061
Himawari-8 aerosol optical depth: Yoshida, M., https://doi.org/10.2151/jmsj.2018-039
Site PM2.5: CNEMC, http://www.cnemc.cn/
Weather fields: the National Centers for Environmental Prediction, https://www.mmm.ucar.edu/weather-research-and-forecasting-model
Road network: OpenStreetMap, https://download.geofabrik.de/asia/china.html
Viewed
- HTML: 759
- PDF: 310
- XML: 52
- Total: 1,121
- Supplement: 107
- BibTeX: 25
- EndNote: 27