This work is distributed under the Creative Commons Attribution 4.0 License.
AI4SeaIce: Task Separation and Multistage Inference CNNs for Automatic Sea Ice Concentration Charting
Abstract. We investigate how different Convolutional Neural Network (CNN) U-Net models, each specialised in a partial labelling task related to mapping Sea Ice Concentration (SIC), can improve performance. We use Sentinel-1 SAR images and human-labelled ice charts as the reference to train models that, through a multistage inference scheme, benefit from the advantages of different model optimisation objectives. We find that a multistage inference approach that applies a classification-optimised model (CrossEntropy or Earth Mover's Distance squared) to separate open water, intermediate SIC and fully covered ice, in conjunction with a regression-optimised model (Mean Square Error or Binary CrossEntropy) that assigns the specific intermediate classes, performs best. To evaluate the models we introduce several specific metrics illustrating performance in key areas, such as the separation of macro classes and intermediate classes, and an accuracy metric that better encapsulates uncertainties in the reference data. We achieve an R2 score of ~93 %, similar to the state of the art in the literature (Kucik and Stokholm, 2023). However, our models exhibit significantly better open water and 100 % SIC detections. The multistage approach synergises the high open water and fully covered ice accuracies achieved with classification-optimised objectives with the good intermediate-class performance obtained with regression loss functions. In addition, our findings indicate that the number of classes that the intermediate concentrations are merged into does not influence the result significantly; it is the loss function used to optimise the model assigning the specific intermediate classes that has the largest impact.
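Read as a procedure, the combination step described in the abstract could be sketched roughly as follows. This is a minimal sketch, assuming two already-trained U-Nets and invented names, class indices and output shapes; it is not the authors' implementation.

```python
# Minimal sketch of the multistage inference combination, assuming `macro_model`
# is optimised with a classification loss to separate open water / intermediate /
# fully covered ice, and `inter_model` is optimised with a regression loss to
# assign intermediate SIC. Names, indices and shapes are illustrative assumptions.
import torch

OPEN_WATER, INTERMEDIATE, FULL_ICE = 0, 1, 2  # assumed macro-class indices


@torch.no_grad()
def multistage_inference(macro_model, inter_model, sar_scene):
    """Combine macro-class and intermediate-SIC predictions per pixel."""
    # Stage 1: macro classification (open water / intermediate / 100 % ice).
    macro = macro_model(sar_scene).argmax(dim=1)                   # (B, H, W)

    # Stage 2: intermediate SIC regression, mapped to [0, 1].
    inter_sic = inter_model(sar_scene).squeeze(1).clamp(0.0, 1.0)  # (B, H, W)

    # Merge: 0 % and 100 % come from stage 1, intermediate pixels from stage 2.
    sic = torch.zeros_like(inter_sic)
    sic[macro == OPEN_WATER] = 0.0
    sic[macro == FULL_ICE] = 1.0
    mask = macro == INTERMEDIATE
    sic[mask] = inter_sic[mask]
    return sic  # per-pixel sea ice concentration in [0, 1]
```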
Status: closed
RC1: 'Comment on egusphere-2023-976', Eric Wendoloski, 02 Aug 2023
Referee Comments
AI4SeaIce: Task Separation with Multistage Inference CNNs for Automatic Sea Ice Concentration Charting
Stokholm et al. 2023
General Comments:
This research intends to explore a SAR-based multistage expansion of sea ice concentration diagnoses by leveraging one UNet for bulk class prediction (open water, ice, consolidated intermediate concentrations) and another UNet to refine diagnoses of intermediate concentrations. The authors explore such an approach with a diversity of classification- and regression-focused loss functions. While this approach did not yield correlation coefficient improvements over a prior effort, an improvement was shown for intermediate concentration accuracies. As this is a quickly expanding field and the study applies to a very useful open-source dataset that will likely drive research in the near future, I believe this paper does have a place in the knowledge corpus being assembled in this domain. However, there are several key concerns that, when addressed, will make this a stronger submission. In its present state, I believe the manuscript requires major revisions.
Specific Comments:
Test set evaluation concerns
Despite the authors' note, I am concerned about the level of potential cross contamination between the test set and training set imagery, which could inflate metric scores. While there is brief mention of this possibility, the authors may mitigate this concern by clarifying minimal spatial/temporal overlap between test and train scenes. If the test scenes demonstrate minimal footprint overlap and/or adequate time deltas against training scenes in that location, potential strong correlation between train and test may be less impactful. Any overlapping train/test scenes where minimal changes in sea ice have been demonstrated between the two may be problematic, as deep-learning algorithms are very much capable of memorizing training input. There is no mention of a specific validation dataset or early stopping based on such hold-out scenes, further enhancing this concern.
Additionally, the authors mention that all scenes span collection dates between March 2018 and May 2019. It sounds like scenes from the recent Auto-ice challenge dataset were leveraged for this paper. I believe that dataset references scenes from 2018, 2019, and 2020. Using scenes from a different year altogether (2020) would go a long way towards an independent test set and would certainly strengthen the arguments in this manuscript.
Class Imbalance
Aside from mentioning weighted losses based on class frequency, the authors make minimal effort to combat potential class imbalance issues. For example, there is no mention of attempting over/under sampling or including loss variations specifically designed for such issues (e.g., focal loss – an easy swap with CE). If class imbalance is minimal in this dataset, the inclusion of a label count table would be helpful in assuring the reader. The authors express disappointment towards not achieving greater than a 93% R2 throughout the updated experiments. One wonders if this is simply related to class imbalance, where improvements in intermediate class results are still not enough to effect change in R2 because it is dominated by 0% and 100%. This should be something that could be examined via confusion matrices and perturbation of such matrices to look at the impact of imbalance. In this case of clean concentration intervals, the matrices can be converted to match the values given by the Sklearn function for the correlation coefficient (I am speculating this is being used, as it was noted in the Auto-ice challenge documentation). This may provide some insight.
On a related note, the paper shows relatively minimal sensitivity to the number of intermediate classes used. Is this again connected to class imbalance, or are one or two intermediate classes far more dominant and always selected, leading to similar results?
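The perturbation check suggested above could look roughly like the sketch below, assuming the confusion matrix is indexed by the 11 clean SIC intervals and that scikit-learn's r2_score is the metric in question; the matrices are synthetic and purely illustrative.

```python
# Convert an 11x11 confusion matrix (rows = true SIC class, cols = predicted)
# into paired samples and score with sklearn's R2, then perturb the matrix to
# probe how much the intermediate classes can move the score.
import numpy as np
from sklearn.metrics import r2_score

sic_values = np.arange(0, 101, 10)  # 11 concentration classes in percent


def r2_from_confusion(cm):
    """Expand confusion-matrix counts into (true, pred) pairs and compute R2."""
    true, pred = [], []
    for i, row in enumerate(cm):
        for j, count in enumerate(row):
            true.extend([sic_values[i]] * int(count))
            pred.extend([sic_values[j]] * int(count))
    return r2_score(true, pred)


# Synthetic, heavily imbalanced case: 0 % and 100 % dominate and are correct,
# while every intermediate pixel is predicted one class too low.
cm = np.zeros((11, 11), dtype=int)
cm[0, 0] = 50000
cm[10, 10] = 50000
for i in range(1, 10):
    cm[i, i - 1] = 200
print("R2 before fixing intermediates:", round(r2_from_confusion(cm), 4))

# Perturbation: same class counts, but the intermediates are now perfect.
cm_fixed = np.zeros_like(cm)
cm_fixed[0, 0] = 50000
cm_fixed[10, 10] = 50000
for i in range(1, 10):
    cm_fixed[i, i] = 200
print("R2 after fixing intermediates:", round(r2_from_confusion(cm_fixed), 4))
```

In this synthetic example the R2 barely moves even though every intermediate pixel is wrong before the fix, which is exactly the dominance of 0 % and 100 % described above.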
Multiple Models
While my guess is that inference is relatively fast with standard CPU+GPU resources, even with two models, a multi-model approach may not be desirable in a limited-resource scenario. Additionally, it is very likely that weights from both models are performing similar feature extraction functions, especially along the encoder path. Have the authors considered a single-model approach, perhaps a UNet with two heads, where one head predicts macro classes and the other intermediate pixels, masking the loss from 0% and 100% pixels? Combining the losses from the two heads (e.g., Total Loss = w1*CE + w2*MSE) is not uncommon. Have the authors explored such an approach? If so, and the multistage approach outperforms it, that would make an even stronger case for two models.
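For reference, a minimal sketch of such a two-headed, single-backbone setup with the masked combined loss might look like the following; the backbone, feature width, class indices and loss weights are assumptions for illustration, not the authors' architecture.

```python
# Sketch of a shared-backbone model with a macro-classification head and an
# intermediate-SIC regression head, trained with Total = w1*CE + w2*MSE where
# the MSE term is masked out on 0 % and 100 % pixels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoHeadSICNet(nn.Module):
    def __init__(self, backbone, features=64, n_macro=3):
        super().__init__()
        self.backbone = backbone                        # shared encoder-decoder
        self.macro_head = nn.Conv2d(features, n_macro, kernel_size=1)
        self.sic_head = nn.Conv2d(features, 1, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)                        # (B, features, H, W)
        return self.macro_head(feats), self.sic_head(feats).squeeze(1)


def combined_loss(macro_logits, sic_pred, macro_target, sic_target,
                  w_ce=1.0, w_mse=1.0, intermediate_class=1):
    ce = F.cross_entropy(macro_logits, macro_target)
    mask = macro_target == intermediate_class           # skip 0 % / 100 % pixels
    if mask.any():
        mse = F.mse_loss(sic_pred[mask], sic_target[mask])
    else:
        mse = sic_pred.sum() * 0.0                      # keep the graph intact
    return w_ce * ce + w_mse * mse
```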
Composition Quality
There are a number of instances across the manuscript where the writing style confuses the intended message. I will note a number of such examples in “Technical Corrections”, but the authors should make a concerted effort to go line by line through the manuscript and rewrite/reword when necessary. Please note some technical corrections below will contain technical and grammatical suggestions.
Technical Corrections:
Line 4: "apply" to "applies"
Line 10: A) Refrain from using the word "detections". "Detection" is a specific task in the computer vision community and use of this is confusing when doing segmentation. "segmentations" may be a better term. B) "multistage synergises" to "the multistage approach synergises"
Line 12/13: "compressed" may also not be the best term, as data compression is again an entire field of its own. Perhaps "consolidated" or "merged" is a better choice throughout the manuscript.
Line 16-19: Rework this. It’s cumbersome in its current form.
Line 17: Remove second “sea ice” mention.
Line 23: "continue to cause" to "exacerbate"
Line 24: “could” to “should”; remove “furthermore” as it was just used above.
Line 31: “distant” location to remote? Not a great sentence in general. A rewording may flow better.
Line 33: Move “almost indistinguishable” to after “reflection”
Line 40: Use of “spatial resolution” but referencing pixel spacing – spatial resolution and pixel spacing are different concepts in this case. Maybe “...versatile measurement with pixel spacing typically between X and Y”. Also note the 10-40m pixel spacing is not inclusive enough for SAR in general. S1 EW GRD is 40m, but RADARSAT-2 and RCM (similarly useful C band collectors) have different values depending on product.
Line 46: Remove “a” before “resource”.
Line 78: “in relation” to “relative”?
Line 79/80: Is there a reference for rolling up to 11 concentrations? It sounds like total concentration was the label. Does your source contain values like “81”, if so, how is this treated, or is this different than typical products available through sources like the USNIC?
Line 79: Define SIGRID-3 here.
Line 94: How is it possible a model trained on hand drawn labels could exceed the original spatial resolution? Is it more related to a mix of labeling styles, where the more detailed analyst reports are showing through?
Line 103: “compress” again – may be a better word choice available.
Line 125: “contributed” to “attributed” – sentence is cumbersome.
Line 129: Comment on narrow dynamic range dB – I think this could be written more clearly.
Line 132: Is wind also a factor in the brightening here?
Line 148: SIGRID definition should have been earlier.
Line 190/191: Be cautious about how ignore index for the CE function in PyTorch may operate differently than multiplying by 0 in the other case. Depending on options, the "reduction" could be impacted (see the sketch after these corrections).
Line 215-220: I believe this text could be reduced upon rewrite. I would also avoid calling out a “new” metric as this isn’t really novel. Simply calling it an average accuracy as you note is sufficient.
Line 262: Please be as clear as possible when using the Kucik and Stockholm (2023) models. I think these are your reference models, but you never actually call them the reference models explicitly when talking about the prior study. This applies here and later on.
Line 272: “reference 11-class models” to clarify as unweighted?
Line 274-275: Rewrite – placement of “outperforms the models” makes sentence cumbersome.
Line 277: “much better” – please just quantify based on tables – 4-7% improvement, etc.
Line 280: "IceDiscs" to "IceDisc" models
Line 281: Reorder wording
Line 289: Is the statement about the 7 class CE “approaches the class weighted reference model” correct? I am seeing 86.16% vs. 97.18%. Am I reading this incorrectly? Again, in this section, please be clearer on the exact origin of the reference models – they are from the prior paper?
Line 299: Clarify this statement – what difference is being referenced?
Line 311: The 3-class gives 93.01%, not the 6-class according to the table. Please clarify or fix.
Line 315: “clear tendency” statement is confusing. You mention 6 classes performs best, but the statement doesn’t make sense to me since 7 and 11 are both worse and are at times outperformed by <5 classes.
Line 440: You quote a “performance” – please be clear – is that R2 or accuracy?
Line 441: “labelling” – using “labelling” implies the ground truth is incorrect. Call it “segmentation” if you are referring to the prediction.
Line 446: See specific comment on “Class Imbalance”.
Line 456: This is an incomplete thought on my copy. I would be very careful in bringing up super resolution without a citation and clear thought. Super resolution is a huge topic in its own right. Dropping this in here without clarification or some evidence to do so is confusing. This is also probably better off in future work.
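Regarding the ignore-index remark above (Line 190/191), a small check of the reduction behaviour, assuming PyTorch's CrossEntropyLoss, could look like this; the tensor shapes and the ignore value of 255 are arbitrary.

```python
# ignore_index with reduction='mean' averages only over the kept pixels, while
# multiplying a per-pixel loss by a 0/1 mask and calling .mean() divides by
# *all* pixels, so the two values differ whenever any pixel is masked.
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(1, 3, 4, 4)                 # (B, C, H, W)
target = torch.randint(0, 3, (1, 4, 4))
target[0, :2, :] = 255                           # pretend these pixels are unlabeled

ce_ignore = nn.CrossEntropyLoss(ignore_index=255)(logits, target)

per_pixel = nn.CrossEntropyLoss(reduction="none")(logits, target.clamp(max=2))
mask = (target != 255).float()
ce_masked_mean = (per_pixel * mask).mean()       # denominator = all pixels
ce_masked_valid = (per_pixel * mask).sum() / mask.sum()

print(float(ce_ignore), float(ce_masked_mean), float(ce_masked_valid))
# ce_ignore equals ce_masked_valid, but ce_masked_mean is smaller.
```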
Citation: https://doi.org/10.5194/egusphere-2023-976-RC1
AC1: 'Reply to review', Andreas Stokholm, 05 Sep 2023
RC2: 'Comment on egusphere-2023-976', Anonymous Referee #2, 19 Aug 2023
The key issues that this paper tries to address are to explore the benefits of combining classification with regression, and the performance differences of different losses. From a machine learning methodology perspective, the contribution is marginal. From the application perspective, it is also unclear what the implications and relevance are for operational ice charting in terms of improving the current state-of-the-art approaches, e.g., the AutoICE competition models and results. The data used in the research are also only partial. Please find below my detailed comments.
(1) By "In the same inference combination the 6-class version scores best on the multi-accuracy metric", do you mean "In the same inference combination the 3-class version scores best on the multi-accuracy metric"? Please carefully review the manuscript to address similar problems.
(2) The analysis of the comparison results of these losses is very "shallow" in the sense that it only answers "what are the performance differences" but does not answer "why are they different". Considering that a significant proportion and contribution of this paper is the comparison between different losses, e.g., CE, BCE, MSE, EMD, I suggest the authors introduce the differences between these losses in the methodology section of this paper, and use these theoretical differences to explain the patterns in the experimental results in Sections 4.1, 4.2 and 4.3 of this paper (see the loss definitions sketched after this comment list).
(3) The "best" accuracies are highlighted in the tables. But what do these accuracies mean from an application perspective? For example, in Table 5, the "best" Int R2 value is 58.49%. Is this accuracy good enough for operational mapping, and why is the proposed approach significant for operational SIC charting? I suggest the authors illustrate all "best" accuracies from this perspective to better justify the proposed approach.
(4) What are the 11-class reference models? Why are comparisons with these models important? Are they published state-of-the-art models using the AutoICE dataset? It is more relevant to compare with the results from the winning solutions from the AutoICE competition.
(5) Scene acquired September 3, 2018 -> Scene acquired on September 3, 2018
(6) and the third row with the Earth Mover’s Distance squared loss function. -> and the third row with the EMD squared loss function. Please address similar issues.
(7) In Figure 4, the reference model using CE loss tends to generate a map that is more visually consistent with the HH and HV channels, although it is more inconsistent with the ice chart. How do you account for the uncertainties and even errors in polygon definition and SIC values assigned by human experts? To what extent do the errors and uncertainties in the ice chart lead to misleading conclusions in your experiments?
(8) Regarding the statement "It is somewhat surprising that separating pixels into fewer categories does not lead to greater separability improvements, as logically, it should be an easier task to predict fewer classes.", the authors are encouraged to analyse this from the perspective of increased intra-class variation. For example, combining classes will lead to larger class heterogeneity/variation, leading to a vaguer definition/signature of a class. Combining classes leads to a "weakly" supervised problem with larger ground-truth vagueness and inexactness, whereas using 11 classes is a "strongly" supervised problem with fine-grained ground truth that is potentially more helpful for the classifiers to learn class signatures.
(9) Another avenue could involve investigating super resolution -> Another avenue could involve investigating super resolution.
(10) Why did the authors not use the ancillary data in the AutoICE dataset, e.g., passive microwave (especially considering that the authors said that PM can be helpful), weather data, and distance-to-land data? Using these data could address many of the problems identified in this paper, and would also allow a fair comparison with the AutoICE competition results to better justify the proposed approach. The authors are therefore strongly encouraged to do this.
(11) Why was the patch size set to 768? Why did you use 8 blocks for the U-Net? Have the authors tried other patch sizes? How do you ensure that the base model is optimized in terms of input data preparation and hyperparameter tuning?
(12) In table 3, what are the differences between 11 cls and 11 cls in the reference model?
(13) There are too many measures of accuracy, some of which are not defined. For each measure, please define its equation explicitly, explain the differences among these measures, and justify why so many measures are needed. At present, it is very confusing.
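For reference in connection with comment (2), the standard textbook per-pixel forms of the four losses under comparison are sketched below; the manuscript's exact class weighting and normalisation may differ.

```latex
% Standard per-pixel forms of the compared losses, for a pixel with predicted
% class probabilities p_k (k = 1..K), one-hot target y_k (or scalar target y
% and prediction \hat{y} in [0, 1]), and cumulative distributions P_k, Y_k.
\begin{align}
  \mathcal{L}_{\mathrm{CE}}    &= -\sum_{k=1}^{K} y_k \log p_k \\
  \mathcal{L}_{\mathrm{BCE}}   &= -\bigl[\, y \log \hat{y} + (1-y)\log(1-\hat{y}) \,\bigr] \\
  \mathcal{L}_{\mathrm{MSE}}   &= (y - \hat{y})^2 \\
  \mathcal{L}_{\mathrm{EMD}^2} &= \sum_{k=1}^{K} (P_k - Y_k)^2,
  \qquad P_k = \sum_{j \le k} p_j, \quad Y_k = \sum_{j \le k} y_j
\end{align}
```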
Citation: https://doi.org/10.5194/egusphere-2023-976-RC2
AC1: 'Reply to review', Andreas Stokholm, 05 Sep 2023
Viewed
- HTML: 345
- PDF: 140
- XML: 41
- Total: 526
- BibTeX: 28
- EndNote: 26