This work is distributed under the Creative Commons Attribution 4.0 License.
Auroral alert version 1.0: Two-step automatic detection of sudden auroral intensification from all-sky JPEG images
Abstract. A real-time alert system for sudden, significant intensification of auroral arcs with expanding motion (we call it "Local-Arc-Breaking" hereafter) was developed for the Kiruna all-sky camera (ASC) using ASC JPEG images. The identification is made in two steps: (1) Using an "expert system", in which a combination of simple criteria is applied to each pixel with calculations afterward, each JPEG image of the ASC is converted into a simple set of numbers, or "ASC auroral index", representing the occupancy of auroral pixels and the characteristic intensity of the brightest aurora in the image. (2) Using this ASC auroral index, the level of auroral activity is estimated, with Level 6 indicating clear Local-Arc-Breaking and Level 4 its precursor (Levels 1–3 are reserved for less active aurorae).
The first step is further divided into two stages: (1a) Using simple criteria for R (red), G (green), B (blue), and H (hue) values in the RGB and HLS colour codes, each pixel of a JPEG image is classified into several categories according to its colour, such as "visible diffuse", "green arc", "strong aurora" (which means saturated or mixed with the N2 red line at 670 nm), "cloud", "artificial light", and "moon". (1b) The percentage of the occupying area (pixel coverage) for each category and the characteristic intensity of "strong aurora" are calculated.
The obtained ASC auroral index is posted in both an ASCII format and as plots on a real-time basis at https://www.irf.se/alis/allsky/nowcast/. When Level 6 is detected, an automatic alert e-mail is immediately sent to the registered addresses. The alert system started on 5 November 2021, and the results (both Level 6 and Level 4 detections) were compared to manual (eye) identification of the auroral activity during the rest of the auroral season of the Kiruna ASC (i.e., a total of five months, until April 2022). Unless the Moon or clouds block the brightened region, a nearly one-to-one correspondence between Level 6 and Local-Arc-Breaking judged from the original ASC images is achieved, within a ten-minute uncertainty.
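As a rough illustration of this two-step scheme, a minimal Python sketch is given below. All thresholds, category names, and the level logic here are placeholder assumptions for illustration only; the calibrated, camera-specific criteria are tabulated in the manuscript itself.

```python
# Minimal sketch of the two-step ASC auroral index, assuming placeholder
# thresholds (the real criteria are site-specific and tabulated in the paper).
import colorsys

import numpy as np
from PIL import Image

def classify_pixel(r, g, b):
    """Step 1a: coarse colour-based category for one pixel (0-255 RGB).
    Categories and threshold values are illustrative only."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    if r > 240 and g > 240 and b > 240:
        return "strong_aurora"        # saturated / mixed with the N2 red line
    if g > 120 and g > 1.5 * r and g > 1.5 * b:
        return "green_arc"            # dominant auroral green
    if g > 60 and 0.2 < h < 0.5:
        return "visible_diffuse"
    if abs(r - g) < 15 and abs(g - b) < 15 and l > 0.3:
        return "cloud_or_moon"        # nearly colourless bright pixel
    return "dark_sky"

def asc_auroral_index(path):
    """Step 1b: pixel coverage (%) per category plus a characteristic
    intensity of the brightest 'strong aurora' pixels."""
    img = np.asarray(Image.open(path).convert("RGB"))
    counts, strong = {}, []
    for r, g, b in img.reshape(-1, 3):
        c = classify_pixel(int(r), int(g), int(b))
        counts[c] = counts.get(c, 0) + 1
        if c == "strong_aurora":
            strong.append(int(r) + int(g) + int(b))
    n = img.shape[0] * img.shape[1]
    coverage = {c: 100.0 * k / n for c, k in counts.items()}
    intensity = float(np.mean(sorted(strong)[-100:])) if strong else 0.0
    return coverage, intensity

def activity_level(coverage, intensity):
    """Step 2: map the index to an activity level (placeholder logic;
    Level 6 = clear Local-Arc-Breaking, Level 4 = its precursor)."""
    if coverage.get("strong_aurora", 0) > 5 and intensity > 600:
        return 6
    if coverage.get("green_arc", 0) > 10:
        return 4
    return 1  # Levels 1-3: less active aurorae (not distinguished here)
```

The actual method uses more categories and double-bound conditions on R, G, B, and H, with values tuned to the Kiruna camera.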
-
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2022-331', Anonymous Referee #1, 16 Jun 2022
The submitted manuscript presents an approach for real-time aurora detection based on hemispheric RGB camera images. In contrast to other approaches using deep learning, the introduced method uses spectral indices calculated from RGB image data.
Unfortunately, the authors assume extensive prior knowledge of the detection / measurement of aurorae, which impedes an easy and exciting entry into the topic.
The method is very complex and hard to understand due to missing figures that would definitely help to grasp the matter. Furthermore, it sounds like the approach is only applicable to one study area using one specific camera setup where the authors developed the method. It was not stated that the approach was tested / validated at some other location or using some other camera configuration.
The evaluation is based on manual observations that AFAIK could be very subjective. Unfortunately, it is not stated whether the reference data are based on observations by several operators to provide some quality control of the Ground Truth data. Due to the mentioned points, I recommend resubmission of the paper after extensive rework, as the method itself sounds interesting.
Some more comments are given directly in the attached PDF.
-
AC1: 'Reply on RC1 (our plan toward revision)', Masatoshi Yamauchi, 27 Jun 2022
Thank you for your encouraging comments and for reminding us that the potential readership is much wider than the auroral observation community.
To reach wider readers, we will add three (or more) figures in the revision (a keogram, an ASC image with the different categories and areas marked, and an image showing the N2 red line), with more explanation in the text. We actually have them nearly ready (used in oral presentations for non-auroral communities) but did not include them here (this was our mistake). We also plan to add a table of examples of the actual classification (to be used in combination with the new figures).
In the planned revision, we will also explain the peculiarity of aurorae: no classification method (including machine learning) can be applied to different cameras without changing parameters or making more fundamental selections (for machine-learning methods, the training set must differ between cameras, and the definition of such a training set is as subjective as the colour definitions in the present method).
Citation: https://doi.org/10.5194/egusphere-2022-331-AC1
-
CC1: 'Comment on egusphere-2022-331', Christian Kehl, 28 Jun 2022
I thank the authors for the interesting manuscript. The manuscript details a simple method to classify images in an ASC aurora image collection according to the presented ASC index. The method is computationally lightweight, which is important for the goal of nowcasting local arc breaking of auroras in all-sky camera images. The single ASC index indicator makes the classification outcome easily interpretable by experts and novices in the field.
That said, next to the line-by-line review attached, there are some general comments that need addressing in a potential revision.
- The manuscript makes the data appear as very complex, multi-dimensional and challenging. Sadly, this only appears so due to ambiguous writing. In the end, after careful review, it appears the images are rather small (150,00 pixel) 3-band images, which is hidden behind confusing writing and misleading illustrations (fig. 1). This needs fixing.
- The manuscript makes the developed method appear as intricate-yet-powerful, which is related to how the data is presented. In the end, the presented method is a multi-level, double-bound thresholding with 5 different thresholding levels (i.e. categories). This method is indeed simple, but introduces a considerable amount of fine-tuned parameters due to the lack of in-depth prior image processing. This also makes the formulation ambiguous and the method very difficult to replicate for other observatories.
- The manuscript requires more context information to make it accessible and comprehensible to researchers outside the expert aurora observation community. Special terms and phenomena such as the N2 red line or local arc breaking are neither explained in-text nor referenced in the literature. It is very hard to understand the message and research objectives of the manuscript for a common EGU audience. There is still more than sufficient space available for some added references to give the reader information on where to find further details on the assumption baseline of the manuscript.
- The manuscript lacks references to make some context information understandable and to make certain design decisions of the method easier to reason about and comprehend.
- The manuscript lacks a detailed discussion of the imaging parameters (camera resolution, quantisation, etc.). The authors use JPEG images with small resolutions, dynamic exposure times from the camera, hence automatic white balancing, and an ingrained 3-band 8-bit pixel quantisation. The paper lacks any in-depth discussion of the actual influence of those instrumentation parameters. The paper lacks any comparison and impact assessment of the image compression error from JPEG. Overall, the manuscript treats the actual imaging influences superficially.
- The manuscript lacks a comparison with alternative, more up-to-date methods of image pattern recognition.
- The manuscript needs considerable language revision, as minor and intermediate grammar mistakes are frequent.
- The actual presentation of the ASC images needs revision as the displayed metrics within the image are not indicative and certain features in the images require expert explanation. Furthermore, fewer-but-larger images would support reader comprehension.
I would appreciate a careful revision of the mentioned points, as well as the marked points in-text.
-
AC2: 'Reply on CC1', Masatoshi Yamauchi, 05 Jul 2022
Reply to comments by Christian Kehl (our plan toward revision)
Thank you for your detailed comments and for reminding us that the potential readership is much wider than the auroral observation community (our main target readers). To reach wider readers, we will add more explanations and figures/tables (as written in the reply to Reviewer 1), e.g., explaining the aurora images, the auroral activity itself, and how to interpret the aurora.
Also, we will make it clearer that the presented two-step method is the first attempt at a "translation" of how auroral scientists actually judge the "onset" of auroral activity in the sky: first evaluate the colour information to judge whether it is aurora or not (using green colour alone cannot distinguish diffuse aurora from cloud because their morphologies are similar, and this is why we need three to four colours), and then evaluate the activity level from both the intensity and the area within the field of view.
The main users are auroral-community scientists and operators (they asked us to describe our method, which is already in successful operation), who are familiar with classifying the auroral activity level and are ready to apply this method (after modification of the parameters). This is why the old version 0 is already applied in Finland (private communication, 2022).
In the revision, we will stress that there is no automated identification scheme for "onset" in either machine-learning or expert-system methods. What exists so far is just one aurora-type category per picture, without indicating the activity level, although the activity level is the most important parameter. Thus this is the first attempt of its kind, and evaluation of the method must be done against the eye-identification method, not against machine-learning methods.
Even for plain classification without intensity, there is no "up-to-date" automated classification scheme for aurorae better than the human eye, which requires many years of training. For example, Nanjo et al. (2022, Figure 1) and its reference (Clausen & Nickisch, 2018, Figure 1) have three auroral categories, "arc", "discrete", and "diffuse", as one value for each picture (here "arc" is just a special form of "discrete" from the auroral-science viewpoint), but "discrete aurora" and "diffuse aurora" always appear together for all active aurorae and during the precursors of active aurorae. Also, most auroral images are mixed with cloud (it is very rare to have a clear sky during the night), causing the most up-to-date automated classifications to end up as "ambiguous" (Clausen & Nickisch, 2018) or "Aurora and cloud" (Nanjo et al., 2022). In contrast, our method gives, in addition to the percentage of sky coverage for each category, the activity level of discrete aurora as the L3 parameter.
Thanks to your comment, we now realise that we have to explain this background in the introduction for a wider audience than the auroral community.
Clausen, L. B. N., and Nickisch, H.: Automatic classification of auroral images from the Oslo Auroral THEMIS (OATH) data set using machine learning, J. Geophys. Res., 123, 5640–5647, https://doi.org/10.1029/2018JA025274, 2018.
Nanjo, S., Nozawa, S., Yamamoto, M., Kawabata, T., Johnsen, M. G., Tsuda, T. T., and Hosokawa, K.: An auroral detection system using deep learning: real-time operation in Tromsø, Norway, https://doi.org/10.21203/rs.3.rs-1090985/v1, 2021.
Citation: https://doi.org/10.5194/egusphere-2022-331-AC2
-
RC4: 'Reply on AC2', Christian Kehl, 28 Jul 2022
I agree with the proposed changes and I'd appreciate a better introduction to the topic. I also encourage the authors to provide that improved introduction, because readers will have an easier time appreciating and gauging the level of accomplishment of the method.
Citation: https://doi.org/10.5194/egusphere-2022-331-RC4
-
AC5: 'Reply on RC4', Masatoshi Yamauchi, 29 Jul 2022
Thank you again for your encouraging comments. It is also encouraging to know that the topic is relevant to a much wider community than the auroral community.
Citation: https://doi.org/10.5194/egusphere-2022-331-AC5
-
RC2: 'Comment on egusphere-2022-331', Christian Kehl, 04 Jul 2022
-
AC3: 'Reply on RC2', Masatoshi Yamauchi, 05 Jul 2022
Please see Reply to CC1
Citation: https://doi.org/10.5194/egusphere-2022-331-AC3
-
RC3: 'Comment on egusphere-2022-331', Anonymous Referee #3, 24 Jul 2022
The paper presents a fast approach for real-time aurora detection using RGB images acquired by an All-Sky Camera (ASC) (i.e., a full-frame camera with a fish-eye lens, pointing towards the sky), located in a heated dome at the Kiruna Observatory. The aim of the authors is to identify intensification of auroral arcs with expanding motion, named Local-Arc-Breaking (LAB), and to send email alerts when the “Level 6” criterion for LAB aurora detection is satisfied.
The proposed approach is composed of two steps: 1) perform a pixel-wise classification of all the image pixels into different categories (including three aurora categories), based on the color information of the pixel itself; 2) compute a series of indexes based on the percentage of pixels detected for each category and the average luminosity of the most intense aurora pixels. Based on the computed indexes, the alert system can detect the most relevant aurora events and trigger an alert.
The topic is interesting and the proposed solution for a real-time alert is relevant, especially because it is a fast and not computationally expensive approach (which is crucial for real-time applications). However, there are some critical aspects that should be solved (or at least discussed):
- Computation of the thresholds and transferability of the method.
The pixel-wise classification is basically based on thresholding of RGB data, and it relies on a large number of thresholds. This makes the classification fast and computationally easy. However, it requires proper fine-tuning of the threshold levels, which is crucial for an accurate classification. I believe that the threshold calibration is mostly based on the authors’ experience, as very little information about this is provided in the paper. The clear consequence of this is that the procedure may not be transferable to another site, with different environmental conditions and different instruments (as the authors stated in the paper). With this idea in mind, it would be useful if the authors could discuss their choices more deeply and provide some more of the considerations that led to the derived threshold values, rather than just providing the values themselves (one data-driven starting point is sketched below).
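One purely illustrative way to seed such thresholds from data rather than from experience (this is not the authors' procedure) is to derive per-channel bounds from a small set of manually labelled pixels:

```python
# Hypothetical helper: derive double-bound channel thresholds for one category
# from manually labelled sample pixels (illustration only, not the paper's method).
import numpy as np

def channel_bounds(labelled_pixels, low=2.5, high=97.5):
    """labelled_pixels: (N, 3) array of RGB values known to belong to one
    category. Returns per-channel (lower, upper) bounds covering the central
    95 % of the samples, as a starting point for manual fine-tuning."""
    px = np.asarray(labelled_pixels, dtype=float)
    lo = np.percentile(px, low, axis=0)
    hi = np.percentile(px, high, axis=0)
    return list(zip(lo.round(1), hi.round(1)))  # [(R_lo, R_hi), (G_lo, G_hi), (B_lo, B_hi)]
```

Bounds derived this way would still need per-site manual adjustment, which is consistent with the transferability concern raised above.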
- Camera geometry
Is the camera geometry considered in your study? I understand that you are not trying to reconstruct the location of the aurora (which may be an interesting extension of your study for the future), but the “occupancy” of the pixels classified as aurora (within the different classes) among the total number of pixels. However, you are using a fish-eye lens (Nikon Nikkor 8 mm), with strong geometrical distortions, which requires a proper camera model for correction (please see, e.g., https://docs.opencv.org/4.x/db/d58/group__calib3d__fisheye.html). I think that this aspect should be taken into account (e.g., by undistorting the images before any image processing) or at least discussed, to see whether geometrical distortions have a relevant impact on the accuracy of the classification (a minimal undistortion sketch follows below).
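For reference, a minimal sketch of the undistortion step suggested here, using OpenCV's fisheye module. The camera matrix K and distortion coefficients D below are placeholder values; a real calibration (e.g., via cv2.fisheye.calibrate) would be required, and the file name is hypothetical.

```python
# Sketch of fish-eye undistortion with OpenCV's fisheye module,
# assuming placeholder intrinsics (a real calibration is needed).
import cv2
import numpy as np

img = cv2.imread("asc_frame.jpg")           # hypothetical ASC frame
h, w = img.shape[:2]

K = np.array([[300.0, 0.0, w / 2],          # placeholder camera matrix
              [0.0, 300.0, h / 2],
              [0.0, 0.0, 1.0]])
D = np.array([0.1, 0.01, 0.0, 0.0])         # placeholder coefficients k1..k4

map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2,
                        interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT)
cv2.imwrite("asc_frame_undistorted.jpg", undistorted)
```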
- Comparison with different methods
It would be useful to have some comparison with different state-of-the-art approaches (or also widely established methods, if any) to fully grasp the potential of the proposed solution.
- Deep Learning
In the paper, the authors state that Neural Networks (NN) are black boxes, difficult to debug, and strongly dependent on the training data. I partially agree with the authors on these statements. However, I believe that Deep Learning (DL) is a powerful tool for identifying the presence of auroral events and for classifying them according to a-priori defined classes (as is done in this paper) and ground-truth data (manually classified images, or images classified with the algorithm proposed in this paper). Moreover, if the NN layers are trained with datasets from different locations and cameras, and with proper data augmentation, this may result in a more transferable and generalized approach that can be applied at different observatories. Additionally, an NN is basically based on sequences of filter convolutions. Therefore, it may overcome the limitation of considering each pixel independently from its neighborhood for the classification (of course, DL is not the only possible solution for this: spatial filters and Markov Random Fields are a few other examples). Eventually, DL may be combined with step 2 of the proposed solution to build a real-time alert framework.
All things considered, I understand that the use of DL is out of the scope of this paper (I suggest considering it for future works, though), but I believe that some more references and discussion of recent works that involve DL for aurora image classification may improve the quality and completeness of this work (a minimal CNN sketch follows below for illustration).
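For illustration only, a minimal convolutional classifier of the kind discussed above might look as follows in PyTorch; the category labels and architecture are assumptions for this sketch, not taken from the manuscript or the cited works.

```python
# Minimal sketch of a CNN classifier for whole ASC images, assuming
# hypothetical category labels; illustrative only.
import torch
import torch.nn as nn

CATEGORIES = ["arc", "discrete", "diffuse", "cloudy", "clear_sky"]  # assumed labels

class AuroraCNN(nn.Module):
    def __init__(self, n_classes=len(CATEGORIES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # convolutions aggregate pixel neighbourhoods
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 3, H, W) RGB image tensor
        f = self.features(x).flatten(1)
        return self.classifier(f)      # per-class logits

# Example: classify one 480x480 RGB frame (random data as a stand-in).
model = AuroraCNN()
logits = model(torch.rand(1, 3, 480, 480))
print(CATEGORIES[logits.argmax(dim=1).item()])
```

Note that, unlike the pixel-wise thresholding, such a network outputs one category per image and would still need a separate step to estimate the activity level.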
- Figure and Tables
I believe that the figures and tables can be improved. In my opinion, Figure 1 is not clear, and it should be revised (please see the comments in the pdf supplement). Additionally, figures with ASC images are often placed far from the text in which they are cited (even 3 or 4 pages afterwards), and this prevents a smooth read of the paper. I suggest revising the figure placement in the text, reducing the number of ASC images to the really informative ones, and making them bigger.
Moreover, I believe that the tables containing the thresholding conditions are too verbose and don’t add relevant information to the text. Moreover, they are split across different pages, making the tables even harder to read. I suggest summarizing the main concepts of the tables (e.g., the different classes and the main classification criteria) in just one or two tables to be included in the main text of the paper. On the other hand, all the thresholds can be condensed into one table in the appendix.
- Background knowledge
As commented by other reviewers, I also believe that some background knowledge should be included in the text to allow a wider community to understand your work.
_______________
Please, refer to the comments included in the pdf supplement for other minor comments.
-
AC4: 'Reply on RC3', Masatoshi Yamauchi, 27 Jul 2022
Reply to Reviewer #3 (our plan toward revision)
Thank you for your encouraging comments and specific suggestions toward improvements, including the PDF supplement. We plan to revise the paper with stress on the following points:
#threshold
We now realise that we did not explain the "general guideline" for choosing each criterion, such as "how many non-auroral light sources we considered", "how we can avoid the overlap of two criteria", "what the rough indication of each sub-division is in actual aurorae (we will add figures for this)", etc., such that a third person can easily construct similar criteria for another site.
#camera geometry
Including the camera geometry is part of the future task of considering the field-of-view (Sect. 4.5). As explained there, pixels far from zenith (at large zenith angles) are valued less than pixels at zenith. We will enhance that explanation and also explain in the method part why we treat the pixels equally.
#different method
Except for deep learning, we are not aware of state-of-the-art methods for auroral image analysis. There is a geomagnetic method using geomagnetic deviations only, but its success rate is not high for the Kiruna all-sky camera.
#Deep Learning
Thank you for explaining the potential of deep learning; we are actually considering using an NN for step 2 (from index values to alert level). We will include more explanation of deep learning and also mention from which part an NN can be combined with our method.
#Figures
We will add three or more figures (one or two figures before Figure 1). The placement and size of the figures are a LaTeX typesetting issue, and we will make sure they appear on the correct pages at the correct size during copy editing.
#Tables
Placing Tables 1–7 in one place (one page for aurora and one page for non-aurora), such as an appendix, is a good idea. The journal does not allow a multi-entry table format, which is why we separated them into Tables 1–7. We will check whether your suggestion is technically possible.
#Background knowledge
Yes, we will add more explanation of the background knowledge.
Citation: https://doi.org/10.5194/egusphere-2022-331-AC4