Introducing the Video In Situ Snowfall Sensor (VISSS)
Abstract. The open source Video In Situ Snowfall Sensor (VISSS) is introduced as a novel instrument for the characterization of particle shape and size in snowfall. The VISSS consists of two cameras with LED backlights and telecentric lenses that allow accurate sizing and combine a large observation volume with relatively high resolution and a design that limits wind disturbance. VISSS data products include per-particle properties and integrated particle size distribution properties such as particle maximum extent, cross-sectional area, perimeter, complexity, and – in the future – sedimentation velocity. Initial analysis shows that the VISSS provides robust statistics based on up to 100,000 particles observed per minute. Comparison of the VISSS with collocated PIP and Parsivel instruments at Hyytiälä, Finland, shows excellent agreement with Parsivel, but reveals some differences for the PIP (Precipitation Imaging Package) that are likely related to PIP data processing and limitations of the PIP with respect to observing smaller particles. The open source nature of the VISSS hardware plans, data acquisition software, and data processing libraries invites the community to contribute to the development of the instrument, which has many potential applications in atmospheric science and beyond.
Maximilian Maahn et al.
Status: open (extended)
- RC1: 'Comment on egusphere-2023-655', Thomas Kuhn, 30 May 2023
VISSS, PIP, and Parsivel Snowfall Observations from Winter 2021/22 in Hyytiälä, Finland https://doi.org/10.5281/zenodo.7797286
Hardware Design of the Video In Situ Snowfall Sensor v2 (VISSS2) https://doi.org/10.5281/zenodo.7640821
Model code and software
Video In Situ Snowfall Sensor (VISSS) Data Processing Library V2023.1.6 https://doi.org/10.5281/zenodo.7650394
Video In Situ Snowfall Sensor (VISSS) Data Acquisition Software V0.3.1 https://doi.org/10.5281/zenodo.7640801
The manuscript describes a new instrument to image individual snowflakes. It represents a relevant and useful contribution to the relatively few instruments that image snowflakes and collect detailed information on snowfall in this way. VISSS, the new instrument, is similar in its working principle to the SVI/PIP, as both use video imaging of a relatively large sampling volume (~5 cm x 5 cm x 5 cm) with illumination from the back. The VISSS has an improved resolution as well as better optics to minimize sizing errors. The VISSS differs from the SVI/PIP in that it uses two video cameras with orthogonal viewing directions. The 2-DVD already uses a similar approach, however with lower-resolution line cameras and issues when reconstructing images from the recorded lines. Thus, the VISSS provides more reliable data. The two viewing directions of the VISSS make it possible to properly define the sampling volume independent of the imaged particle’s size. This is an important advantage of the VISSS. In addition, the two views of course provide more information on each individual particle. Even more information can be derived from the multiple exposures of the same snow particle as it falls through the sampling volume, for example the fall speed.
I compliment the authors on their open approach of publishing all designs and software.
I recommend publication of this manuscript after a minor revision that should address a few questions and issues described below. I first raise a few important points and then give feedback on other minor things and suggest corrections.
Important points – specific comments
1) Resolution
When talking about “resolution” (e.g. L 74 “resolution of 43 to 59 μm px−1”) you almost exclusively refer to what I would call “pixel resolution”, i.e. what size on the object one pixel on the image corresponds to. To properly characterize an instrument’s capability to resolve fine details, one should give both the pixel resolution and the actual optical resolution that is realized with the imaging system. Optical resolution may be defined and measured in several ways, but I would propose to simply state (and show with examples) what the finest detail is that can be resolved. Even if the optical resolution of the optics may be better, I doubt that the finest detail that can be resolved is on the order of one pixel. Looking at the example images in Fig. 3, I would estimate the finest detail that can be resolved to be on the order of 100 μm.
In L 112 “quality of the lens proved to be borderline for the applications, resulting in slightly blurred particle images” you touch on optical resolution.
2) Calibration
The calibration you present in Sect. 3.6 compares the diameter in pixels determined using the image processing (Sect. 3.1) to the actual diameter of reference spheres. The slope of the fitted relationship shown in Eq. 5 (or 6) corresponds to the pixel resolution (which you confirmed with an imaged millimeter scale). The interesting result of the calibration (that you cannot get from the millimeter scale) is the offset in Eq. 5. Of course, you should then use the inverse of Eq. 5 to convert the determined size in pixels to μm. Then, whatever caused the offset will be taken care of. I am wondering why a similar calibration is not done for area and perimeter. You could determine, similar to Eq. 5, relationships between the properties determined in pixels and the actual properties of the reference spheres. This would account for certain effects of the image processing (at least for spheres). I think this would be more accurate than simply using the slope only for converting areas and perimeters.
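The proposed calibration and its inversion could be sketched as follows (the calibration numbers here are invented for illustration; the fit-then-invert logic is the point, not the values):

```python
import numpy as np

# Hypothetical calibration data: reference sphere diameters and the sizes
# the image processing reports for them (values invented for illustration)
d_ref_um = np.array([500.0, 1000.0, 2000.0, 3000.0])   # reference diameters [um]
d_meas_px = np.array([8.9, 17.4, 34.4, 51.4])          # processed sizes [px]

# Fit d_meas_px = slope * d_ref_um + offset, analogous to the manuscript's Eq. 5
slope, offset = np.polyfit(d_ref_um, d_meas_px, 1)

def px_to_um(d_px):
    """Invert the fit so the offset is removed before scaling to micrometers."""
    return (d_px - offset) / slope
```

The same fit could be repeated for the area and perimeter of the reference spheres, giving per-property slopes and offsets instead of reusing the Dmax slope only.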
I don’t agree with your explanation of the offset. L 283 “the Dmax estimator used to process the images often rounds up to the next full pixel” sounds difficult to believe. If this is true, then I highly recommend that you change the Dmax estimator function. I expect that the offset, at least in part, is due to image processing. After all processing steps (Gaussian blur, Canny filter, dilation, finding the contour, filling and eroding the contour), the resulting size may be offset by a certain bias. I would be curious what happens to artificial particle images during processing. Take for example a 2 px by 2 px square (which should have a Dmax of about 2.8 px, a cross-sectional area of 4 px², and a perimeter of 8 px) and see what properties are determined. If you did this for a few sizes, you could find a relationship as in Eq. 5.
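For reference, the ideal properties of such artificial square test particles can be computed directly (under the simple convention that an n x n px square spans its pixel corners); the output of the processing chain could then be compared against these values:

```python
import numpy as np
from itertools import combinations

def ideal_square_properties(n):
    """Ideal Dmax [px], area [px^2], and perimeter [px] of an n x n pixel square."""
    # The square covers n x n unit pixels, so its corners span [0, n] x [0, n]
    corners = [(0, 0), (n, 0), (0, n), (n, n)]
    dmax = max(np.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in combinations(corners, 2))
    return dmax, n * n, 4 * n

# A 2 px x 2 px square: Dmax ~ 2.83 px, area 4 px^2, perimeter 8 px
print(ideal_square_properties(2))
```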
3) Smallest particle
It would be interesting to know what the smallest particles are that can be measured (or are considered). I have read somewhere a condition of >= 2px for size and >=2px for area. I am not sure that 2px is really a meaningful limit. This is related to my comments on calibration and resolution above. After imposing your 2px-conditions, do you actually observe 2px-particles? If yes, did you examine them by looking at the actual images compared to the contour? If you took a 2px artificial particle, what size and area would be determined by image processing? After such testing, could you state what the smallest particles are that can be measured?
In Fig 7 you show only particles larger than about 10px. Is this the smallest particle?
4) Sizing errors
Apart from Eq. 5, you don’t seem to estimate an error in sizing. In addition to the uncertainties captured by the calibration (Eq. 5), I would expect image blur to cause an error. Particles moving at typical fall speeds may blur by about 1 px during the 60 μs exposure time. This may introduce an additional error. Was the calibration done with moving or stationary reference spheres? As a result, the calibration may or may not account for motion blur. All image-processing-related effects should be accounted for by the calibration; the error related to these effects could then be less than 0.1 px given the uncertainties in the offsets of Eqs. 5 and 6. It would be good to briefly discuss sizing errors, speculate about motion blur or potentially other error sources (e.g. can we assume that the telecentric lenses completely eliminate sizing errors?), and give an estimated error (perhaps depending on size). Can this error be smaller than the finest detail that can be resolved (the optical resolution of your imaging system, see my comments on resolution above)?
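A back-of-the-envelope motion-blur estimate (assuming a typical fall speed of 1 m/s; the exposure time and the VISSS1 pixel resolution are taken from the manuscript):

```python
fall_speed = 1.0        # m/s, assumed typical snowfall speed
exposure = 60e-6        # s, exposure time stated in the manuscript
pixel_res = 58.75e-6    # m/px, VISSS1 pixel resolution (slope of Eq. 5)

# Displacement during one exposure, expressed in pixels
blur_px = fall_speed * exposure / pixel_res
print(f"motion blur ~ {blur_px:.2f} px")
```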
L 369 “D < 0.3 mm indicating that discretization errors can become substantial for D < 0.3 mm”: these “discretization” errors could be discussed better in the context of sizing errors.
5) Error if only one camera used
I find it misleading to call the difference in Dmax determined from the two cameras’ images a “sizing error” (L 97) or “errors in Dmax … if only a single camera were used” (Sect. 4.3, L 380-381). The difference shows how much Dmax can vary with viewing direction for a particle of a certain shape. I would argue that this is not an error. While Dmax is defined for a two-dimensional image, it seems that you assume there is a true Dmax (the maximum of Dmax as viewed from all directions, equivalent to Dmax if one were to define it three-dimensionally). I am not sure about a potential radar bias from a Dmax that is underestimated with respect to this maximum Dmax. Would the radar signal vary with particle orientation in a similar way as Dmax varies with orientation? Then the true radar signal could not be estimated assuming that all particles have the maximum Dmax.
L 97-99: “Leinonen et al. (2021) found that using only a single perspective for sizing snow particles can lead to a normalized root mean square error of 6% for Dmax and Wood et al. (2013) estimated the resulting bias in simulated radar reflectivity to be 3.2 dB.”
Could you explain the 6% found by Leinonen et al. (2021)? (I couldn’t find it by quickly looking at this reference.) Wood et al. (2013) refer to using disdrometer size measurements instead of Dmax. They estimated that D_disdro = 0.82 Dmax, i.e. an effect of sizes 18% smaller than Dmax.
So, I am wondering how relevant this discussion around these “sizing errors” is. Additionally, I am wondering if Dmax is always the best size to use to simulate radar signals.
6) Sampling volume
The sampling volume is well defined by the intersection of the viewing volumes of the two cameras. Thus, deriving particle concentrations should be possible. It is not clear if this is actually done (or part of future work).
See also the comment about L 294-295 below: does the sampling volume depend on particle size due to the “buffer” that is removed?
7) Clarity in descriptions in Sect. 3 – technical corrections
In a few places, things remain unclear. It is often seemingly small details that make things unclear. In particular, many parts of Sect. 3 suffer from this and should be reviewed. Here are things that can be improved:
L 144: specify what ROI refers to here.
L146 “commonly used background detection algorithms”
What are these, algorithms to detect background?
L148 “few blurred pixels around the particle that would introduce a bias”
Unclear what this means.
L151-152: “Since filling the contour also closes potential holes in the
particles, the background detection and Canny filter masks are combined”
What are these two masks (only mentioned here), what is the result of this combination?
Define AR and alpha (is AR between 0 and 1 or > 1? alpha is the angle between what?).
L 181-182 “The minimum resolution of 1 pixel is accounted for by integrating the probability density function (PDF) for an interval of +/- 0.5 pixels.”
What does this sentence mean? What is "minimum resolution of 1 px"?
L 183 “This process requires matching the time stamps ("capture time") of both cameras”
You say that matching requires "capture time", but then you match capture id instead.
Then you use "recording time" to match capture id's. This is confusing.
Unclear method to find capture id offset:
Why 500 frames?
Why "This takes advantage of the fact that only moving frames are recorded."?
Why max 1ms in recording time? Not using capture time, but then use time (not more than 1ms apart)?
Can there be missing frames or varying frame rate?
L 194 “The joint product of the integrated PDF intervals”
Is this the product of the probabilities (according to the PDFs) to have Delta h, z, i?
Can you explain why 0.1% are falsely rejected (due to larger-than-normal Delta i)?
In Sect 3.3 you refer to effects of misalignment but may call it “vertical alignment” or “rotation”. Try to use a consistent terminology and clear and concise description.
When is a particle observed by only one camera? Only if outside common observation volume?
State that larger particles means lower ratio.
”impossible” (L 206) too strong, since you then show how it can be done:
(The Bayesian estimation of L 225 is applied to matched particles to get the rotation state; matching is done as described in L 229-236 using only Delta h.)
Potentially confusing that you use Y_L and Z_L, and then y_L and z_L, which are not the same. Maybe mention somewhere that you use capitals for…
Why can you assume psi=0? Eq. 2-4 should be simplified using psi=0.
Unclear why you mention that the retrieval is overconstrained. What does it mean? What are the consequences?
The procedure is unclear, try to reformulate. Refer to Eq. (2-4) if they are used in the procedure. Is the Bayesian estimation retrieval mentioned in the previous paragraph applied in this procedure? What are ”observed and retrieved particles”? How many manually selected cases? What are ”all” particles?
L 249 “pairing the particles closest in space of consecutive frames”
Can this be rephrased to make it clearer?
L 253 “The final tracking algorithm”
What is algorithm used here? "pairing particles closest in space"?
L 259 “measurements being slightly out of focus; this has been resolved for later campaigns”
Is this a result of camera alignment?
PSDs are not binned.
How is A binned with Dmax (or Deq)?
L275 “The VISSS calibration is tested …” could better be written as something like ”The sizing capabilities of the VISSS are calibrated …”.
How is Eq. 5 used to “calibrate” Dmax? The word “calibrate” seems to be the wrong word here. After the calibration, Dmax in μm is determined from Dmax in px using the equation (derived from Eq. 5): Dmax [μm] = (Dmax [px] − 0.35 px) × 58.75 μm/px.
”Eqs. 5 and 6 are used to calibrate Dmax, but only the slope is used to calibrate Deq, perimeter, and area because potential biases from the image processing routines have not been characterized” (unclear; see also my comments under 2) Calibration).
L 288 “difference to the reference spheres is less than 2%.”
What difference? Pixel resolution (slope of Eq 5)?
L 289 “Part of the calibration is to…” doesn’t seem to be good English.
L 291 “rectangular cuboid”, better use “cuboid”?
L 291-292 “Therefore, the observation volumes are calculated separately for leader and follower, the eight vertices of the follower observation volume”
Unclear what is done here?
What are the eight vertices?
Would it be better to extend the depth of the follower volume before intersection?
L 294-295 “a buffer of Dmax/2 to the edges of the image is used and the observation volume
is reduced accordingly. Finally, the volume is converted from pixels to m3 using the calibration factor estimated above”
What is a “buffer”? Comment on the Dmax/2 buffer, i.e. particle size dependent observing volume.
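To illustrate how strongly a Dmax/2 edge buffer could make the sampling volume size dependent, a minimal sketch (assuming, purely for illustration, a cubic 5 cm volume with the buffer applied in all three dimensions):

```python
def effective_volume_m3(dmax_m, side_m=0.05):
    """Effective sampling volume after removing a Dmax/2 buffer from every edge."""
    s = side_m - dmax_m  # Dmax/2 is lost at each of the two opposing edges per axis
    return max(s, 0.0) ** 3

# Under this assumption a 5 mm particle sees only ~73% of the nominal volume
ratio = effective_volume_m3(0.005) / effective_volume_m3(0.0)
print(f"{ratio:.3f}")
```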
Other minor things - technical corrections
L 111: I don't see how a 600 mm working distance and 250 Hz result in the given pixel resolution.
L 119 “rea- time” should be “real time”
L 132-133 “These three processing steps comprise the level1 products”
ENGLISH: object and subject swapped?
Anyway, not the processing steps: level1 comprises 5 properties for each particle.
L 134 “level1 observations are calibrated”
What does this ("calibrated") mean?
Fig 2.: metaRotation is missing?
L 175 “XF the vertical position in the follower” seems wrong, should be “X_F is the horizontal position in the follower image”
L 194 “Assuming that the probabilities for Δh and Δy” (Delta y) seems wrong, should be “… and Δz” (Delta z).
The observed offsets are not constant and can change due to wind load or pressure of accumulated snow on the VISSS frame.
Have changes in offset and/or rotation been noticed on a short time scale (due to wind load)?
L 213 “reader” seems wrong, should be “leader”
It cannot be clearly seen, but the cloud of points doesn’t seem to follow well the shown parameterizations.
Inconsistently high precision: intercepts in Eqs. 5 and 6 (only 0.349 ± 0.027); “resolution” of VISSS2 given as 43.13.
L 344 “spectra”
I would be consistent in calling this PSD.
L 363 “small sample size”
Be more specific? Few drops?
How does sample size affect DSD?
L 399 “90deg angle to a common observation volume”
Better not refer the angle to a volume but: “90deg angle to each other and observe a common observation volume”
L 406 “and integration of particle properties over a size distribution”
Size distributions are determined in this step, NOT properties integrated over a size distribution (could be done as further step, but is not done and meant here).
Revise also sentence with “integrated particle size distribution properties” in the Abstract.