This work is distributed under the Creative Commons Attribution 4.0 License.
AngleCam V2: Predicting leaf inclination angles across taxa from daytime and nighttime photos
Abstract. Understanding how plants capture light and maintain their energy balance is crucial for predicting how ecosystems respond to environmental changes. By monitoring leaf inclination angle distributions (LIADs), we can gain insights into plant behaviour that directly influences ecosystem functioning. LIADs affect radiative transfer processes and reflectance signals, which are essential components of satellite-based vegetation monitoring. Despite their importance, scalable methods for continuously observing these dynamics across different plant species throughout day-night cycles are limited.
We present AngleCam V2, a deep learning model that estimates LIADs from both RGB and near-infrared (NIR) night-vision imagery. We compiled a dataset of over 4,500 images across 200 globally distributed species to facilitate generalization across taxa. Moreover, we developed a method to simulate pseudo-NIR imagery from RGB images, enabling efficient training of a deep learning model that tracks LIADs across day and night. The model is based on a vision transformer architecture with mixed-modality training on the RGB and synthetic NIR images.
AngleCam V2 achieved substantial improvements in generalization compared to AngleCam V1 (R² = 0.62 vs 0.12 on the same holdout dataset). Phylogenetic analysis across 100 genera revealed no systematic taxonomic bias in prediction errors. Testing against leaf angle dynamics obtained from multitemporal terrestrial laser scanning demonstrated the reliable tracking of diurnal leaf movements (R² = 0.61–0.75) and the successful detection of water limitation-induced changes over a 14-day monitoring period.
This method enables continuous monitoring of leaf angle dynamics using conventional cameras, opening applications in ecosystem monitoring networks, plant stress detection, the interpretation of satellite vegetation signals, and citizen science platforms for a global-scale understanding of plant structural responses.
Status: open (until 31 Jan 2026)
- RC1: 'Comment on egusphere-2025-5223', Anonymous Referee #1, 25 Nov 2025
- RC2: 'Comment on egusphere-2025-5223', Anonymous Referee #2, 13 Jan 2026
General comments
This is a useful and timely preprint that makes a clear methodological advance on AngleCam V1 by greatly expanding taxonomic and site diversity, analyzing phylogenetic structure in errors, and validating temporal dynamics against TLS. The methods are generally transparent, and the open code/data availability is excellent. In its present form, however, the manuscript would merit revisions focused on sharpening and modestly tempering the headline claims, and on clarifying the evaluation design and limitations, especially for the pseudo-NIR pipeline.
- The Introduction devotes substantial space to LIADs and motivates the work around recovering full leaf angle distributions. However, in the Results the primary “headline” performance reporting is largely based on average leaf angle, with limited quantitative assessment of the predicted distribution shape. This creates a mismatch between what is promised early and what is actually presented. I recommend either: (1) adding distribution-level evaluation metrics (e.g., the fitted alpha/beta parameters of a beta distribution, or simply the skewness you briefly discussed) and reporting them prominently alongside the mean-angle metrics (see the sketch after these general comments); or (2) reframing the Introduction to emphasize mean angle as the primary validated output and positioning LIAD recovery as a secondary/ongoing capability. Without one of these changes, readers may reasonably expect stronger LIAD-level validation than is currently provided.
- Nighttime/NIR capability is not validated broadly enough to support strong claims. The paper introduces pseudo-NIR and mixed-modality training, but “validation was performed exclusively on original RGB images” and NIR generalization is inferred largely from limited indoor experiments. The conclusions claim reliable tracking “across day and night imagery,” which should be softened unless supported by broader true-NIR validation across more scenes/species/cameras. The manuscript should also more explicitly characterize what is and isn’t captured by the pseudo-NIR pipeline and provide ablations showing its benefit on real NIR test imagery.
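To make the distribution-level evaluation suggested in the first general comment concrete, here is a minimal Python sketch, assuming per-image 1-D arrays of predicted and reference leaf inclination angles in degrees (0–90). The metric choices (beta-distribution fit, skewness, Wasserstein distance, Kolmogorov–Smirnov statistic) and all names are illustrative, not taken from the manuscript:

```python
# Hypothetical distribution-level evaluation for LIADs; metric choices are
# illustrative, not the authors' method.
import numpy as np
from scipy import stats

def liad_distribution_metrics(pred_angles, ref_angles):
    """Compare predicted and reference leaf angle samples beyond their means."""
    # Normalize angles to (0, 1) so a two-parameter beta distribution can be fitted.
    eps = 1e-6
    p = np.clip(np.asarray(pred_angles) / 90.0, eps, 1 - eps)
    r = np.clip(np.asarray(ref_angles) / 90.0, eps, 1 - eps)

    # Fit beta shape parameters with location/scale fixed to the unit interval.
    a_p, b_p, _, _ = stats.beta.fit(p, floc=0, fscale=1)
    a_r, b_r, _, _ = stats.beta.fit(r, floc=0, fscale=1)

    return {
        "skew_pred": stats.skew(p),
        "skew_ref": stats.skew(r),
        "beta_params_pred": (a_p, b_p),
        "beta_params_ref": (a_r, b_r),
        # Earth mover's distance in degrees: sensitive to shape, not just mean.
        "wasserstein_deg": stats.wasserstein_distance(p, r) * 90.0,
        # Two-sample Kolmogorov-Smirnov statistic as a shape-mismatch score.
        "ks_stat": stats.ks_2samp(p, r).statistic,
    }
```

Reporting such metrics alongside the mean-angle R²/RMSE would directly support the LIAD framing of the Introduction.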
Specific comments
L66–121: The ecological motivation is compelling but dense, with many overlapping citations about leaf angles and energy balance. Consider trimming slightly and re-allocating that space to a more explicit “What’s new in AngleCam V2?” passage.
L106: “The plausibility and potential of AngleCam were underlined, given that predicted leaf inclination angle distributions over multiple months had a tight relationship to environmental conditions”. Agreement with environmental conditions is not convincing evidence of plausibility on its own: there are cases where leaf angle is decoupled from those conditions. The TLS method serves as a more convincing validation. It would also help to explain why only a single TLS scan was done, given the possible occlusion you mention in line 452; multiple scans around the plant could overcome this issue and provide better reference data.
L172–L181: The labeling method is described as visual estimation of whole leaf surface inclination. Please clarify how occluded/curled leaves are handled, whether the 20 leaves are sampled across canopy strata, and whether annotators used any geometric cues (e.g., midrib line) consistently.
L199–229: Again, the synthetic NIR pipeline is clearly described, but its adequacy as a proxy for real night-vision images is not quantitatively assessed beyond the qualitative Fig. 2. Consider adding a supplemental figure comparing pseudo-NIR and real NIR across several species/backgrounds, showing similarities and known discrepancies, for instance using an image-similarity metric as sketched below.
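As a concrete starting point, a minimal sketch of such a quantitative comparison, assuming co-registered pseudo-NIR and real NIR frames of the same scene; the file names and the SSIM/RMSE choices are illustrative, not from the manuscript:

```python
# Hypothetical pseudo-NIR vs. real NIR adequacy check on co-registered,
# grayscale frames of the same scene.
import numpy as np
from skimage import io, img_as_float
from skimage.metrics import structural_similarity

pseudo = img_as_float(io.imread("pseudo_nir.png", as_gray=True))
real = img_as_float(io.imread("real_nir.png", as_gray=True))

# Structural similarity quantifies texture/contrast agreement that a purely
# visual comparison (Fig. 2) cannot.
ssim, ssim_map = structural_similarity(pseudo, real, data_range=1.0, full=True)
rmse = np.sqrt(np.mean((pseudo - real) ** 2))
print(f"SSIM = {ssim:.3f}, RMSE = {rmse:.3f}")
# The SSIM map (ssim_map) highlights where the simulation deviates most,
# e.g., specular leaf highlights or background vegetation.
```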
L244–249: If possible, report separate performance metrics for daytime RGB vs. pseudo-NIR in the TLS experiments, which serve as the main evidence for NIR robustness.
L380–392: You nicely contrast V1 vs V2 on their respective validation sets and on the new dataset. Please clarify whether V1 was re-trained on the expanded dataset or simply applied as originally trained; the current wording could be read either way.
L370–379: The 5° net increase over 14 days (0.33°/day) is modest relative to the ~9° RMSE reported for validation. It would be valuable to report at least a p-value for the trend line to show that the long-term change is statistically distinguishable from day-to-day variability. Please also clarify whether any concurrent measurements of soil moisture or plant water status were taken; currently, the inference that the changes are drought-driven rests solely on the watering history and visual interpretation.
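For illustration, the requested trend test could be as simple as the following sketch, where the daily series is a synthetic placeholder rather than the authors' data:

```python
# Hypothetical OLS trend test on 14 daily mean leaf angles.
import numpy as np
from scipy import stats

days = np.arange(14)  # day of the monitoring period
rng = np.random.default_rng(0)
# Synthetic placeholder series: ~0.33 deg/day drift plus day-to-day noise.
daily_mean_angle = 45 + 0.33 * days + rng.normal(0, 1.5, size=14)

res = stats.linregress(days, daily_mean_angle)
print(f"slope = {res.slope:.2f} deg/day, p = {res.pvalue:.3g}, "
      f"r^2 = {res.rvalue**2:.2f}")
# A p-value well below 0.05 would indicate the drift is distinguishable from
# day-to-day noise; the residual scatter also contextualizes the 5° net change
# against the ~9° validation RMSE.
```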
L268–283: The genus-level approach is sensible, but please indicate whether Pl@ntNet identifications were manually checked for obvious mislabels.
L344–353, L418–437: You conclude that there is “no phylogenetic structure” and “no systematic phylogenetic bias”. Given the relatively small sample of 100 genera, noisy MAE estimates, and possible genus-ID uncertainty, it would be more cautious to write that “no statistically significant phylogenetic structure was detected in the residuals”. It may be worth noting that the absence of detectable structure does not preclude modest clade-specific biases that could appear with larger sample sizes.
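One lightweight way to phrase such a significance test is sketched below with a Mantel-type permutation on hypothetical inputs (a pairwise phylogenetic distance matrix between genera and per-genus MAEs), standing in for formal measures such as Pagel’s lambda or Blomberg’s K:

```python
# Hypothetical permutation test for phylogenetic signal in residuals.
# Inputs: phylo_dist, a (100 x 100) pairwise phylogenetic distance matrix,
# and genus_mae, a length-100 vector of per-genus MAEs. Illustrative only.
import numpy as np

def mantel_signal_test(phylo_dist, genus_mae, n_perm=9999, seed=0):
    """Correlate phylogenetic distance with |MAE_i - MAE_j| under permutation."""
    rng = np.random.default_rng(seed)
    mae_dist = np.abs(genus_mae[:, None] - genus_mae[None, :])
    iu = np.triu_indices_from(phylo_dist, k=1)  # unique genus pairs

    def corr(mat):
        return np.corrcoef(phylo_dist[iu], mat[iu])[0, 1]

    observed = corr(mae_dist)  # positive correlation would indicate signal
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(genus_mae))  # shuffle genus labels
        null[i] = corr(mae_dist[np.ix_(perm, perm)])
    # One-sided p-value: how often shuffled labels show at least as strong a signal.
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```

A non-significant result here would support the more cautious wording without overstating the absence of clade-specific bias.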
Technical corrections
L144–149, bullet list: add or remove terminal periods for consistency.
L244–245: “AngleCamV2” (without space) vs “AngleCam V2” elsewhere; choose one format and use consistently.
L370–373: “From this point onward, angles consistently exceeded baseline values…”. Consider specifying “pre‑December 25 baseline” for clarity.
Fig. 5 caption: “An interactive visualization can be assessed at Anonymous GitHub.” Typo: change “assessed” to “accessed”.
L488: “deployment capability canhelp to address this knowledge gap.” Typo: add a space in “canhelp”.
Citation: https://doi.org/10.5194/egusphere-2025-5223-RC2
Viewed
Since the preprint corresponding to this journal article was posted outside of Copernicus Publications, the preprint-related metrics are limited to HTML views.
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 172 | 0 | 3 | 175 | 0 | 0 |
Leaf angle distribution is an important canopy structural variable, but its temporal and spatial dynamics are much less well understood than those of LAI. The primary reason is that we do not yet have a reliable method to measure it at large spatial scales. This work developed AngleCam V2, which uses a deep learning model to estimate leaf inclination angle distributions from RGB and night-vision NIR imagery. By expanding the training data across species, canopy structures, and lighting conditions, as well as integrating synthetic near-infrared augmentation, the developed model shows improved generalization and temporal robustness, including nighttime compatibility, which is promising for large-scale applications. The current version is well written and clearly presents the advantages of the new version of AngleCam. My only concern is the accuracy of the training datasets, in terms of the visual interpretation, the 20-leaf sample size, and the representativeness of the entire canopy.
Major concerns:
Minor concerns: