Abstract
Access to below-canopy volumetric vegetation data is crucial for understanding ecosystem
dynamics. We address the long-standing limitation of remote sensing to penetrate deep into
dense canopy layers. LiDAR and radar are currently considered the primary options for
measuring 3D vegetation structures, while cameras can only extract the reflectance and
depth of top layers. Using conventional, high-resolution aerial images, our approach allows
sensing deep into self-occluding vegetation volumes, such as forests. It is similar in spirit to
the imaging process of wide-field microscopy, but can handle much larger scales and strong
occlusion. We scan focal stacks by synthetic-aperture imaging with drones and reduce out-
of-focus signal contributions using pre-trained 3D convolutional neural networks with mean
squared error (MSE) as the loss function. The resulting volumetric reflectance stacks
contain low-frequency representations of the vegetation volume. Combining multiple
reflectance stacks from various spectral channels provides insights into plant health, growth,
and environmental conditions throughout the entire vegetation volume. Compared with
simulated ground truth, our correction yields an average improvement of ~7× (min: ~2×, max:
~12×) for forest densities of 220–1680 trees/ha. In our field experiment, we
achieved an MSE of 0.05 when comparing with the top-vegetation layer that was measured
with classical multispectral aerial imaging.
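The focal-stack scanning described above can be illustrated with a minimal shift-and-average sketch: registered aerial images are shifted according to each camera's offset and a chosen focal depth, then averaged, so scene points at that depth align and reinforce while occluders blur out. This is a simplified illustration, not the authors' pipeline; the function names, the integer-pixel shifting, and the inverse-depth shift scaling are assumptions made for brevity (real synthetic-aperture pipelines use calibrated poses and sub-pixel interpolation, and the paper's out-of-focus suppression uses 3D CNNs, which are omitted here).

```python
import numpy as np

def synthetic_aperture_focus(images, offsets, depth):
    """Shift-and-average single-channel images to focus at one relative depth.

    images:  list of equally shaped 2D arrays (one spectral channel)
    offsets: per-image camera offsets in pixels at a reference depth of 1.0
    depth:   relative focal depth; parallax shifts scale inversely with depth
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dx, dy) in zip(images, offsets):
        # Integer-pixel shift for simplicity; real pipelines interpolate.
        sx, sy = int(round(dx / depth)), int(round(dy / depth))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    # Averaging aligns in-focus structure and spreads out-of-focus occluders.
    return acc / len(images)

def focal_stack(images, offsets, depths):
    # One focused slice per depth -> a coarse volumetric reflectance stack.
    return np.stack([synthetic_aperture_focus(images, offsets, d)
                     for d in depths])

def mse(a, b):
    # Mean squared error, as used for the field-experiment comparison.
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
```

Per-channel stacks produced this way could then be combined across spectral bands, mirroring how the paper fuses multiple reflectance stacks before analysis.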
| Original language | English |
|---|---|
| Article number | 0907 |
| Number of pages | 28 |
| Journal | Journal of Remote Sensing |
| Volume | 5 |
| DOIs | |
| Publication status | E-pub ahead of print - 08 Oct 2025 |
Fields of science
- 102020 Medical informatics
- 102003 Image processing
- 102008 Computer graphics
- 103021 Optics
- 102015 Information systems
- 102 Computer Sciences
JKU Focus areas
- Digital Transformation