Abstract
Capturing exposure sequences to compute high-dynamic-range (HDR) images causes motion blur when the camera moves. This also applies to light-field cameras: frames rendered from multiple blurred HDR light-field perspectives are blurred as well. While the recording time of an exposure sequence cannot be reduced for a single-sensor camera, we demonstrate how this can be achieved for a camera array, thereby decreasing capturing time and reducing motion blur for HDR light-field video recording. Applying a spatio-temporal exposure pattern while capturing frames with a camera array shortens the overall recording time and enables the estimation of camera movement within a single light-field video frame. Estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure supports regional motion deblurring. Missing exposures at the various perspectives are then interpolated.
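The spatio-temporal exposure pattern described in the abstract can be sketched as follows. This is a minimal illustrative assumption, not the authors' actual pattern: a diagonal layout over an R x C camera array that cycles per frame, so every exposure time is always recorded by several perspectives at once, and each camera cycles through all exposures over time. The function name, array size, and layout are hypothetical.

```python
import numpy as np

def exposure_pattern(rows, cols, n_exposures, frame):
    """Hypothetical spatio-temporal exposure assignment for a camera array.

    Returns a (rows x cols) grid of exposure indices in [0, n_exposures).
    A diagonal base pattern ensures neighbouring cameras use different
    exposures; shifting the pattern by one index per frame lets each
    camera record every exposure over n_exposures consecutive frames.
    """
    cams = np.arange(rows * cols).reshape(rows, cols)
    row_idx = cams // cols
    col_idx = cams % cols
    base = (row_idx + col_idx) % n_exposures  # diagonal spatial pattern
    return (base + frame) % n_exposures       # temporal cycling

p0 = exposure_pattern(3, 3, 3, frame=0)
p1 = exposure_pattern(3, 3, 3, frame=1)
```

With such a pattern, every exposure is present at multiple perspectives in every light-field video frame, which is what allows depth maps and local PSFs to be estimated from same-exposure views, and missing exposures at a given perspective to be interpolated from neighbours.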
| Original language | English |
| --- | --- |
| Number of pages | 10 |
| Journal | Computer Graphics Forum |
| Volume | 33 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Apr 2014 |
Fields of science
- 102 Computer Sciences
- 102003 Image processing
- 102008 Computer graphics
- 102015 Information systems
- 102020 Medical informatics
- 103021 Optics
JKU Focus areas
- Engineering and Natural Sciences (in general)