
Improved Camera Distortion Correction and
Depth Estimation for Lenslet Light Field Camera
Changkun Yang, Zhaoqin Liu, Kaichang Di, Yexin Wang, and Man Peng
Abstract
A light field camera can capture both radiance and angular
information, providing a novel solution for depth estimation.
This paper proposes two improved methods, distortion model
optimization and depth estimation refinement, for a lenslet
light field camera. For distortion model optimization, a novel
14-parameter distortion model is presented, which involves the
sub-aperture images generated from the light field camera
images. For depth estimation refinement, an algorithm that
reduces the error of depth estimation in weak-texture regions
is proposed based on multi-view stereo matching using a cost
volume. Experimental results show that the projection error
decreased by about 30% and the depth root-mean-squared error
on real-world images decreased by about 42% with our distortion
correction method and depth estimation method compared with
state-of-the-art algorithms. These results verify the correctness
and effectiveness of the proposed methods and show a significant
improvement in the accuracy of depth map estimation.
Introduction
Light field cameras have become popular in recent years in
computational photography, computer vision, and the close
range photogrammetry field because they can capture both the
radiance and angular information in a single snapshot thanks
to a micro-lens array placed between the main lens and sensor.
Typical applications include industrial measurement (Heinze
et al., 2016), measurement of the growth of plants and animals
(Apelt et al., 2015), visual odometry (Dansereau et al., 2011),
and simultaneous localization and mapping (SLAM) (Dong et al.,
2013). Light field cameras can be divided into
two categories depending on the distance between the micro-lens
array and the sensor. In the first category, called unfocused
plenoptic cameras (Adelson and Wang, 1992; Ng et al., 2005),
the distance is fixed to be equal to the micro-lens focal length,
such as in the commercial products Lytro and Lytro Illum
(Lytro, 2017). In the second one called focused plenoptic cam-
eras (Lumsdaine and Georgiev, 2009; Perwaß and Wietzke,
2012), the distance can be changed, such as Raytrix cameras
(Raytrix, 2017). In this study, we focus on unfocused plenop-
tic cameras and use a Lytro Illum camera.
Depth estimation is one of the most important research
topics in light field camera image postprocessing. A light
field image can be processed into multiple images of the scene
from different views, namely sub-aperture images (which will
be described in detail in a later section). The depth
estimation is based on the disparities observed in the adjacent
sub-aperture images, similar to stereo camera approaches.
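As a minimal illustration of this stereo principle (not the paper's implementation), depth can be triangulated from the disparity between two sub-aperture views using the standard relation z = f·b/d, where f is the focal length in pixels, b the baseline between the two virtual viewpoints, and d the disparity in pixels. The function name and parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Triangulate depth from the disparity between two sub-aperture views.

    disparity_px : per-pixel disparity (pixels) between adjacent views
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two virtual viewpoints (metres)
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full(disparity_px.shape, np.inf)  # zero disparity -> point at infinity
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Example: 2 px disparity, 500 px focal length, 1 mm baseline -> 0.25 m
z = disparity_to_depth(np.array([[2.0]]), focal_px=500.0, baseline_m=0.001)
```

Because the baseline between adjacent sub-aperture images of a lenslet camera is very small, even sub-pixel disparity errors translate into large depth errors, which is why accurate calibration and matching matter here.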
Camera calibration is a necessary prerequisite for accurate
depth estimation. A number of methods have been proposed.
For unfocused plenoptic cameras, Dansereau et al. (2013)
proposed a decoding, calibration, and rectification approach
for lenslet-based plenoptic cameras, in which a 15-parameter
camera model was presented for calibration and distortion
correction. Cho et al. (2013) calibrated a light field camera
by non-linear optimization and by estimating the rotation
of the micro-lens array in the frequency domain, based on
Dansereau et al. (2013). Bok et al. (2014) proposed a more
accurate calibration method for a micro-lens light field camera
based on line features extracted directly from raw images.
However, in these calibration algorithms, distortion correc-
tions are all based on the radial distortion model, which does
not fit the lenslet light field camera well. Calibration of
unfocused plenoptic cameras remains an important yet
challenging task for improving the precision of the subsequent
depth estimation. For focused plenoptic cameras, some methods
for metric calibration have been proposed (Heinze et al., 2016;
Zeller et al., 2016; Strobl and Lingenauber, 2016),
which are beyond the scope of this paper and will not be
detailed in the following sections.
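For context, the radial distortion model that the calibrations cited above rely on can be sketched as follows. This is the generic polynomial radial model, not the 14-parameter model proposed in this paper; the function name, the two-coefficient truncation, and all values are illustrative assumptions:

```python
import numpy as np

def apply_radial_distortion(x, y, k1, k2, cx=0.0, cy=0.0):
    """Apply a polynomial radial distortion model to normalized image
    coordinates (x, y) about a distortion centre (cx, cy).

    k1, k2 are the first two radial coefficients; higher-order terms
    are truncated here for brevity.
    """
    xd, yd = x - cx, y - cy
    r2 = xd ** 2 + yd ** 2                  # squared radius from the centre
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2   # radial scaling of the ray
    return cx + xd * factor, cy + yd * factor
```

Because every sub-aperture image of a lenslet camera passes through both the main lens and a micro-lens, a single per-image radial term of this form cannot capture the view-dependent component of the distortion, which motivates the optimized model proposed in this paper.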
Recently, a number of depth estimation algorithms for
light field images have been proposed. Yu et al. (2013)
explored the 3D geometry of lines in a light field image and
derived a disparity map using line matching between sub-
aperture images. Wanner and Goldluecke (2013) proposed a
local depth estimation algorithm using a structure tensor to
compute local slopes in the epipolar plane image (EPI). Tao
et al. (2013) proposed a fusion approach that combined defocus
and correspondence cues to estimate scene depth using the
EPI, and the global smoothness of the depth map was refined
by Markov random fields. Tosic et al. (2014) proposed a depth
estimation algorithm by defining a description of EPI texture
and mapping this texture to scale-depth space. Sabater et al.
(2014) proposed a depth estimation algorithm based on block-
matching using the sub-aperture images without demosaicking.
Compared with the above algorithms, Jeon et al. (2015) and
Zhang et al. (2015) achieved sub-pixel shift estimation of
sub-aperture images using the phase shift theorem in the
Fourier domain to obtain an accurate disparity map. In
addition, Kim et al. (2013) estimated disparity maps using a
4D light field captured by a moving digital single-lens reflex
(DSLR) camera. Chen et al. (2014) introduced a bilateral
consistency metric on the surface camera to estimate stereo
matches in the light field image in the presence of occlusion.
However, the baselines of the light field images used in Kim
et al. (2013) and Chen et al. (2014) are much larger than the
baseline of a lenslet light field camera.
Changkun Yang is with the State Key Laboratory of Remote
Sensing Science, Institute of Remote Sensing and Digital
Earth, Chinese Academy of Sciences; and also the University
of Chinese Academy of Sciences, Beijing 100049, China.
Zhaoqin Liu, Kaichang Di, Yexin Wang, and Man Peng are
with the State Key Laboratory of Remote Sensing Science,
Institute of Remote Sensing and Digital Earth, Chinese
Academy of Sciences, No. 20A, Datun Road, Chaoyang
District, Beijing 100101, China.
Photogrammetric Engineering & Remote Sensing
Vol. 85, No. 3, March 2019, pp. 197–208.
0099-1112/18/197–208
© 2019 American Society for Photogrammetry
and Remote Sensing
doi: 10.14358/PERS.85.3.197