Comparison of Two Panoramic Sensor Models
for Precise 3D Measurements
Shunping Ji, Yun Shi, Zhongchao Shi, Anmin Bao, Junli Li, Xiuxiao Yuan, Yulin Duan, and Ryosuke Shibasaki
Abstract
In this paper, the system errors produced by the most widely used ideal panoramic camera model for a close-range multi-camera rig are identified and analytically modeled according to a rigorous panoramic camera model, and a comprehensive comparison between the two models is given. First, the 3D localization errors of the ideal model are analyzed and shown to correlate with the object-image distance and the viewing angles. Second, the epipolar errors are analyzed and observed to vary with the rotation angles and the z-coordinates of the image points. Finally, tests are carried out on space resection, epipolar constraints, and bundle adjustment with the different sensor models. The outdoor tests with small object-image distances (several meters) show that the difference between the two models is very slight. In contrast, the indoor tests with larger object-image distances (more than 15 m) show that the rigorous model produces measurement accuracy about 2 cm better than the ideal model.
Introduction
In many research studies and applications in recent years (Li et al., 2004; Anguelov et al., 2010), the close-range panoramic camera has been employed in place of a traditional plane camera because it provides full panoramic information in a single image with a simple structure: one projection center and one projection sphere or cylinder. However, compared with a plane camera, larger geometric distortions exist in a panoramic camera, or even in a fish-eye camera with a much smaller field of view, which may result in poor imaging quality. From a manufacturing perspective, there are three main methods used to overcome the large distortions. One method employs a dioptric multi-camera rig system, which reduces the deformation by distributing it equally over several separate, fixed fish-eye lenses. Further image stitching is required to form a complete panoramic image, which leads to the main drawback of this structure: the projection radius must be fixed to obtain the best stitching effect. The Ladybug® system is an example of this case (Sato et al., 2004; Sato and Yokoya, 2010; Ladybug, 2013). A second method uses a linear-array-based camera, which obtains seamless panoramic images by rotating about a vertical turntable axis, such as the EYESCAN camera system (Schneider and Maas, 2006; Amiri and Gruen, 2010). This structure is not suitable for high-speed platforms, and static or low-speed platforms are preferred (Geyer and Kostas, 2001). The third type is the catadioptric system, a panoramic camera composed of several lenses and parabolic mirrors (Geyer and Kostas, 2001; Barreto and Araujo, 2005). This paper concentrates on the multi-camera rig system with spherical projection.
The basic projective geometry of a panoramic camera is still represented by the ideal pinhole model, which describes the collinearity condition that a 3D object point, its corresponding image point on the sphere, and the panoramic center lie on one line (see Figure 1a). Kaess and Dellaert (2010) used a multi-camera rig for simultaneous localization and mapping (SLAM) with the pinhole spherical sensor model. Paya et al. (2010) concentrated on the global description of each omni-directional image. Gutierrez et al. (2011) focused on the rotation and scale invariance of descriptor patches with a spherical camera model. Spherical perspective transformation functions and stereo-homographies based on the pinhole model are also covered by Mei et al. (2008). In Silpa-Anan and Hartley (2005), the fundamental matrix of the pinhole model is used as a geometric constraint between two views. The pinhole model for spherical imaging used in these articles (also referred to in this paper as the ideal panoramic camera model) is adopted under the assumption that the camera has a unique spherical center, as in Figure 1a. In fact, a multi-camera rig system does not contain an entire sphere but consists of several separate lenses with different projection centers and focal lengths (see Figure 1b). This internal structure may introduce additional system errors if the ideal panoramic camera model is applied.
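To make the geometric difference between the two models concrete, the following Python sketch triangulates a single object point from two panoramic stations, once under the ideal single-center assumption and once with the true per-lens projection centers. It is only an illustrative sketch, not the sensor model or software used in this paper: the 2 m baseline, the 4 cm lens offsets, and the object distances are hypothetical values, and the stitched panorama is assumed to record each ray direction as seen from the individual lens, while the ideal model treats that direction as emanating from the common panoramic center.

```python
import numpy as np

# Minimal sketch (not the authors' implementation) of the system error that
# arises when a multi-camera rig is treated as an ideal single-center
# panoramic camera.  All numeric values below are hypothetical.

def triangulate_midpoint(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment between the rays
    p1 + t1*d1 and p2 + t2*d2 (directions need not be unit vectors)."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

def demo(distance):
    """Triangulate one object point from two panoramic stations with the
    ideal (single-center) and the rigorous (per-lens) model."""
    X = np.array([0.0, distance, 1.0])           # object point
    centers = [np.array([0.0, 0.0, 0.0]),        # panoramic centers of the
               np.array([2.0, 0.0, 0.0])]        # two stations (2 m baseline)
    offsets = [np.array([0.04, 0.0, 0.0]),       # hypothetical 4 cm offsets of
               np.array([0.0, 0.0, 0.04])]       # the observing lens centers

    # The measured ray directions actually emanate from the lens centers.
    dirs = [X - (c + o) for c, o in zip(centers, offsets)]

    # Ideal model: same directions, but assumed to pass through the
    # panoramic centers.  Rigorous model: rays through the lens centers.
    ideal = triangulate_midpoint(centers[0], dirs[0], centers[1], dirs[1])
    rigorous = triangulate_midpoint(centers[0] + offsets[0], dirs[0],
                                    centers[1] + offsets[1], dirs[1])
    print(f"distance {distance:5.1f} m | ideal 3D error "
          f"{np.linalg.norm(ideal - X):.3f} m | rigorous "
          f"{np.linalg.norm(rigorous - X):.6f} m")

if __name__ == "__main__":
    for dist in (3.0, 15.0, 30.0):
        demo(dist)
```

Under these assumptions the rigorous rays intersect exactly at the object point, whereas the ideal-model intersection is displaced by an amount that depends on the lens offsets, the intersection geometry, and the object-image distance; this is the kind of system error analyzed in the remainder of the paper. The magnitudes printed here depend entirely on the hypothetical geometry and are not comparable to the accuracies reported in the tests of this paper.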
Shunping Ji is with the School of Remote Sensing and Information Engineering, Wuhan University, 129 Luoyu Road, Wuhan, 430079, China; and the Center for Spatial Information Science, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba, 277-8568, Japan.
Yun Shi is with the Key Laboratory of Agri-informatics, Ministry of Agriculture / Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, 12 Southern Street of Zhongguancun, Haidian, Beijing, 10008, China; and the Center for Spatial Information Science, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba, 277-8568, Japan (shiyun@caas.cn).
Zhongchao Shi is with the Department of Restoration Ecology and Built Environment, Faculty of Environmental Studies, Tokyo City University, 3-3-1 Ushikubo-nishi, Tuzuki-ku, Yokohama, Kanagawa, 224-8551, Japan.
Anmin Bao and Junli Li are with the Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, 818 Beijing South Road, Urumqi, 830011, China.
Xiuxiao Yuan is with the School of Remote Sensing and Information Engineering, Wuhan University, 129 Luoyu Road, Wuhan, 430079, China.
Yulin Duan and Ryosuke Shibasaki are with the Center for Spatial Information Science, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba, 277-8568, Japan.
Photogrammetric Engineering & Remote Sensing
Vol. 80, No. 3, March 2014, pp. 229–238.
0099-1112/14/8003–229
© 2014 American Society for Photogrammetry and Remote Sensing
doi: 10.14358/PERS.80.3.229