
Rectified Feature Matching for
Spherical Panoramic Images
T.Y. Chuang and N.H. Perng
Abstract
Spherical panoramic image processing has received renewed interest in the fields of photogrammetry and computer vision. The difficulty in spherical matching is largely due to the inevitable image distortions introduced by the equirectangular projection. In this paper, we present an effective strategy for tackling the distortion problem to improve the performance of spherical image matching. The effectiveness of the rectified matching is evaluated with simulated data and compared with state-of-the-art methods. In addition, experiments on matching between omnidirectional and planar images, establishing 2D-to-3D correspondence with lidar, and estimating the pose of a spherical image sequence are conducted. The results verify the utility of the proposed method, which provides stable and evenly distributed corresponding points and is suitable for integration with conventional techniques for further 3D exploitation of imagery.
Introduction
The image matching of conventional planar images has been
widely studied in the fields of photogrammetry and computer
vision. Assigning corresponding points from a stereo image
pair has become an indispensable ingredient in the 3D exploitation of imagery. This process is involved in an extensive
range of applications such as visual odometry, structure from
motion, image-based modelling, and so forth. A wide variety
of approaches for planar image matching can be found in the
literature (e.g., Mikolajczyk et al., 2005; Tuytelaars and Mikolajczyk, 2007). In contrast, techniques specifically designed for
panoramic cameras have not been widely studied yet. Since the metric documentation of architectural and cultural heritage artifacts with multiple spherical panoramic images has shown promising achievements, and following the progress in spherical imaging systems, the image processing of spherical vision has received renewed interest for intelligent control and engineering purposes due to its convenience in capturing omnidirectional scenes. Typically, spherical vision is acquired by a combination of a perspective camera and a hyperbolic mirror, or is constructed by stitching together partly overlapping images captured by common digital cameras and projecting them onto a virtual sphere (Tseng et al., 2016). More recently,
numerous point-and-shoot panoramic cameras, such as the RICOH THETA, INSTA360, and KeyMission 360, have been launched to provide a simple way of generating an omnidirectional image of a scene.
Techniques designed for planar images, such as the scale-invariant feature transform (SIFT) (Lowe, 1999), speeded up robust features (SURF) (Bay et al., 2006), oriented FAST and rotated BRIEF (ORB) (Rublee et al., 2011), and accelerated-KAZE (A-KAZE) (Alcantarilla et al., 2013), are often employed for spherical image processing. These methods are based on affine invariance
and largely depend on the appearance of local image regions
to form feature descriptors. However, problems arise because the equirectangular projection introduces non-linear distortions when transforming a sphere into an omnidirectional image. Thus, the descriptions of a corresponding point pair may be quite different in the two images, reducing the matching performance.
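As a point of reference for this distortion, the following sketch (Python with NumPy; the image layout, with longitude along columns and latitude along rows, is an assumption for illustration and not taken from this paper) shows the equirectangular pixel-to-sphere mapping and the latitude-dependent horizontal stretch it implies:

```python
import numpy as np

def pixel_to_sphere(u, v, width, height):
    """Map an equirectangular pixel (u, v) to longitude/latitude in radians."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi   # columns span [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi  # rows span [pi/2, -pi/2]
    return lon, lat

def horizontal_stretch(v, height):
    """Local horizontal stretch of the equirectangular map at image row v.

    A small scene patch imaged at latitude `lat` is stretched by roughly
    1 / cos(lat) along the row direction, so its appearance (and any local
    descriptor computed from it) changes strongly towards the poles.
    """
    _, lat = pixel_to_sphere(0.0, v, 1.0, height)
    return 1.0 / np.cos(lat)
```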
To tackle this problem, Nayar (1997) produces a perspective view from the information given by the omnidirectional image, as long as the mirror parameters used for generating the panorama are known. Ishiguro et al. (2003) simulate the motion of virtual perspective cameras based on several omnidirectional images. Svoboda and Pajdla (2001) use adaptive windows around detected Harris points to create normalized patches for matching. Mauthner et al. (2006) apply a virtual perspective camera plane to the region of interest for matching in omnidirectional images. However, these methods are
specifically designed for imaging mechanisms that combine a
curved mirror with a fisheye lens. On the other hand, Hansen et al. (2007) present a SIFT-based approach to match features in wide-angle images. In this method, they leverage the spherical Fourier transform to generate a set of scale-space images for feature extraction and compute feature descriptors within a fixed-size patch circling each feature point, based on the assumption that the image is locally perspective in a 3 × 3 pixel neighborhood.
Cruz-Mota et al. (2012) propose a dedicated interest point extractor for spherical image matching. Specifically, they introduce a spherical scale-space representation in the spherical Fourier domain, which plays the role of the Gaussian kernel in the SIFT algorithm, and they propose two feature descriptors for
spherical and planar images, respectively. Taira et al. (2015) propose an approach to improve feature matching in spherical panoramic images that uses an idea similar to that of this study. To ease the distortion of the equirectangular projection, they rotate the spherical coordinates of a panoramic image to synthesise a pre-defined number of rotated images and detect features in the less distorted area of each image.
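As an illustration of this rotation idea (not Taira et al.'s implementation), a panoramic image can be resampled under a 3D rotation by rotating the viewing direction of every output pixel back into the source frame; the sketch below assumes an equirectangular layout and uses nearest-neighbour sampling for brevity:

```python
import numpy as np

def rotate_equirectangular(image, R):
    """Resample an equirectangular image under a 3x3 rotation matrix R."""
    h, w = image.shape[:2]
    v, u = np.mgrid[0:h, 0:w]
    lon = (u + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / h * np.pi
    # Unit viewing direction of every output pixel.
    dirs = np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)
    src = dirs @ R                                   # rotate back into the source frame
    src_lon = np.arctan2(src[..., 1], src[..., 0])
    src_lat = np.arcsin(np.clip(src[..., 2], -1.0, 1.0))
    su = ((src_lon + np.pi) / (2.0 * np.pi) * w).astype(int) % w
    sv = ((np.pi / 2.0 - src_lat) / np.pi * h).astype(int).clip(0, h - 1)
    return image[sv, su]                             # nearest-neighbour lookup
```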
In this paper, we contribute an effective and efficient strategy that tackles image distortion to improve spherical image matching. First, we leverage the SURF detector to determine feature locations in an omnidirectional image, and we convert the omnidirectional image into a sphere in the spherical coordinate system. Then, small regions around each feature point are cropped from the sphere. Pixels within a cropped region are mapped onto the tangent plane through the feature point, forming an image patch with an approximately perspective view from which a more reliable SURF description can be derived, as sketched below. Finally, the resulting descriptions are combined with the feature locations traced back to the original panoramic images to acquire complete SURF descriptions of the panoramic image, and the features are matched by the ratio test proposed
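The tangent-plane rectification step can be sketched as follows. This is a minimal illustration in Python/NumPy, where the patch size, field of view, and nearest-neighbour sampling are assumptions for the sketch rather than the exact implementation used in this paper: it crops a neighbourhood around a detected feature and maps it onto the tangent plane touching the sphere at the feature direction via the inverse gnomonic projection.

```python
import numpy as np

def rectify_patch(pano, feat_uv, patch_size=64, fov_deg=20.0):
    """Project a feature's neighbourhood onto its tangent plane.

    `pano` is an equirectangular panorama and `feat_uv` the (column, row)
    of a detected feature.  The result is a patch_size x patch_size image
    sampled from the tangent plane at the feature direction, i.e. a locally
    perspective view on which a descriptor such as SURF can be computed.
    """
    h, w = pano.shape[:2]
    u0, v0 = feat_uv
    lon0 = (u0 + 0.5) / w * 2.0 * np.pi - np.pi
    lat0 = np.pi / 2.0 - (v0 + 0.5) / h * np.pi

    # Regular grid on the tangent plane covering fov_deg around the feature.
    half = np.tan(np.radians(fov_deg) / 2.0)
    xs = np.linspace(-half, half, patch_size)
    px, py = np.meshgrid(xs, xs)

    # Inverse gnomonic projection: plane coordinates -> longitude/latitude.
    rho = np.hypot(px, py)
    c = np.arctan(rho)
    rho = np.where(rho == 0.0, 1e-12, rho)           # avoid division by zero at the centre
    lat = np.arcsin(np.clip(np.cos(c) * np.sin(lat0)
                            + py * np.sin(c) * np.cos(lat0) / rho, -1.0, 1.0))
    lon = lon0 + np.arctan2(px * np.sin(c),
                            rho * np.cos(lat0) * np.cos(c) - py * np.sin(lat0) * np.sin(c))

    # Sample the panorama (nearest neighbour keeps the sketch short).
    su = ((lon + np.pi) / (2.0 * np.pi) * w).astype(int) % w
    sv = ((np.pi / 2.0 - lat) / np.pi * h).astype(int).clip(0, h - 1)
    return pano[sv, su]
```

A descriptor computed on such a patch, paired with the feature's original location in the panorama, then feeds the ratio-test matching described above.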
T.Y. Chuang is with the Department of Civil Engineering, National Taiwan University, Taipei, Taiwan.
N.H. Perng is with Cheng-Yu Tech. Ltd, Taipei, Taiwan.