
Registration of Images To Lidar and GIS Data
Without Establishing Explicit Correspondences
Gabor Barsai, Alper Yilmaz, Sudhagar Nagarajan, and Panu Srestasathiern
Abstract
Recovering the camera orientation is a fundamental problem
in photogrammetry for precise 3D recovery, orthophoto
generation, and image registration. In this paper, we achieve
this goal by fusing the image information with information
extracted from different modalities, including lidar and GIS.
In contrast to other approaches, which require feature
correspondences, our approach exploits edges across the
modalities without the need to explicitly establish
correspondences. In the proposed approach, edges extracted
from the different modalities are not required to have
analytical forms. This flexibility is achieved by minimizing
a new cost function using a Bayesian approach, which takes as
its random variable the Euclidean distances between the
projected edges extracted from the other data source and the
edges extracted from the reference image. The proposed
formulation minimizes the overall distance between the sets
of edges iteratively, such that the end product yields the
correct camera parameters for the reference image as well as
matching features across the modalities. The initial solution
can be obtained from GPS/IMU data. The formulation is shown
to handle noise and missing observations in the edges
successfully. Point matching methods may fail for oblique
images, especially highly oblique images; our method
eliminates the requirement for exact point-to-point matching.
The feasibility of the method is demonstrated with nadir and
oblique images.
Introduction
Registration is the process of aligning two or more datasets
acquired for the same site in different coordinate systems
to a single coordinate system. Considering the ever-increasing
amount of multi-modal datasets with different geometric,
radiometric, temporal, and thematic resolutions, it is
important to register these datasets to a single coordinate
system (Sester et al., 1998), which in turn provides the
ability to exploit the advantages offered by each of the
different modalities (Schenk and Csathó, 2002). The
registration between such datasets is traditionally divided
into four steps, namely: feature extraction, matching,
transformation, and resampling (Zitová and Flusser, 2003).
In this paper, we discuss a nontraditional method for
registering aerial images to GIS (Geographical Information
Systems) and lidar (Light Detection And Ranging) data. This
work continues our earlier study of view-invariant shape
recognition using Fourier descriptors (Yilmaz and Barsai,
2008).
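The correspondence-free idea sketched above can be illustrated with a distance-transform (chamfer-style) cost over edge sets. The snippet below is a minimal sketch, not the paper's implementation: it assumes SciPy is available, and the function and variable names are illustrative. A distance transform of the reference image's edge map gives, at every pixel, the distance to the nearest image edge; averaging it at the locations of the projected GIS/lidar edges scores an alignment without matching individual edge points.

```python
# Correspondence-free edge alignment cost (sketch, not the paper's code).
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_alignment_cost(image_edges, projected_edge_pixels):
    """image_edges: boolean HxW array (True on detected image edges).
    projected_edge_pixels: (N, 2) integer array of (row, col) positions of
    GIS/lidar edges projected with the current camera parameters."""
    # Distance (in pixels) from every pixel to the nearest image edge.
    dist = distance_transform_edt(~image_edges)
    rows, cols = projected_edge_pixels[:, 0], projected_edge_pixels[:, 1]
    return dist[rows, cols].mean()

edges = np.zeros((5, 5), dtype=bool)
edges[2, :] = True                      # a horizontal image edge
on_edge = np.array([[2, 1], [2, 3]])    # projected edges lying on the edge
off_edge = np.array([[0, 1], [4, 3]])   # projected edges two pixels away
print(edge_alignment_cost(edges, on_edge))   # 0.0
print(edge_alignment_cost(edges, off_edge))  # 2.0
```

Minimizing such a cost over the camera parameters drives the projected edges onto the image edges without ever declaring which projected point corresponds to which image point.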
Registration of aerial images is an important task in
photogrammetry, as it is a prerequisite for 3D surface
recovery and orthophoto generation, as well as for
higher-level inference tasks such as object recognition. An
image is traditionally registered to a reference system using
a set of Ground Control Points (GCPs) and corresponding
pixels in the image. With the advancement of GPS/IMU (Global
Positioning System/Inertial Measurement Unit) technology,
exterior orientation parameters (EOPs) are obtained from the
instrument measurements and aerial images are registered on
the fly, a process also called direct orientation. A major
drawback of direct orientation is that the EOPs are computed
without considering the interior orientation parameters
(IOPs), which continually change due to environmental and
mechanical factors (Schenk, 1999). The EOPs derived from
direct orientation can, however, be used as initial
approximations for precise mapping applications.
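As a concrete illustration of how EOPs map ground coordinates into the image, the following is a minimal sketch of the collinearity projection. The omega-phi-kappa rotation convention and all names here are one common textbook choice, assumed for illustration rather than taken from the paper.

```python
# Collinearity projection (sketch): with exterior orientation (camera
# position X0, Y0, Z0 and rotation angles omega, phi, kappa) and a focal
# length f, a ground point (X, Y, Z) maps to image coordinates
#   x = -f * d1/d3,  y = -f * d2/d3,  where d = R @ (X-X0, Y-Y0, Z-Z0).
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = R_kappa @ R_phi @ R_omega (one common photogrammetric convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_omega = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    R_phi   = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    R_kappa = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return R_kappa @ R_phi @ R_omega

def project(point, camera_xyz, angles, f):
    """Image coordinates (x, y) of a 3D ground point."""
    d = rotation_matrix(*angles) @ (np.asarray(point) - np.asarray(camera_xyz))
    return -f * d[0] / d[2], -f * d[1] / d[2]

# Nadir camera 1000 m above the origin, ground point 100 m east, f = 0.15 m:
x, y = project([100.0, 0.0, 0.0], [0.0, 0.0, 1000.0], (0, 0, 0), 0.15)
print(x, y)  # 0.015 0.0  (i.e., 15 mm off-center in x)
```

GPS/IMU-derived EOPs supply the camera position and angles directly; refining them against other data sources is then a matter of adjusting these parameters until projected features agree with the image.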
For indirect orientation, it is possible to use information
extracted from other datasets such as lidar and GIS. While
very accurate GIS and lidar datasets are publicly available
from the USGS (United States Geological Survey) and state
agencies for any given area, they are usually not used for
registration because establishing correspondences between an
image and GIS/lidar data is a challenging problem and an
ongoing research topic (Heipke, 1997; Shan and Toth, 2009;
Sourimant et al., 2011; Bartie et al., 2011). In recent
years, researchers have exploited features extracted from
lidar data, such as lines and planes, to register it with
images (Habib et al., 2007; Habib et al., 2005; Jaw and Wu,
2006; Schenk and Csathó, 2002; Nagarajan and Schenk, 2016).
Extracting linear and planar features from lidar data,
however, is not always straightforward due to surface
patterns, noise, and lidar point density. In contrast to the
registration of lidar with images, the registration of GIS
data with images has received less attention, with only a
few attempts such as (Sester et al., 1998; Shan, 2000;
Chawathe, 2007). GIS features and the features extracted
from different sensor data may not be the same due to
different samplings of the real world (Schenk and Csathó,
2007). The number of vertices that constitute a shape can
also make features look different. This paper demonstrates
a novel method to register an image with GIS and lidar data
without establishing correspondences, by eliminating the
feature matching step. Image registration methods can broadly
be categorized based on (a) the dataset representation, (b)
the model for establishing correspondences, and (c) the
choice of mathematical models (Nagarajan, 2010). In the case
when
Gabor Barsai is with Ferris State University, School of
Engineering & Computing Technology, 1009 Campus Drive, JOH
408, Big Rapids, MI 49307.
Alper Yilmaz is with The Ohio State University,
Photogrammetric Computer Vision Laboratory, Bolz Hall,
2036 Neil Ave. Mall, Columbus, OH, 43210.
Sudhagar Nagarajan is with Florida Atlantic University, Civil,
Environmental and Geomatics Engineering, Building EG-36,
Room 222, 777 Blades Road, Boca Raton, FL, 33431.
Panu Srestasathiern is with the Geo-Informatics and Space
Technology Development Agency, 120 The Government Complex,
Building B, 6th and 7th Floor, Chaeng Wattana Road, Lak Si,
Bangkok 10210, Thailand.
Photogrammetric Engineering & Remote Sensing
Vol. 83, No. 10, October 2017, pp. 705–716.
0099-1112/17/705–716
© 2017 American Society for Photogrammetry
and Remote Sensing
doi: 10.14358/PERS.83.10.705