On the Fusion of Lidar and Aerial Color Imagery to Detect Urban Vegetation and Buildings
Madhurima Bandyopadhyay, Jan A.N. van Aardt, Kerry Cawse-Nicholson, and Emmett Ientilucci
Abstract
Three-dimensional (3D) data from light detection and ranging (lidar) sensors have proven advantageous in the remote sensing domain for the characterization of object structure and dimensions. Fusion-based approaches that combine lidar and aerial imagery are also becoming popular. In this study, aerial color (RGB) imagery and co-registered airborne discrete lidar data were used to separate vegetation and buildings from other urban classes/cover-types, as a precursory step towards the assessment of urban forest biomass. Both spectral and structural features, such as object height, the distribution of surface normals from the lidar, and a novel vegetation metric derived from the combined lidar and RGB imagery, referred to as the lidar-infused vegetation index (LDVI), were used in this classification method. The proposed algorithm was tested on different cityscape regions to verify its robustness. Results showed a good separation of buildings and vegetation from other urban classes, with an average overall classification accuracy of 92 percent and a kappa statistic of 0.85. These results bode well for the operational fusion of lidar and RGB imagery, often flown on the same platform, towards improved characterization of the urban forest and built environments.
Introduction
Accurate information on land cover change, including change in classes related to natural resources, crop yield, and urban expansion, is important to a variety of managerial and political endeavors. Remote sensing, a synoptic tool often used to assess manmade and natural features, has been applied extensively in land cover classification and Earth-resource studies (Schott, 2007). As a relatively new remote sensing modality, light detection and ranging (lidar) has gained traction in the characterization of three-dimensional (3D) structure and improved land cover classification (Charaniya et al., 2004; Brennan and Webster, 2006; Meng et al., 2012; Matkan et al., 2014; Zarea and Mohammadzadeh, 2015). Dense lidar point clouds are used to produce accurate, high-resolution digital elevation models (DEMs) of the Earth's surface (Gillin et al., 2015). This modality and its output are therefore especially useful in the context of 3D urban modeling and the mapping of urban carbon stocks (vegetation biomass).
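Although the gridding step itself is not the subject of this study, the basic operation behind such lidar-derived DEMs can be sketched in a few lines. This is a minimal illustration, assuming ground-classified returns already held in NumPy arrays; the cell size and the minimum-elevation binning rule are our own illustrative choices, not those of the cited studies.

```python
import numpy as np

def points_to_dem(x, y, z, cell_size=1.0):
    """Grid ground-classified lidar returns into a simple DEM.

    Each cell receives the minimum elevation of the points that
    fall inside it; empty cells are left as NaN.
    """
    # Map each point to a row/column index on a regular grid.
    cols = ((x - x.min()) / cell_size).astype(int)
    rows = ((y - y.min()) / cell_size).astype(int)

    dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    # Keep the lowest return per cell as the terrain estimate.
    for r, c, elev in zip(rows, cols, z):
        if np.isnan(dem[r, c]) or elev < dem[r, c]:
            dem[r, c] = elev
    return dem

# Example: 1,000 synthetic ground points gridded at 1 m resolution.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 50, 1000), rng.uniform(0, 50, 1000)
z = 100 + 0.1 * x + rng.normal(0, 0.05, 1000)
print(points_to_dem(x, y, z, cell_size=1.0).shape)
```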
Strong emphasis has recently been placed on the modeling of buildings and urban forests. Building models are useful in applications such as urban planning, facility management, and emergency response, whereas information about the dynamics of vegetation is important to the preservation of natural resources, environmental assessment, and biomass and carbon estimation, among other applications. As such, lidar has been used successfully to detect buildings and vegetation in urban scenes (e.g., Kwak et al., 2007; Niemeyer et al., 2014). Building detection mainly has been based on segmentation of point clouds (e.g., Vosselman et al., 2004; Schnabel et al., 2007; Ahn, 2004). Apart from building detection, many studies focus on vegetation detection, tree structure measurements, and modeling of the forest canopy using lidar data (Chen and Zakhor, 2009; Liu et al., 2013). Such object detection studies have been extended to urban forest biomass estimation, based on the proven ability of lidar as input to non-urban biomass estimation, i.e., in natural or managed forest environments (Hyyppä et al., 2012; Khosravipour et al., 2014).
Lidar has a distinct advantage when it comes to the structural characterization of objects, but it is difficult to extract calibrated spectral or textural information from it. An object's spectral signature and texture, on the other hand, can be obtained from imagery. Thus, by combining these two complementary datasets, a more accurate classification result theoretically can be achieved. A select number of past studies have evaluated the use of data fusion, i.e., including multiple remote sensing modalities in the assessment process (e.g., Haala et al., 1998; Chen et al., 2009). Examples include Rottensteiner et al. (2005), who used Dempster-Shafer probabilistic theory (DST) to fuse lidar data and multispectral imagery in order to classify land cover into building, tree, grassland, and bare soil classes. They reported that 95 percent of buildings (>50 m²) were detected by their algorithm and that, among them, 89 percent of detected buildings were correct.
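The fusion in such DST-based approaches rests on Dempster's rule of combination, which merges mass functions from independent evidence sources and renormalizes by the conflicting mass. The toy sketch below illustrates the rule only; the class frame and mass values are hypothetical and not drawn from Rottensteiner et al.'s paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset
    hypotheses to masses) with Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    # Renormalize by the non-conflicting mass (1 - K).
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Hypothetical evidence for one pixel: lidar height favors
# "building", the spectral source favors "tree"; the full frame
# (theta) carries each source's remaining ignorance.
theta = frozenset({"building", "tree", "grass", "soil"})
m_lidar = {frozenset({"building"}): 0.6, theta: 0.4}
m_spectral = {frozenset({"tree"}): 0.5, theta: 0.5}
print(dempster_combine(m_lidar, m_spectral))
```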
Chen et al. (2009), in a similar study, used QuickBird multispectral imagery with lidar data and derived the normalized difference vegetation index (NDVI) and the normalized difference water index (NDWI) to classify different urban objects, e.g., water, vegetation, and buildings. They demonstrated that the inclusion of lidar height increased the classification accuracy from 85.25 percent to 92.09 percent for shrubs and from 82.86 percent to 97.06 percent for grasslands.
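Both indices follow the standard normalized-difference band-ratio form. The minimal sketch below computes them from toy reflectance arrays; the band values are invented, and we assume McFeeters' green/NIR formulation of NDWI, which Chen et al. may or may not have used.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index: high for vegetation.
    return (nir - red) / (nir + red + 1e-9)

def ndwi(green, nir):
    # McFeeters' normalized difference water index: high for water.
    return (green - nir) / (green + nir + 1e-9)

# Toy 2x2 reflectance bands (vegetation-like vs. water-like pixels).
green = np.array([[0.10, 0.30], [0.12, 0.28]])
red = np.array([[0.08, 0.05], [0.09, 0.06]])
nir = np.array([[0.45, 0.02], [0.50, 0.03]])
print(np.round(ndvi(nir, red), 2))
print(np.round(ndwi(green, nir), 2))
```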
Qin and Fang (2014) applied a hierarchical technique to high-resolution multispectral aerial images and a matched digital surface model (DSM) from lidar to extract buildings. They reported 94 percent building detection at 87 percent accuracy. However, a crucial step in any fusion-based study is the accurate geometric registration of the remote sensing datasets, i.e., improper registration typically leads to erroneous results.
Madhurima Bandyopadhyay is with the Digital Imaging and Remote Sensing Laboratory, Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623.
Jan A.N. van Aardt and Emmett Ientilucci are with the Digital Imaging and Remote Sensing Laboratory, Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623.
Kerry Cawse-Nicholson is with the Digital Imaging and
Remote Sensing Laboratory, Chester F. Carlson Center for
Imaging Science, Rochester Institute of Technology, 54 Lomb
Memorial Drive, Rochester, NY 14623, USA.
Photogrammetric Engineering & Remote Sensing
Vol. 83, No. 2, February 2017, pp. 123–136.
0099-1112/17/123–136
© 2017 American Society for Photogrammetry
and Remote Sensing
doi: 10.14358/PERS.83.2.123