
Morphological House Extraction from Digital
Surface Model and Imagery of High-Density
Residential Areas
Yan Li, Lin Zhu, Kikuo Tachibana, Hideki Shimamura, and Manchun Li
Abstract
High-density residential area modeling is extremely difficult because many houses stand close to, or even touch, each other, which usually causes under-segmentation errors. Our method solves this issue through scale detection and dome reshaping. A modified granulometry using opening-by-reconstruction instead of opening is proposed to detect the principal scales of the buildings. A morphological filtering algorithm at the detected continuous scales is developed to decompose a house into reshaped slices, which are then reconstructed as a dome. The domes thus remain separate even where houses are joined, and they are used to extract the markers of the houses. Finally, marker-based processes are developed to segment and model the houses; among them, the multispectral image is used to detect the trees in the scene. Compared with other extraction methods, our technique decreases the fragment rate from 7.4 percent to 0.9 percent and the mixing rate from 14.8 percent to 0.9 percent.
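As a rough illustration of the scale-detection step summarized above, the following Python sketch computes a granulometry in which the plain opening is replaced by opening-by-reconstruction, as described in the abstract. It is a minimal sketch under our own assumptions (a 2D raster DSM, NumPy and scikit-image, a disk-shaped structuring element, and an arbitrary radius range), not the authors' implementation.

import numpy as np
from skimage.morphology import disk, erosion, reconstruction

def granulometry_by_reconstruction(dsm, max_radius=20):
    """Pattern spectrum of `dsm` for disk radii 1..max_radius (illustrative)."""
    volumes = [dsm.sum()]
    for r in range(1, max_radius + 1):
        # Opening-by-reconstruction: erode, then reconstruct by dilation under
        # the original surface, so surviving structures keep their exact shape.
        eroded = erosion(dsm, disk(r))
        opened = reconstruction(eroded, dsm, method='dilation')
        volumes.append(opened.sum())
    # Volume removed at each scale step; peaks mark the principal scales.
    return -np.diff(np.asarray(volumes))

# Usage (hypothetical): the largest peaks indicate the principal building scales.
# spectrum = granulometry_by_reconstruction(dsm)
# principal_radii = np.argsort(spectrum)[::-1][:2] + 1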
Introduction
The aim of this research is to provide an automatic method for building cartography in urban areas using remotely sensed data. The object of study is the high-density housing area, taking Japanese cities as an example. In this paper, we propose a Mathematical Morphology (MM)-based building extraction method. There are many low-rise, high-density housing areas in the suburbs of metropolises all over the world, and extracting and modeling this kind of housing is more challenging than for low-density housing. The houses may be so close that they appear to be attached in less accurate DSMs. As a result, it is difficult to isolate one house from the others, and even good methods may yield unexpected results: two or more houses are often detected as one. In essence, this is both a problem of the detection algorithm and a problem of scale, and we address it using mathematical morphology. Compared with other morphology-based segmentation techniques, our method greatly reduces the fragment rate and the mixing rate in dense residential areas.
Related Work on Building Extraction
A city contains tens of thousands of buildings, which makes manual modeling labor intensive. Automatic modeling using imagery or DSM has been studied for many years, but it is still not sufficiently developed for practical applications. Thus, semi-automatic building modeling has been widely applied (Ameri et al., 2000; Förstner et al., 1997; Wang et al., 2011; Sampath et al., 2010).
Existing automatic 2D building extraction techniques can be categorized into three main types; our method falls into the third category. The first category primarily extracts buildings using image segmentation and pattern recognition on single or multiple images (Katartzis et al., 2008; Lari et al., 2007; Fazan et al., 2010; Hao et al., 2010; Lee et al., 2003; Oude Elberink et al., 2011). Although promising results have been shown, the imagery-only approach does not generally perform well in densely built-up areas, partly due to shadows, occlusions, and poor contrast (Awrangjeb et al., 2014). Pesaresi et al. held that many methods, such as the watershed-plus-marker segmentation technique, were not applicable to textured or very complex scenes and often led to unstable results (Pesaresi et al., 2001).
The second category of methods uses DSM or lidar point cloud data (Cheng et al., 2011; Awrangjeb et al., 2012; Pfeifer et al., 2007) to determine geometric characteristics such as size, height, and gradient. One kind of method uses the normalized DSM (nDSM) to extract off-terrain grids based on a height threshold and then distinguishes buildings from trees with a classification method (Awrangjeb et al., 2012; Hug et al., 1997; Oude Elberink et al., 2000). Another kind applies segmentation followed by classification to distinguish buildings from trees; the segmentation can be bottom-up (Matikainen et al., 2001) or region growing (Forlani et al., 2001). Both kinds use the height signal as the principal identifier, which offers an improved level of automation compared to image-only methods (Rottensteiner et al., 2003; Zhang et al., 2013).

The third category of methods integrates aerial imagery and DSM/lidar data to exploit the complementary information from both data sources (Gerke, 2009; Awrangjeb et al., 2013). The abundant spectral information and high-precision boundaries provided by the imagery improve the accuracy of recognizing and locating the buildings. Using a DSM derived from stereo images instead of lidar point clouds has the advantage of synchronism. However, this kind of method still faces the difficulties that Awrangjeb et al. (2014) indicated: first, inaccurate co-registration of imagery and height data causes horizontal errors in the footprints; second, the segmentation result of the high-resolution image may be too fragmented, making it difficult to group segments into footprints.
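For context, the nDSM height-thresholding step mentioned for the second category can be sketched as follows. This is an assumed, simplified workflow rather than the procedure of any cited paper; the 2.5 m height threshold and the NDVI-based tree test are illustrative choices.

import numpy as np

def extract_building_mask(dsm, dtm, nir, red,
                          height_threshold=2.5, ndvi_threshold=0.3):
    """Boolean building mask from co-registered rasters (illustrative)."""
    ndsm = dsm - dtm                          # normalized DSM: object heights
    off_terrain = ndsm > height_threshold     # candidate building/tree pixels
    nir = nir.astype(float)
    red = red.astype(float)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    vegetation = ndvi > ndvi_threshold        # crude vegetation (tree) detector
    return off_terrain & ~vegetation          # off-terrain and not vegetation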
Related Work on MM for Building Extraction
MM has been demonstrated to be an effective tool for processing DSM data, particularly when extracting objects of given shapes (Valero et al., 2010; Sagar et al., 2000; Soille et al.,
Yan Li and Manchun Li are with the Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, International Institute for Earth System Science, Nanjing University, 210046 Nanjing, Jiangsu.

Lin Zhu, Kikuo Tachibana, and Hideki Shimamura are with the PASCO Corporation Japan, 1-1-2 Higashiyama, Meguro-ku, Tokyo 153-0043, Japan.
Photogrammetric Engineering & Remote Sensing
Vol. 82, No. 1, January 2016, pp. 21–29.
0099-1112/16/21–29
© 2015 American Society for Photogrammetry
and Remote Sensing
doi: 10.14358/PERS.83.1.21