PE&RS February 2016 - page 150

regions are superior to pixels in many aspects, regions remain limited in others. For example, image edges, which carry critical spatial information, are not considered during common OBIA approaches. Image edge lines generally denote object boundaries or borders and exhibit strong semantic connotations. However, region boundaries obtained by image segmentation do not necessarily match image edge lines in quantity and location. Thus, the use of image edges in OBIA should be enhanced to improve the performance of this technique.
In general, man-made objects are important targets when
extracting information from HSR images. Compared with
natural objects, man-made objects frequently have distinctive
shapes, e.g., straight line-shaped boundaries. Thus, straight
edge lines are used in this study. The information integration
of edges and regions is not a new concept in the field of image
processing. Several studies have combined these two ele-
ments to improve segmentation. However, the current study
presents a distinctive scheme of “region and line integration”
for OBIA. To improve performance, regions and lines are used collaboratively throughout the entire technical chain of OBIA,
i.e., from low-level image processing (segmentation) and
feature extraction to high-level image analysis (classification
and recognition). Under this framework, several new analysis
techniques for object shapes and relationships have been de-
signed. In addition, we have designed a scheme for road net-
work extraction from HSR images to validate the framework.
In our experiments, these techniques have exhibited superiority over common OBIA approaches, which demonstrates the practical value of the proposed framework.
The rest of this paper is organized as follows. The next Section presents the proposed technical model, including region and line primitive extraction, feature calculation, and relationship modeling. A case study of the framework, the proposed road extraction scheme, is then discussed, and the framework and methods are validated through several experiments. The final Section provides a summary of the study.
Methods
In a previous study (Wang et al., 2015), we proposed a novel IPSL-neighborhood model based on region and line relationship modeling, which further refined HBC-SEG and reduced its over-segmentation errors. In the current study, region and line relationship modeling is systematically extended and improved. Several new concepts, indices, and operators are derived, which facilitate subsequent OBIA steps, including feature extraction and classification. Thus, the extended technical model is called the region-line primitive association framework (RLPAF) for OBIA. We call regions and lines "primitives" because both are utilized as the basic analysis units for subsequent image analyses. We then apply RLPAF to road network extraction from HSR images to validate the ideas and techniques.
Technical Framework of RLPAF
The region-line primitive association framework (RLPAF) is presented in Figure 1. First, region primitives are obtained from HBC-SEG, which also produces image gradients (Wang et al., 2015). The gradient map is regrouped into line support regions through Burns's phase-grouping method (Burns et al., 1986), and straight lines are then detected. Multiple region and line features, including the spectra and shapes of the regions as well as the lengths and directions of the lines, are calculated. Region and line topologies, as well as their orientation relationships, are then calculated to build the association model. Several kinds of OBIA tasks, including thematic information extraction, image classification, and change detection, can be conducted by using the aforementioned two kinds of primitives. In the proposed framework, regions and lines are highly integrated across the entire OBIA process through image segmentation, feature extraction, and high-level image analysis.
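As a toy illustration of the topology step, the sketch below (our own simplification, not the paper's algorithm) checks which region primitives a rasterized straight line falls inside or runs along, using a segmentation label map; the function name and the 8-neighborhood scan are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def line_region_topology(labels, line_pixels):
    """Count how often each region label appears in the 8-neighborhood of a
    line's pixels. A line whose hits cover one region lies inside it; a line
    whose hits cover two regions runs along their shared border."""
    h, w = labels.shape
    hits = Counter()
    for r, c in line_pixels:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    hits[int(labels[rr, cc])] += 1
    return hits

# Two regions split down the middle of a 4x4 label map.
labels = np.ones((4, 4), dtype=int)
labels[:, 2:] = 2
border_line = [(r, 1) for r in range(4)]    # runs along the region border
interior_line = [(r, 0) for r in range(4)]  # stays inside region 1
```

Here the border line touches both regions, whereas the interior line touches only region 1, which distinguishes lines that coincide with region boundaries from those lying within a region.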
Region and Line Primitive Extraction
In this technique, the Canny edge detection method (Canny, 1986) initially extracts image edges. The edges are then embedded into watershed segmentation (Vincent and Soille, 1991) for initial image segmentation. The initial sub-regions (i.e., the bases of subsequent merging) are obtained after edge allocation. Subsequently, edge-constrained merging iteratively combines the sub-regions until all merging costs exceed a maximum threshold, which produces the initial regions. Non-constrained merging controlled by a very small threshold then converts the initial regions into final regions. First-stage merging allows regions to grow but is limited by image edges. In the second stage, trivial regions are removed by merging them with one another or into large regions. HBC-SEG exhibits good segmentation accuracy in terms of both over- and under-segmentation. The region boundaries obtained using this method are highly consistent with the actual boundaries (edges) of spatial objects, which facilitates the modeling of region and line relationships.
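The two-stage merging described above can be sketched on a region adjacency graph. This is a minimal illustration under our own assumptions: each region carries a scalar mean and a pixel count, the merging cost is the absolute mean difference, and `blocked` marks region pairs separated by an image edge; HBC-SEG's actual cost function and data structures differ (Wang et al., 2015).

```python
def merge_pair(regions, adjacency, blocked, keep, drop):
    """Merge region `drop` into `keep`: area-weighted mean, relabeled graph."""
    mk, nk = regions[keep]
    md, nd = regions[drop]
    regions[keep] = ((mk * nk + md * nd) / (nk + nd), nk + nd)
    del regions[drop]
    relabel = lambda p: frozenset(keep if r == drop else r for r in p)
    for s in (adjacency, blocked):
        updated = {relabel(p) for p in s}
        s.clear()
        s.update(p for p in updated if len(p) == 2)  # drop self-loops

def edge_constrained_merging(regions, adjacency, blocked, max_cost):
    """Stage 1: repeatedly merge the cheapest adjacent pair whose border is
    not blocked by an image edge, until all costs exceed max_cost."""
    while True:
        cands = [(abs(regions[a][0] - regions[b][0]), a, b)
                 for p in adjacency if p not in blocked
                 for a, b in (sorted(p),)]
        cands = [c for c in cands if c[0] <= max_cost]
        if not cands:
            return
        _, a, b = min(cands)
        merge_pair(regions, adjacency, blocked, a, b)

def remove_trivial_regions(regions, adjacency, blocked, min_size):
    """Stage 2: non-constrained merging -- each trivial region is absorbed
    by its spectrally closest neighbor, ignoring edge constraints."""
    changed = True
    while changed:
        changed = False
        for r in list(regions):
            if r not in regions or regions[r][1] >= min_size:
                continue
            nbrs = [next(iter(p - {r})) for p in adjacency if r in p]
            if not nbrs:
                continue
            best = min(nbrs, key=lambda n: abs(regions[n][0] - regions[r][0]))
            merge_pair(regions, adjacency, blocked, best, r)
            changed = True

# Four regions in a chain 1-2-3-4; an image edge blocks the 2-3 border.
regions = {1: (10.0, 100), 2: (12.0, 100), 3: (50.0, 100), 4: (11.0, 5)}
adjacency = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4)]}
blocked = {frozenset((2, 3))}
edge_constrained_merging(regions, adjacency, blocked, max_cost=5.0)
remove_trivial_regions(regions, adjacency, blocked, min_size=10)
```

In this run, stage 1 merges the spectrally similar pair 1 and 2 but is stopped at the edge between 2 and 3, while stage 2 absorbs the trivial region 4 into its neighbor regardless of cost, mirroring the growth-then-cleanup behavior described above.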
Based on the gradient map obtained using HBC-SEG, straight edge lines are detected using the phase-grouping method. This method is based on phase (gradient orientation) and differs from edge-based straight line detection methods. Pixels with the same phase are grouped into regions, and the center straight lines of the regions are obtained by least squares fitting. This method can extract so-called weak-contrast straight lines from images because phase, rather than gradient magnitude, is used for line detection. In principle, this method is fast and concise. Straight line features, including lengths, directions, and densities, are calculated for use in subsequent analyses.
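A minimal sketch of the phase-grouping idea, under simplifying assumptions (phase quantization only; the Burns method additionally splits each bin into connected line support regions and weights the fit by gradient magnitude): quantize gradient orientation into bins, then fit each group's center line by a total-least-squares (principal axis) fit.

```python
import numpy as np

def phase_bins(gray, n_bins=8, mag_thresh=1e-3):
    """Quantize gradient orientation (phase) into n_bins partitions.
    Pixels with near-zero gradient magnitude are labeled -1 (flat areas)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    phase = np.arctan2(gy, gx)  # in (-pi, pi]
    bins = np.floor((phase + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    bins[mag < mag_thresh] = -1
    return bins

def fit_center_line(coords):
    """Fit the center straight line of one pixel group by least squares:
    the centroid plus the principal axis (eigenvector of the largest
    eigenvalue of the coordinate covariance matrix)."""
    pts = np.asarray(coords, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    w, v = np.linalg.eigh(cov)
    return centroid, v[:, np.argmax(w)]
```

On a horizontal intensity ramp every pixel shares a single phase bin, and pixels lying along y = 2x yield a fitted direction vector with slope 2, illustrating how each line support region reduces to one center line.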
Figure 1. Technical framework.