PE&RS February 2016 - page 123

the hue, saturation, and intensity (HSI) color space and then process them by using the intensity vector.
Regional Adaptive Marker-Based Watershed Segmentation
Regional adaptive marker-based watershed segmentation is an
improvement on base watershed segmentation. The watershed
segmentation algorithm is based on visualizing an image in
three dimensions: two spatial coordinates versus intensity. It
is usually applied to the gradient image. Suppose that a hole
is punched in each regional minimum and that the entire
topography is flooded from below by letting water rise through
the holes at a uniform rate. When the rising water levels in
distinct catchment basins are about to merge, a dam is built to
prevent the merging. The flooding will eventually reach a stage
when only the tops of the dams are visible above the water
line. These dam boundaries correspond to the dividing lines of
the watersheds. Therefore, they are the (connected) boundaries
extracted by a watershed segmentation algorithm (Meyer and
Beucher, 1990; Vincent and Soille, 1991; Meyer, 1992; Gonzalez and Woods, 2002; Li et al., 2010; Zhang et al., 2010).
However, direct application of the above watershed seg-
mentation algorithm generally leads to over-segmentation
due to noise and other local irregularities of the gradient. A
solution is to control over-segmentation based on the concept
of markers; in other words, marker-based watershed segmentation (Gonzalez and Woods, 2002). Marker-based watershed segmentation is a two-stage process: the extraction of a marker image and the labeling of pixels (flooding). There
are various land cover objects with different texture granula-
tions in a high spatial resolution remote sensing image. The
gradient magnitudes of the pixels are intricately distributed.
The gradient magnitudes within a homogeneous object are commonly lower than those of the boundary pixels. However, the gradient magnitudes of pixels in objects with complex texture may be comparable to or even higher than those of the boundary pixels. A single binarization threshold therefore fails to extract a correct marker image (Li et al., 2010). Consequently, when applying the method to high spatial resolution remote sensing image segmentation, noise or texture in the image is usually labeled as pseudo-local minimum regions, resulting in over-segmentation.
To reduce over-segmentation, in this paper we adopt a
regional adaptive marker-based watershed segmentation
algorithm similar to that proposed by Li et al. (2010), but with
minor modifications. First, a regional adaptive marker extraction method is adopted to obtain the marker image by using a threshold image (TI). Let GI represent the gradient image; then the marker image is defined as a binary image (BW) that is the result of the logical operation: BW = GI < TI. The low-pass component of the gradient image (LCG) calculated by the Butterworth low-pass filter corresponds to the main contents of the gradient image (GI). Therefore, the LCG can be used to set the TI. Multiplying the LCG by an appropriate scale factor (T, in [0, 1]) is useful for the marker extraction of objects with textures. It is suggested that T be set in the range between 0.6 and 0.7. However, for a homogeneous object, the scaled LCG may be too low and lead to over-segmentation. For such objects, an empirical statistic threshold (EST) is used as the threshold. The EST value for a certain image is defined as the α fractile of the gradient level probability distribution, where α ∈ [0, 1]; the value of α is estimated adaptively with the Otsu method (Otsu, 1975; Sahoo et al., 1988). For each pixel in TI, the corresponding scaled LCG value is compared with the EST value, and the maximum of the two is taken as the pixel value. Then, markers in the BW with insufficient spatial support are rejected because these markers are usually caused by noise. An appropriate area threshold (A), equal to the area of the smallest discernible object, is set to remove these markers. Second, the image labeling scheme in Meyer's algorithm is implemented by using a one-queue and one-stack data structure (Li et al., 2010).
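A minimal sketch of this marker-extraction step, assuming a frequency-domain Butterworth low-pass filter; the function names, the cutoff `d0`, and the filter `order` are illustrative choices, and `alpha` is taken as given (in the paper it is estimated with the Otsu method):

```python
import numpy as np
from scipy import ndimage as ndi

def butterworth_lowpass(img, d0=0.1, order=2):
    """Low-pass component (LCG) of `img` via a Butterworth filter applied in
    the frequency domain; d0 is the cutoff in normalized frequency units."""
    u = np.fft.fftfreq(img.shape[0])[:, None]
    v = np.fft.fftfreq(img.shape[1])[None, :]
    d = np.sqrt(u ** 2 + v ** 2)
    h = 1.0 / (1.0 + (d / d0) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

def marker_image(gi, t=0.65, alpha=0.5, min_area=4):
    """Regional adaptive marker extraction: BW = GI < TI, where each pixel of
    TI is the maximum of the scaled low-pass gradient (t * LCG) and the
    empirical statistic threshold (EST, the alpha-fractile of GI)."""
    lcg = butterworth_lowpass(gi)
    est = np.quantile(gi, alpha)      # EST: alpha-fractile of gradient levels
    ti = np.maximum(t * lcg, est)     # regional adaptive threshold image
    bw = gi < ti
    # Reject markers with insufficient spatial support (area < min_area),
    # which are usually caused by noise.
    labels, n = ndi.label(bw)
    areas = ndi.sum(bw, labels, index=np.arange(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(areas >= min_area))
```

For a synthetic gradient image that is flat except for one low-gradient square, the square survives as a marker, while an isolated single-pixel minimum is rejected by the area threshold.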
Seamline Determination at the Object Level
In this section, we propose a differential expression method
based on objects. The overlapping area of the right image is
overlaid with the objects’ boundaries of the overlapping area
of the left image obtained in the previous step. Overlaying the overlapping area of the right image with the objects' boundaries of the left image not only establishes a one-to-one correspondence between the left image and the right image, but also provides a method to estimate the difference based on objects. Because of relief displacement, the objects that cover obvious elevated features, e.g., buildings and high bridges, in the overlapping area of the left image may not cover the same terrestrial features in the overlapping area of the right image. When calculating the difference of the objects, the objects belonging to such obvious elevated features will have low correlation coefficients. The objects with small relief displacements, such as roads, grass, squares, and rivers, will have high correlation coefficients.
After obtaining a one-to-one correspondence of the objects, the correlation coefficient is used to estimate the degree of difference at the object level. The correlation coefficient ρ(k) for the kth object is computed with Equations 1 and 2, where Object_k is the set of pixels belonging to the kth object; f(i,j) and g(i,j) are the pixel values at coordinates (i,j) in the left and right image, respectively; i and j are the row and column coordinates; f̄_k and ḡ_k are the averages of the N pixel values of the kth object in the left and right image. To improve the efficiency, ρ(k) is computed by Equation 3; ρ(k) ranges between -1.0 and 1.0. The cost (degree of difference) of the kth object, cost(k), is defined in Equation 4. The cost value approaches 0.0 for objects with small differences and 1.0 for objects with large differences.
$$
\rho(k)=\frac{\sum_{(i,j)\in \mathrm{Object}_k}\left[\left(f(i,j)-\bar{f}_k\right)\left(g(i,j)-\bar{g}_k\right)\right]}
{\sqrt{\sum_{(i,j)\in \mathrm{Object}_k}\left(f(i,j)-\bar{f}_k\right)^2\;\sum_{(i,j)\in \mathrm{Object}_k}\left(g(i,j)-\bar{g}_k\right)^2}}
\quad (1)
$$

$$
\bar{f}_k=\frac{1}{N}\sum_{(i,j)\in \mathrm{Object}_k} f(i,j),\qquad
\bar{g}_k=\frac{1}{N}\sum_{(i,j)\in \mathrm{Object}_k} g(i,j)
\quad (2)
$$

$$
\rho(k)=\frac{\sum_{(i,j)\in \mathrm{Object}_k} f(i,j)\,g(i,j)-\frac{1}{N}\sum_{(i,j)\in \mathrm{Object}_k} f(i,j)\sum_{(i,j)\in \mathrm{Object}_k} g(i,j)}
{\sqrt{\left[\sum_{(i,j)\in \mathrm{Object}_k} f^2(i,j)-\frac{1}{N}\Big(\sum_{(i,j)\in \mathrm{Object}_k} f(i,j)\Big)^2\right]\left[\sum_{(i,j)\in \mathrm{Object}_k} g^2(i,j)-\frac{1}{N}\Big(\sum_{(i,j)\in \mathrm{Object}_k} g(i,j)\Big)^2\right]}}
\quad (3)
$$

$$
\mathrm{cost}(k)=\frac{1.0-\rho(k)}{2.0}
\quad (4)
$$
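The cost computation can be sketched as a single function that uses the computational form of the correlation coefficient (Equation 3) and the cost mapping (Equation 4); the function name and the boolean-mask representation of an object are assumptions of this sketch:

```python
import numpy as np

def object_cost(f, g, mask):
    """cost(k) for one object: f and g are the left/right image patches and
    `mask` is a boolean array selecting the object's N pixels."""
    fk = f[mask].astype(float)
    gk = g[mask].astype(float)
    n = fk.size
    # Computational form of rho(k) (Equation 3): raw sums, no centering pass.
    num = np.sum(fk * gk) - np.sum(fk) * np.sum(gk) / n
    den = np.sqrt((np.sum(fk ** 2) - np.sum(fk) ** 2 / n)
                  * (np.sum(gk ** 2) - np.sum(gk) ** 2 / n))
    rho = num / den if den > 0 else 1.0   # flat patches: no difference signal
    return (1.0 - rho) / 2.0              # Equation 4: cost in [0, 1]
```

Identical patches give cost 0.0 and perfectly anti-correlated patches give cost 1.0, matching the stated range.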
After estimating the cost of the objects (the object cost map), the POAs determination is based on the object cost map and the adjacency relationships of the objects. Figure 2 shows a demonstration of POAs determination. Figure 2a shows the object cost map; Figure 2b shows the POAs (white objects) of the object cost map. The white and black circles represent the start and end pixels, respectively. A darker object color denotes a higher difference. To determine the POAs, the region adjacency matrix is built based on the object cost map. The object cost map can be described as a graph in which the objects are vertices. If two objects are adjacent, there is an edge between the two objects. The region adjacency matrix is a way of representing an N-vertex graph, G = (V, E), by an N × N matrix whose entries are Boolean values. The region adjacency matrix a[i][j] is defined by Equation 5 (Cormen et al., 2001):
$$
a[i][j]=\begin{cases}\text{true}, & \text{if } (i,j)\in E\\[2pt] \text{false}, & \text{otherwise}\end{cases}
\quad (5)
$$
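Building the Boolean region adjacency matrix from a segmentation label image might look like the following sketch (assuming object labels run from 1 to N with no background label, and 4-connectivity for adjacency; the function name is illustrative):

```python
import numpy as np

def region_adjacency_matrix(labels):
    """a[i][j] = True iff objects i+1 and j+1 share a 4-connected boundary
    in the label image (Equation 5, with 0-based matrix indices)."""
    n = int(labels.max())
    a = np.zeros((n, n), dtype=bool)
    # Compare each pixel with its right neighbor and with its lower neighbor.
    for s1, s2 in ((labels[:, :-1], labels[:, 1:]),
                   (labels[:-1, :], labels[1:, :])):
        diff = s1 != s2
        for p, q in zip(s1[diff], s2[diff]):
            a[p - 1, q - 1] = a[q - 1, p - 1] = True
    return a
```

For a 3 × 3 label image containing objects 1, 2, and 3, adjacent objects get symmetric True entries and the diagonal stays False.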