
when a new image is introduced to the system (Crispell et al., 2012; Kang et al., 2013). Because this approach uses a back-projection concept, off-nadir images can be used. However, many images are required to train the change detection system; it cannot operate with only two images.
Some other studies also used the back-projection concept for change detection. For example, Liu et al. (2010) used an existing 3D CAD model to detect building changes in UltraCam airborne imagery. They projected building borders from the 3D CAD model into the image using the exterior orientation parameters of the image, and then compared the projected borders to the object edges detected in the image to find changes. Qin (2014) used stereo images to generate a DSM and then compared it to an existing 3D city model to update the 3D model. These studies fall into the category of change detection, but they depend on the existence of 3D city models.
If we can accurately coregister the objects in any bi-temporal images regardless of their viewing angles, we will be able to utilize almost all existing VHR satellite images for change detection. It will significantly increase the efficiency and reduce the cost of change detection. However, in our literature review, no method has been found that specifically focuses on coregistration of bi-temporal images collected from different viewing angles.
This study presents a novel solution for coregistering bi-temporal VHR images acquired from different viewing angles for change detection. Due to the varying local relief displacements of urban objects in the images, a patch-wise coregistration (PWCR) method is introduced to coregister corresponding image patches at a local level. A patch can vary from a pixel to a large image segment representing part or all of a ground object. Once the corresponding patches are found in the two images, change detection is conducted by comparing the spectral properties within the corresponding patches. To compare the spectral properties, two methods are tested as change criteria in this study: Image Differencing and the MAD (multivariate alteration detection) Transform.
Methodology
General Concept of PWCR for Change Detection
In PWCR we aim to find the exact border of a segment in one image based on its border in the other image. To account for the relief distortion differences caused by the viewing angle difference of the bi-temporal images, we use the RPCs of the two images and a DSM in the coregistration process (see the PWCR component in Figure 1). The DSM needs to be acquired at the same or a similar time as one of the images. The image acquired at the same time as the DSM is used as the base image; the other image is used as the target image.
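The ground-to-image mapping used in this back-projection follows the standard RFM form, in which image line and sample are ratios of cubic polynomials of normalized latitude, longitude, and height. The sketch below illustrates this mapping; the rpc dictionary layout and field names are illustrative assumptions rather than a specific vendor format, and the polynomial term order must match the order used in the actual RPC metadata.

```python
import numpy as np

def rfm_ground_to_image(lat, lon, h, rpc):
    """Project a ground point (lat, lon, height) into image space with an RFM.

    `rpc` is assumed to be a dict holding the offsets/scales and the four
    20-coefficient arrays (line_num, line_den, samp_num, samp_den) taken
    from the image's RPC metadata; the field names are illustrative only.
    """
    # Normalize ground coordinates to the [-1, 1] range used by the RFM.
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (h - rpc["h_off"]) / rpc["h_scale"]

    # 20-term cubic basis in the common RPC00B term order.
    t = np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                  P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                  P*H*H, L*L*H, P*P*H, H**3])

    line = np.dot(rpc["line_num"], t) / np.dot(rpc["line_den"], t)
    samp = np.dot(rpc["samp_num"], t) / np.dot(rpc["samp_den"], t)

    # De-normalize to pixel (row, column) coordinates.
    row = line * rpc["line_scale"] + rpc["line_off"]
    col = samp * rpc["samp_scale"] + rpc["samp_off"]
    return row, col
```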
To find corresponding patches in the bi-temporal images, we first generate the patches in the base image by segmentation. The DSM pixels are then back-projected into the two bi-temporal images using the Rational Function Models (RFMs) of the two images (RFM1 and RFM2 in Figure 1). The back-projection is used to guide the identification of the corresponding pixels in the bi-temporal images. Knowing the corresponding pixels, the corresponding patches can be generated in the target image based on the previously generated patches in the base image.
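As a concrete illustration of this patch transfer, the following sketch back-projects each DSM cell into both images with their RFMs and copies the base patch label to the corresponding target pixel. It reuses the rfm_ground_to_image function sketched above; the DSM iteration, the label rasters, and the handling of out-of-range points are simplified assumptions, not the exact implementation.

```python
import numpy as np

def transfer_patches(dsm_points, base_labels, rpc_base, rpc_target, target_shape):
    """Generate corresponding patches in the target image via the DSM.

    `dsm_points` is an iterable of (lat, lon, height) tuples, one per DSM
    cell; `base_labels` is the label raster obtained by segmenting the base
    image; `target_shape` is (rows, cols) of the target image. These names
    and structures are illustrative assumptions.
    """
    target_labels = np.zeros(target_shape, dtype=base_labels.dtype)
    for lat, lon, h in dsm_points:
        # The same ground point is projected into both images (RFM1 and RFM2).
        rb, cb = rfm_ground_to_image(lat, lon, h, rpc_base)
        rt, ct = rfm_ground_to_image(lat, lon, h, rpc_target)
        rb, cb = int(round(rb)), int(round(cb))
        rt, ct = int(round(rt)), int(round(ct))
        inside_base = 0 <= rb < base_labels.shape[0] and 0 <= cb < base_labels.shape[1]
        inside_target = 0 <= rt < target_shape[0] and 0 <= ct < target_shape[1]
        if inside_base and inside_target:
            # Carry the base patch label over to the corresponding target pixel.
            target_labels[rt, ct] = base_labels[rb, cb]
    return target_labels
```

In practice the transferred labels form a sparse raster, so holes left by DSM resolution differences or rounding would still need to be filled (e.g., by a nearest-label fill) before the corresponding patches are compared.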
After the base patches are transferred from the base image to the target image, i.e., the corresponding patches are generated in the target image, object-based change detection is conducted by comparing the spectral properties of the corresponding patches (see the Change Analysis component of Figure 1). If the spectral properties of the corresponding patches are similar, no change is expected; otherwise, a change is expected in the corresponding patches. To measure and compare the spectral properties, we tested and adopted the Image Differencing and MAD Transform methods for providing change criteria.
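The two change criteria can be sketched as follows: per-patch Image Differencing compares the mean spectra of corresponding patches, and a plain CCA-based MAD transform can be applied to paired samples such as the per-patch mean vectors of the two radiometrically normalized images. The function and variable names are illustrative, and this MAD formulation is only an approximation of published variants (which may, for instance, use iterative reweighting).

```python
import numpy as np
from scipy.linalg import eigh, solve

def patch_difference(img_base, img_target, base_labels, target_labels):
    """Image Differencing criterion: per-patch mean spectral difference.

    img_base/img_target are (rows, cols, bands) arrays of the radiometrically
    normalized images; the label rasters define the corresponding patches.
    Returns {patch_id: difference magnitude}; larger values suggest change.
    """
    scores = {}
    for pid in np.unique(base_labels):
        if pid == 0 or not np.any(target_labels == pid):
            continue  # 0 assumed to mark unlabeled or occluded pixels
        m_base = img_base[base_labels == pid].mean(axis=0)
        m_target = img_target[target_labels == pid].mean(axis=0)
        scores[pid] = float(np.linalg.norm(m_base - m_target))
    return scores

def mad_variates(X, Y):
    """CCA-based MAD transform for paired samples X, Y of shape (n, bands)."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx, Syy, Sxy = Xc.T @ Xc / (n - 1), Yc.T @ Yc / (n - 1), Xc.T @ Yc / (n - 1)
    # Canonical correlations rho^2 from  Sxy Syy^-1 Syx a = rho^2 Sxx a.
    rho2, A = eigh(Sxy @ solve(Syy, Sxy.T), Sxx)   # eigenvalues ascending
    B = solve(Syy, Sxy.T) @ A / np.sqrt(np.maximum(rho2, 1e-12))
    # MAD variates are differences of the canonical variates; columns are
    # ordered here by increasing canonical correlation.
    return Xc @ A - Yc @ B
```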
Due to viewing angle differences, occlusions and exposed building façades might appear in one image but not in the other. Therefore, to avoid false alarms in the change detection, we focus on detecting horizontal surfaces of objects, such as building roofs, and ignore vertical surfaces, such as building façades. In addition, occluded objects are also ignored in the change detection.
Figure 1. The schematic components of the presented framework for change detection. In the preprocessing component, after pan-sharpening of the images, the base image is segmented to generate patches. In the PWCR component, the corresponding patches are generated in the target image using the DSM and the RFMs. In the change analysis component, after radiometric normalization, changes are detected by comparing the spectral properties of the corresponding patches.