
Block Adjustment of Satellite Imagery
Using RPCs with Virtual Strip Scenes
Guo Zhang, Hongbo Pan, Deren Li, Xinming Tang, and Xiaoyong Zhu
Abstract
With the increasing volume of high-resolution satellite imagery, the need for large numbers of ground control points has become a limiting factor for large-area mapping. Utilizing a shift model for adjacent scenes of the same track, this paper proposes a block adjustment method based on rational polynomial coefficients with virtual strip scenes. An affine transformation in image space is used as the adjustment model for the virtual strip scenes, and the corresponding adjustment parameters are derived from the relationship between the standard scenes and the virtual strip scenes. Triplet stereo images from ZiYuan-3 are used to test the accuracy of the virtual strip scenes, and we compare the block adjustment of long strip scene products and standard scene products. The results show that sub-pixel accuracy can be achieved at checkpoints close to the long strip scenes.
Introduction
As an increasing number of new-generation very-high-resolution satellites are launched (e.g., WorldView-2, GeoEye-2, SPOT 6 and 7, Pleiades), the capability of obtaining ground images is no longer a limiting factor. In spite of the high orientation accuracy of such satellites, measuring GCPs (ground control points) for block adjustment remains necessary, and can be rather costly and time-consuming. Thus, there is an urgent need to reduce the required number of GCPs.
Since RPCs (rational polynomial coefficients) were adopted as the camera model for Ikonos, vendors have increasingly preferred them because of their high replacement accuracy with respect to the rigorous model, faster computation, and greater generality. Block adjustment methods using RPCs have been investigated, and are referred to in many cases as bias compensation (Fraser and Hanley, 2005). The block adjustment of rigorous models has also been studied in detail (Orun and Natarajan, 1994; Poli, 2007; Weser et al., 2008).
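For context, bias compensation typically appends an affine correction in image space to the RPC-projected coordinates. A common form (the exact parameterization varies between studies; the sketch below follows the widely cited affine model of Grodecki and Dial, 2003) is:

$$line' = line_{RPC} + a_0 + a_1 \cdot sample + a_2 \cdot line, \qquad sample' = sample_{RPC} + b_0 + b_1 \cdot sample + b_2 \cdot line$$

where $(a_0, a_1, a_2, b_0, b_1, b_2)$ are affine parameters estimated for each image during the adjustment.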
To fit the requirements of different users, vendors provide standard scenes and strip scenes. For long strip scenes with RPCs, four GCPs around the corners can ensure sub-pixel accuracy (Grodecki and Dial, 2003; Pan et al., 2013). For standard scenes with rigorous models, Kim et al. (2007) investigated the modeling of entire strips with different parameters, and an accuracy of around two pixels was obtained over the whole 420 km of SPOT 3 strips. A similar result was obtained by Gupta et al. (2008) for imagery from Cartosat-1. A Keplerian motion-based orbit determination method developed by Michalis and Dowman (2008) obtained pixel-size accuracy, while a strip adjustment approach based on a generic sensor technique was applied to ALOS imagery, giving single-pixel accuracy (Fraser and Ravanbakhsh, 2011; Rottensteiner et al., 2009).
The above methods are based on standard scenes with rigorous models or on single strip scenes, and are thus not appropriate for standard scenes with RPCs. In this paper, we develop a new method that uses virtual strip scenes, i.e., geometric mosaics generated without resampling, instead of single scenes for block adjustment. The proposed method does not require tie points between adjacent scenes, and just four GCPs placed around the four corners promise a degree of accuracy similar to that achieved with strip scenes.
This paper is organized as follows. After a brief description of RPCs, the virtual strip scenes and their block adjustment methods are introduced. A comparison between standard scenes, virtual strip scenes, and strip scenes from ZiYuan-3 is then presented. The two adjacent strips consist of seven and twelve standard scenes, and a large number of GCPs are uniformly distributed in this area, enabling a comprehensive experimental evaluation and validation of the proposed method. Finally, some orientation results are presented, before we summarize our conclusions.
RPCs
In the RPC model, image pixel coordinates (sample, line) are expressed as the ratios of cubic polynomials of ground coordinates (Latitude, Longitude, Height). To improve the numerical stability of the equations, the 2D image coordinates and 3D ground coordinates are each offset and scaled to fit the range [–1.0, 1.0]. The normalized coordinate values of object points on the ground (P, L, H) and the normalized line and sample image pixel coordinates (X, Y) are computed using the following equations:
$$X = \frac{sample - SAMP\_OFF}{SAMP\_SCALE}, \qquad Y = \frac{line - LINE\_OFF}{LINE\_SCALE} \tag{1}$$
$$P = \frac{Latitude - LAT\_OFF}{LAT\_SCALE}, \quad L = \frac{Longitude - LONG\_OFF}{LONG\_SCALE}, \quad H = \frac{Height - HEIGHT\_OFF}{HEIGHT\_SCALE} \tag{2}$$
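To make the model concrete, here is a minimal Python sketch of the RPC forward projection built on the normalization above. The `rpc` dictionary layout and the 20-term coefficient ordering (the common RPC00B convention) are assumptions for illustration; consult the vendor's RPC file specification for the authoritative ordering.

```python
import numpy as np

def rpc_terms(P, L, H):
    """Cubic polynomial terms in the normalized ground coordinates.
    The ordering below is the common RPC00B convention (an assumption
    here; the vendor's RPC specification defines the actual order)."""
    return np.array([
        1.0, L, P, H,
        L * P, L * H, P * H, L * L, P * P, H * H,
        P * L * H, L**3, L * P * P, L * H * H, L * L * P,
        P**3, P * H * H, L * L * H, P * P * H, H**3,
    ])

def rpc_ground_to_image(lat, lon, height, rpc):
    """Project a ground point to image coordinates with an RPC model.
    `rpc` is a hypothetical dict holding the offset/scale values of
    Equations 1 and 2 plus four 20-element coefficient arrays."""
    # Normalize the ground coordinates to [-1.0, 1.0] (Equation 2).
    P = (lat - rpc['LAT_OFF']) / rpc['LAT_SCALE']
    L = (lon - rpc['LONG_OFF']) / rpc['LONG_SCALE']
    H = (height - rpc['HEIGHT_OFF']) / rpc['HEIGHT_SCALE']
    t = rpc_terms(P, L, H)
    # Ratios of cubic polynomials give the normalized image coordinates.
    Y = np.dot(rpc['LINE_NUM_COEFF'], t) / np.dot(rpc['LINE_DEN_COEFF'], t)
    X = np.dot(rpc['SAMP_NUM_COEFF'], t) / np.dot(rpc['SAMP_DEN_COEFF'], t)
    # Denormalize by inverting Equation 1.
    line = Y * rpc['LINE_SCALE'] + rpc['LINE_OFF']
    sample = X * rpc['SAMP_SCALE'] + rpc['SAMP_OFF']
    return sample, line
```

Given a populated `rpc` dictionary, `rpc_ground_to_image(30.5, 114.3, 50.0, rpc)` would return the (sample, line) position of that ground point; this forward projection is the quantity that the bias-compensation parameters subsequently correct.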
Guo Zhang and Deren Li are with the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, 129 Luoyu Road, Wuhan, 430079, P.R. China.
Hongbo Pan is with the School of Geoscience and Info-Physics, Central South University, Changsha, 410083, P.R. China.
Xinming Tang and Xiaoyong Zhu are with the Satellite Surveying and Mapping Application Center (SASMAC), National Administration of Surveying, Mapping and Geoinformation, 28 Lianhuachi West Road, Haidian District, Beijing, 100830, P.R. China.
Photogrammetric Engineering & Remote Sensing
Vol. 80, No. 11, November 2014, pp. 1053–1059.
0099-1112/14/8011–1053
© 2014 American Society for Photogrammetry and Remote Sensing
doi: 10.14358/PERS.80.11.1053