PE&RS, October 2018, page 606

there have been at least two prominent types of attempts in recent years. One is the introduction of fractional programming to solve the model based on projective error (Kahl et al., 2008). In (Kahl et al., 2008), the authors note that the new solutions are consistently equal to or better than those of bundle adjustment in terms of accuracy. However, the optimal point ultimately reported by the branch and bound is usually obtained within the first few iterations. Thus, the method includes an upper bound on the re-projection error in pixels. The second branch is direct odometry, which directly minimizes the photometric error of pixel intensities instead of the reprojection error of feature-point coordinates (Engel et al., 2014 and 2016). In (Alismail et al., 2016), the authors propose a new algorithm that relies on maximizing photometric consistency and estimates the correspondences implicitly.
• Convergence: The inertial measurement unit (IMU) can provide the position and attitude of the rover. However, accumulated errors during movement are the primary reason for the decreasing pose accuracy of the rover. Visual-IMU BA, which can make the scale observable (Leutenegger et al., 2015), is significantly more challenging than vision-only BA. IMU measurements complicate the incremental, real-time computation of the global maximum-likelihood parameter estimate (Engels et al., 2006; Triggs et al., 1999; Keivan et al., 2016). When visual-IMU BA is used for rover navigation, we must adequately consider that BA will have difficulty converging, and will fail to do so if the initial rover poses obtained from the IMU have low precision or are lost. A visual localization algorithm for the planetary rover's stereo vision system is required to prevent low efficiency and non-convergence.
In general, the key idea behind BA is to correct the camera pose parameters of all input images under reasonable initial values. An improper selection of the initial values will lead to more iterations and convergence to a local optimum. Inspired by this finding, this paper proposes a new visual localization algorithm based on weighted total least squares (WTLS) (Van Huffel and Lemmerling, 2013) for rover mapping and navigation. An errors-in-variables (EIV) mathematical model (Shen et al., 2011), which considers errors in both the coefficient matrix and the observation vector, is used instead of OLS. The new algorithm consists of two parts: (1) relative orientation and forward intersection at a single point to acquire stereoscopic models (Kanatani and Niitsuma, 2012) in different stations, which contain the 3D coordinates of the tie points; and (2) 3D similarity transformation based on the stereoscopic models to obtain the rover poses directly. In the first part, the new algorithm pursues high-precision relative orientation parameters of the binocular cameras based on the EIV model, a special objective that previous visual localization methods have not considered. In the second part, the new algorithm adopts an alternative solution to the 3D similarity transformation with WTLS and the Procrustes method (Igual et al., 2014).
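The Procrustes step in part (2) admits a closed-form, non-iterative solution. The following is a minimal sketch of a weighted Procrustes (Umeyama-style) similarity-transform solve for scale, rotation, and translation between two point sets; the function name and interface are illustrative and not the paper's exact formulation:

```python
import numpy as np

def similarity_transform(src, dst, weights=None):
    """Closed-form weighted Procrustes: find s, R, t minimizing
    sum_i w_i || dst_i - (s * R @ src_i + t) ||^2."""
    if weights is None:
        weights = np.ones(len(src))
    w = weights / weights.sum()
    mu_s, mu_d = w @ src, w @ dst            # weighted centroids
    A, B = src - mu_s, dst - mu_d            # centered point sets
    H = (A * w[:, None]).T @ B               # weighted cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = (S * np.diag(D)).sum() / np.sum(w * (A ** 2).sum(axis=1))
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Because the solution is algebraic (a single SVD), no initial pose values are required, which is the property relevant to the efficiency and convergence issues discussed above.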
The new algorithm entails three contributions: (1) We derive a reasonable solution to the relative orientation problem in the planetary rover stereo vision system, which addresses the first issue of accuracy. (2) As a non-iterative computational method, the new visual localization algorithm can obtain the rover poses without initial values, which addresses the above issues of efficiency and convergence. (3) Inspired by the fact that the tie points make different contributions due to the range factor, we devise a practical strategy for constructing the weight matrix of the 3D coordinate observations of the tie points, which also addresses the issue of accuracy.
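For context on the EIV machinery behind contribution (2), a plain (unweighted) total least squares problem Ax ≈ b with errors in both A and b can be solved non-iteratively via one SVD. This is a simplified sketch only; the paper's WTLS additionally carries a weight matrix, which this illustration omits:

```python
import numpy as np

def tls_solve(A, b):
    """Total least squares for A x ~ b under the EIV model:
    perturb both A and b minimally so the system becomes consistent."""
    n = A.shape[1]
    C = np.column_stack([A, b])      # augmented matrix [A | b]
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                       # right singular vector of the smallest singular value
    return -v[:n] / v[n]             # rescale so the b-component equals -1
```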
In contrast to (Kim et al., 2015; Zhang et al., 2015), the new algorithm considers only measurement error, without process noise, when using camera pose estimation for the process model. To the best of the authors' knowledge, this study is the first to estimate planetary rover poses by using WTLS, which differs from previous BA algorithms. In addition, we devise the weight matrix of the coordinate observations of the tie points and an efficient solution to the relative orientation problem with fictitious observations for the specific application of a planetary rover. Compared with the methods of (Xie et al., 2016; Fusiello and Crosilla, 2015), our method analyzes the reconstruction precision of the tie points and the design of the weight matrix.
There are two major differences between our method and the method of (Xie et al., 2016): (1) the weight of the observations: the former views the 3D coordinates of the tie points as the observations, whereas the latter views the 2D coordinates of the image points (corresponding to the tie points); and (2) the influencing factors: once the errors of the parallax, image coordinates, and external parameters are set, the weight in the former is influenced by the distance of the feature points, whereas the weight in the latter is influenced by the variation of the terrain and the attitude of the aerial vehicle. The method of (Xie et al., 2016) determines an a priori weight for an image observation based on its scale and the amount of anisotropy in its GSDs, which varies markedly between different oblique views.
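To illustrate the range-dependent weighting idea in difference (2), a hypothetical weight model for the 3D tie-point observations might down-weight distant points, since stereo depth error grows roughly with the square of the range. This sketch is illustrative only and is not the paper's actual weight derivation:

```python
import numpy as np

def tie_point_weights(points_3d, sigma0=1.0):
    """Hypothetical inverse-variance weights for tie points:
    assume the depth standard deviation grows as sigma0 * range^2."""
    d = np.linalg.norm(points_3d, axis=1)   # range from the camera center
    sigma = sigma0 * d ** 2                 # assumed error model
    return 1.0 / sigma ** 2                 # near points get higher weight
```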
This paper is organized as follows. The next Section describes the new relative orientation method based on the EIV model, followed by the details of the proposed algorithm. Next, experimental results are provided to demonstrate the effectiveness of the proposed algorithm, including its improved convergence and position accuracy, compared with BA methods based on OLS (Alexander et al., 2006; Di et al., 2008), parallaxBA (Zhao et al., 2015), and the LM algorithm (Li et al., 2016). The final Section concludes the paper.
Estimation of the Epipolar Geometry of Binocular Cameras
The estimation of the epipolar geometry of binocular cameras from matching points is a fundamental problem of computer vision with many applications and is one of the key components of all visual odometry and SLAM pipelines. Relative orientation in photogrammetry and epipolar rectification in computer vision are essentially the same and can acquire the fundamental matrix. In this section, we adopt the relative orientation algorithm. Many methods based on the pinhole camera model have been proposed, such as a 5-point relative pose solver for calibrated cameras; 6-point, 7-point, or 8-point solvers for uncalibrated cameras; and 10-point correspondence solvers for radially distorted cameras (Kukelova et al., 2015). However, these solutions are numerically unstable, sensitive to noise, or based on many point correspondences.
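Of the solvers listed above, the classical normalized 8-point algorithm for uncalibrated cameras is the simplest to sketch. The following is a standard textbook implementation (Hartley normalization, linear solve via SVD, and a rank-2 constraint), not code from any of the cited works:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix F such that
    x2_h^T F x1_h = 0, given N >= 8 corresponding points (N x 2 arrays)."""
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # each correspondence contributes one row of the linear system A f = 0
    A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)              # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                     # undo the normalization
```

As the surrounding text notes, this linear solver is essentially an OLS-style estimate and is sensitive to noise without the normalization step.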
In (Stewenius et al., 2006), the authors indicate that most of these methods use OLS. When these epipolar geometry estimation methods are applied, some differences must be considered for the rover's stereo vision system versus conventional aerial or close-range photogrammetry:
(1) Image characteristics. The optical axis of conventional aerial or close-range images is approximately perpendicular to the target area. When we obtain the matching corresponding points in the binocular cameras' field of view (FOV), the 2D coordinates of these points can be viewed as observations with equal weights. However, in the rover's stereo vision system, the optical axis intersects the target area at small angles. Thus, in this case, the 2D coordinates of matching corresponding points are measured with different weights: higher weights close to the rover and lower weights away from it.
(2) Computing model. When the relative orientation