
PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING, September 2014
Spectral-Angle-based Laplacian Eigenmaps
for Nonlinear Dimensionality Reduction
of Hyperspectral Imagery
Lin Yan and Xutong Niu
Abstract
In traditional manifold learning of hyperspectral imagery, distances among pixels are defined in terms of Euclidean distance, which is not necessarily the best choice because of its sensitivity to variations in spectrum magnitudes. Selecting Laplacian Eigenmaps (LE) as the test method, this paper studies the effects of distance metric selection in LE and proposes a spectral-angle-based LE method (LE-SA) to be compared against the traditional LE based on Euclidean distance (LE-ED). LE-SA and LE-ED were applied to two airborne hyperspectral data sets, and the dimensionality-reduced data were quantitatively evaluated. Experimental results demonstrated that LE-SA is able to suppress variations within the same type of features, such as variations in vegetation and in illumination due to shade or orientation, while maintaining a higher level of overall separability among different features than LE-ED. Further, the potential use of a single LE-SA or LE-ED band for target detection is discussed.
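To make the motivating contrast concrete, the following sketch (a hypothetical illustration, not the paper's implementation) compares the two distance metrics on a pair of spectra that differ only in magnitude, as happens under shading: Euclidean distance reports a large difference, while the spectral angle is invariant to such scaling.

```python
import numpy as np

def euclidean_distance(x, y):
    """Euclidean distance between two pixel spectra."""
    return np.linalg.norm(x - y)

def spectral_angle(x, y):
    """Spectral angle (radians) between two pixel spectra;
    invariant to per-pixel magnitude scaling."""
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Hypothetical spectra of the same material, one in shade
# (same spectral shape, half the magnitude):
s = np.array([0.2, 0.5, 0.9, 0.4])
shaded = 0.5 * s
print(euclidean_distance(s, shaded))  # large: sensitive to magnitude
print(spectral_angle(s, shaded))      # ~0: magnitude-invariant
```

This magnitude invariance is exactly why a spectral-angle metric can suppress illumination-driven variation within a single material class.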
Introduction
Hyperspectral imaging systems have greatly enhanced the capabilities of discrimination, identification, and quantification of objects from remotely sensed data. Hyperspectral imagery has been applied to a variety of fields, such as military, agriculture, and environmental monitoring (Jimenez and Landgrebe, 1998; Kruse et al., 2003; Keshava, 2004; Farrell and Mersereau, 2005; Bachmann et al., 2005; Du and Fowler, 2007; Li et al., 2012; Zhang and Qiu, 2012; Bigdeli et al., 2013). However, the large volume of data and the curse of dimensionality (Hughes, 1968) resulting from hundreds of spectral bands create significant challenges to the processing and analysis of hyperspectral data.
Dimensionality reduction (DR), also referred to as feature extraction, is one of the main solutions to the problems above. DR transforms high-dimensional data into data with lower dimensions while still preserving most of the information. There are two types of DR methods: linear methods and nonlinear methods (Maaten et al., 2009). The linear DR methods are based on linear transformations of the original data, among which the representative methods are PCA (principal component analysis) and MNF (minimum noise fraction) (Green et al., 1988). Although PCA and MNF have been widely used in hyperspectral data processing (Farrell and Mersereau, 2005; Du and Fowler, 2007; Li et al., 2012), the linear mappings of hyperspectral data can cause distortions that camouflage subtle differences between spectrally similar targets (Prasad and Bruce, 2008).
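A minimal sketch of linear DR by PCA, as described above, can be written with a centered data matrix and its singular value decomposition; the data here are random placeholders, not hyperspectral measurements.

```python
import numpy as np

def pca_reduce(X, k):
    """Linear DR by PCA: project each pixel's band vector onto
    the top-k principal components (directions of maximum
    variance of the centered data)."""
    Xc = X - X.mean(axis=0)                        # center each band
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # n_pixels x k scores

# Hypothetical image: 1000 pixels with 50 spectral bands
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
Z = pca_reduce(X, 3)
print(Z.shape)  # (1000, 3)
```

The projection is a single matrix multiplication, which is what makes PCA linear; any structure not expressible as a linear combination of bands is lost, motivating the nonlinear methods discussed next.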
Recently, manifold learning has received more attention for nonlinear DR of hyperspectral imagery. In manifold learning, structures of the manifolds embedded in high-dimensional data are learned based on pairwise distances among data points, and the DR data are able to maintain the nonlinear properties of the manifold structures (Maaten et al., 2009). As summarized by Bachmann et al. (2005), there are intrinsic nonlinear characteristics in hyperspectral imagery due to the nature of scattering as described by the bidirectional reflectance distribution function (BRDF) (Goodin et al., 2004) and other nonlinearity sources such as water, which is a nonlinear attenuating medium. As such, the nonlinear DR methods are potentially more suitable for hyperspectral imagery than the linear methods.
Manifold learning methods can be categorized into two groups (de Silva and Tenenbaum, 2002; Maaten et al., 2009): global methods, such as isometric mapping (ISOMAP) (Tenenbaum et al., 2000), and local methods, such as Locally Linear Embedding (LLE) (Roweis and Saul, 2000), Laplacian Eigenmaps (LE) (Belkin and Niyogi, 2002), and Local Tangent Space Alignment (LTSA) (Zhang and Zha, 2005). While global methods attempt to preserve the global properties of the manifolds, local methods attempt to preserve their local properties. Some of these methods, such as ISOMAP and LLE, have been applied to hyperspectral imagery and have shown better performance in classification, target discrimination, and end-member extraction than the linear DR methods such as PCA and MNF (Bachmann et al., 2005; Bachmann et al., 2006; Mohan et al., 2007). Some recent research has focused on improving existing manifold learning methods, e.g., by considering spatial information (Mohan et al., 2007; Gillis and Bowles, 2012; Zhang et al., 2013) or by using stochastic models to represent the manifolds in hyperspectral images (Lunga and Ersoy, 2014).
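As a rough illustration of the local approach the paper builds on, the following is a minimal Laplacian Eigenmaps sketch in the spirit of Belkin and Niyogi (2002), assuming a symmetric k-nearest-neighbor graph with binary weights; it is a simplified didactic version, not the authors' implementation, and the toy data are hypothetical.

```python
import numpy as np

def laplacian_eigenmaps(X, k_neighbors=5, dim=2):
    """Sketch of Laplacian Eigenmaps: build a kNN graph, form the
    symmetrically normalized graph Laplacian, and embed the data
    with its smallest nontrivial eigenvectors."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances (the metric that a
    # spectral-angle variant would replace)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Adjacency: connect each point to its k nearest neighbors
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k_neighbors + 1]  # skip self
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                  # symmetrize the graph
    deg = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    # Normalized Laplacian L_sym = I - D^{-1/2} W D^{-1/2}
    L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)
    # Rescale by D^{-1/2} to recover the generalized eigenvectors;
    # drop the trivial constant one, keep the next `dim`
    return d_inv_sqrt[:, None] * vecs[:, 1:dim + 1]

# Hypothetical toy data: 60 points along a curve in 3-D
t = np.linspace(0, 3, 60)
X = np.column_stack([np.cos(t), np.sin(t), t])
Y = laplacian_eigenmaps(X, k_neighbors=6, dim=2)
print(Y.shape)  # (60, 2)
```

The pairwise-distance step is where the metric choice enters: every downstream quantity (the graph, the Laplacian, the embedding) depends on it, which is the sensitivity this paper investigates.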
Even though manifold learning methods have become more popular, an important subject has been largely ignored in previous research: the selection of the distance metric in manifold learning. The distance metric is an important factor that can affect the performance of manifold learning. For example, in most manifold learning methods, the manifold structures are represented by a neighborhood graph obtained by connecting every point to its nearest neighbors.
Lin Yan is with the Geospatial Science Center of Excellence, South Dakota State University, 1021 Medary Ave, Wecota Hall, Brookings, SD 57007.
Xutong Niu is with the Department of Mathematics and Geomatics, Troy University, 232 MSCX, 1 University Avenue, Troy, AL 36082.
Photogrammetric Engineering & Remote Sensing
Vol. 80, No. 9, September 2014, pp. 849–861.
0099-1112/14/8009–849
© 2014 American Society for Photogrammetry and Remote Sensing
doi: 10.14358/PERS.80.9.849