
Two Dimensional Linear Discriminant
Analyses for Hyperspectral Data
Maryam Imani and Hassan Ghassemian
Abstract
Most supervised feature extraction methods, such as linear discriminant analysis (LDA), suffer from the limited number of available training samples. The singularity problem causes LDA to fail in small sample size (SSS) situations. In this paper, two dimensional linear discriminant analysis (2DLDA) is proposed for feature extraction of hyperspectral images; it performs well with small training sample sizes. In this approach, the feature vector of each pixel of the hyperspectral image is transformed into a feature matrix. As a result, the data matrices lie in a low-dimensional space. Then, the between-class and within-class scatter matrices are calculated using the matrix form of the training samples. The proposed approach has two main advantages: it deals with the SSS problem in hyperspectral data, and it can extract any number of features (with no limitation) from the original high dimensional data. The proposed method is tested on four widely used hyperspectral datasets. Experimental results confirm that the proposed 2DLDA feature extraction method provides better classification accuracy, with a reasonable computation time, than popular supervised feature extraction methods such as generalized discriminant analysis (GDA) and nonparametric weighted feature extraction (NWFE), and particularly better than 1DLDA in the SSS situation. The experiments show that two dimensional linear discriminant analysis + support vector machine (2DLDA+SVM) is an appropriate choice for feature extraction and classification of hyperspectral images using limited training samples.
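For readers who want a concrete picture of the matrix-based formulation described above, the following is a minimal sketch of one common 2DLDA variant. The reshape dimensions, the side on which the projection is applied, and the eigen-solver are assumptions for illustration and may differ in detail from the formulation developed in this paper.

```python
import numpy as np

def two_dim_lda(X_pixels, labels, mat_shape, n_components):
    """Sketch of a common 2DLDA formulation (illustrative, not necessarily
    the exact variant in the paper): reshape each d-band pixel vector into
    an r x c matrix, build matrix-form scatter matrices, and project onto
    the leading generalized eigenvectors.

    X_pixels  : (N, d) array of training spectra, with d = r * c
    labels    : (N,) class labels
    mat_shape : (r, c) such that r * c == d (zero-pad bands beforehand if needed)
    """
    r, c = mat_shape
    X = X_pixels.reshape(-1, r, c)                  # pixel vectors -> feature matrices
    classes = np.unique(labels)
    M = X.mean(axis=0)                              # global mean matrix (r x c)

    Sb = np.zeros((r, r))
    Sw = np.zeros((r, r))
    for cl in classes:
        Xc = X[labels == cl]
        Mc = Xc.mean(axis=0)                        # class mean matrix
        Sb += len(Xc) * (Mc - M) @ (Mc - M).T       # between-class scatter (r x r)
        for Xi in Xc:
            Sw += (Xi - Mc) @ (Xi - Mc).T           # within-class scatter (r x r)

    # Generalized eigenproblem Sb w = lambda Sw w. Sw is only r x r, so it is
    # far better conditioned than the d x d within-class scatter of classical LDA.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:n_components]].real       # projection matrix (r x k)
    return W                                        # extracted features: W.T @ X_i (k x c)
```

For example, a 200-band pixel could be reshaped into a 20 x 10 matrix, so the scatter matrices are only 20 x 20 and remain nonsingular even with very few training samples per class; the projected k x 10 feature matrices can then be vectorized and passed to an SVM.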
Introduction
Hyperspectral images contain rich spectral information. They
are very useful for discriminating between classes with more
detail than multispectral images in classification applications
(Zhao et al., 2008). However, not every spectral band contributes to the material identification. Moreover, the number of available training samples is limited, so we face high dimensional data and a small sample size (SSS) problem. Typically, the performance of a classifier increases to a certain point as additional features are added, and then decreases. This is referred to as
the Hughes phenomenon (Hughes, 1968). The Hughes phenom-
enon can be explained as follows: The unknown parameters of
classifiers are estimated in the most commonly used supervised
classification methods. For a fixed sample size, as the number of
features increases, the separability and class discrimination abil-
ity increase and so the classifier performance is potentially im-
proved. But, on the other hand, the reliability of the parameter
estimates decreases. As a result, the performance of classifiers,
using a fixed sample size, may degrade with an increase in the
number of features. The number of training samples required
for linear classifiers should be proportional to the number of
features and for a quadratic classifier should be proportional to
the square of the number of features (Fukunaga, 1990).
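To make this parameter-count argument concrete, the short illustration below (not from the paper) counts the free parameters that must be estimated per class for a Gaussian classifier: d mean values and, for a quadratic classifier with a full covariance matrix, d(d+1)/2 additional covariance terms.

```python
# Illustrative only: free parameters per class of a Gaussian classifier,
# which is why the required training samples grow roughly with d for a
# linear classifier and with d^2 for a quadratic one (full covariance).
for d in (10, 50, 100, 200):          # number of spectral bands (features)
    mean_terms = d                    # class mean vector
    cov_terms = d * (d + 1) // 2      # symmetric covariance matrix
    print(f"d = {d:4d}: mean terms = {mean_terms:4d}, "
          f"covariance terms = {cov_terms:6d}")
```

With 200 bands, the covariance matrix of a single class already has 20,100 free parameters, far more than the handful of labeled pixels typically available per class.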
Unfortunately, gathering training samples for hyperspectral images is generally expensive, difficult, and time consuming.
In other words, the number of available training samples is
limited. On the other hand, hyperspectral images include
a huge volume of spectral bands (features). Thus, due to
the small value of the ratio between the number of training
samples and the number of available spectral bands, the SSS problem is a challenging issue in analysis and classification of
hyperspectral images.
Different approaches have been proposed in the literature to deal with the classification of high dimensional data with a small training sample size. An efficient maximum likelihood classification method has been proposed by Jia and Richards (1994), which significantly reduces the processing time associated with traditional maximum likelihood classification
when applied to imaging spectrometer data. This maximum
likelihood classification technique copes with the training of
geographically small classes. Based on properties of the global
correlation among the bands, several wavelength subgroups
are formed from the complete set of spectral bands in the
data. Then, discriminant values are computed for each sub-
group separately and pixel labeling is done by using the sum
of discriminant. A practical and efficient method to deal with
hyperspectral data has been proposed by Jia and Richards
(1999). This method makes use of the block structure of the
correlation matrix so that the principal components transfor-
mation (PCT) is conducted on data of smaller dimensionality.
Therefore, the computational load is reduced significantly
compared to the conventional PCT. The reduced number of features accelerates the maximum likelihood classification
process, and thus, the process does not suffer the limitations
encountered by using the full dimensional hyperspectral data
when training samples are limited. An automatic procedure
for implementing the hybrid supervised-unsupervised ap-
proach to hyperspectral image data classification has been
proposed by Jia and Richards (2002). The introduced cluster-
space representation leads to automatic association of spectral
clusters with information classes and the development of a
cluster-space classification. Moreover, the estimate of a cluster's membership in the information classes, together with the estimate of a pixel's membership in the clusters, is used for pixel labeling. Because the class modeling requires estimating only first-order statistics, the number of training samples required can be far smaller than for Gaussian maximum likelihood classification.
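Of the approaches above, the band-subgroup maximum likelihood scheme is perhaps the most direct to illustrate. The sketch below is a rough illustration in that spirit (Python with NumPy/SciPy); how the subgroups are formed from the band correlation structure and the exact discriminant form are assumptions here, not the published algorithm.

```python
import numpy as np
from scipy.stats import multivariate_normal

def subgroup_ml_classify(pixel, class_stats, subgroups):
    """Rough sketch of band-subgroup maximum likelihood labeling, in the
    spirit of Jia and Richards (1994); not their exact algorithm.

    pixel       : (d,) spectral vector to label
    class_stats : {label: list of (mean, cov) pairs, one pair per subgroup}
    subgroups   : list of band-index arrays, e.g. [np.arange(0, 40), ...]
    """
    best_label, best_score = None, -np.inf
    for label, stats in class_stats.items():
        # Sum per-subgroup log-discriminants instead of evaluating one
        # full-dimensional Gaussian, so only small covariance matrices
        # need to be estimated and inverted.
        score = sum(
            multivariate_normal.logpdf(pixel[idx], mean=m, cov=c)
            for idx, (m, c) in zip(subgroups, stats)
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Summing the subgroup discriminants treats the subgroups as approximately independent, which is the design choice that keeps each covariance matrix small enough to estimate reliably from limited training data.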
Feature reduction plays a critical role in hyperspectral
image classification (Saini et al., 2014). The feature reduction approaches are divided into two general groups: feature
selection and feature extraction. Feature selection methods
Maryam Imani and Hassan Ghassemian (Corresponding
author) are with the Faculty of Electrical and Computer
Engineering, Tarbiat Modares University, Tehran 14155-4843,
Iran (
).
Photogrammetric Engineering & Remote Sensing
Vol. 81, No. 10, October 2015, pp. 777–786.
0099-1112/15/777–786
© 2015 American Society for Photogrammetry
and Remote Sensing
doi: 10.14358/PERS.81.10.777