
Analyzing the Contribution of Training Algorithms on Deep Neural Networks for Hyperspectral Image Classification
Mehmet Akif Günen, Umit Haluk Atasever, and Erkan Beşdok
Abstract
Autoencoder (AE)-based deep neural networks learn complex problems by generating feature-space conjugates of the input data. The learning success of an AE is highly sensitive to the choice of training algorithm. The classification of hyperspectral images (HSIs) using the spectral features of pixels is a highly complex problem because of the high dimensionality and large volume of the data. In this paper, the contributions of three gradient-based training algorithms (i.e., scaled conjugate gradient (SCG), gradient descent (GD), and resilient backpropagation (RP)) to the solution of the HSI classification problem with an AE were analyzed. It was also investigated how neighborhood component analysis affects classification performance for the training algorithms on HSIs. Two hyperspectral image classification benchmark data sets were used in the experimental analysis. The Wilcoxon signed-rank test indicates that RP is statistically better than SCG and GD in solving the related image classification problem.
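For illustration, a minimal Python sketch of this kind of paired Wilcoxon signed-rank comparison is given below; the per-run accuracy values are placeholder numbers rather than the paper's results, and SciPy's wilcoxon function is assumed as the test implementation.

    # Paired Wilcoxon signed-rank test over repeated classification runs.
    # Accuracy values below are illustrative placeholders only.
    from scipy.stats import wilcoxon

    acc_rp  = [0.91, 0.93, 0.90, 0.92, 0.94, 0.91, 0.93, 0.92]  # RP runs
    acc_scg = [0.89, 0.90, 0.88, 0.91, 0.92, 0.90, 0.90, 0.89]  # SCG runs

    stat, p = wilcoxon(acc_rp, acc_scg)
    print(f"W={stat:.1f}, p={p:.4f}")  # small p: the paired difference is significant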
Introduction
Advances in spectral imaging sensors have enabled the use of hyperspectral images (HSIs) in remote sensing research (Li et al. 2019; Liu et al. 2019). HSIs typically consist of image layers with a large number of different spectral features. HSIs are widely used in pattern recognition applications such as image segmentation and object identification because of the detailed information they provide about the spectral properties of objects. Image clustering and classification are commonly used image segmentation tools. The classification of HSIs can be performed by using spatial, spectral, or spatial-spectral image features. K-means clustering (Filho et al. 2003), Fuzzy C-means clustering (Sigirci and Bilgin 2017), DBSCAN clustering (Datta, Ghosh, and Ghosh 2012), expectation-maximization clustering (Marden and Manolakis 2003), and agglomerative hierarchical clustering methods (Medina, Manian, and Chinea 2013) are commonly used unsupervised classification methods. In practice, the spatial locations of the training and test samples required for the classification process are determined using clustering methods (Guan, Yuen, and Coenen 2019; Nasiboglu, Tezel, and Nasibov 2019). Hyperspectral images contain a large number of bands. As the data dimension increases, the structure of classification techniques becomes more complex and their performance decreases. Dimension reduction techniques are used to overcome this problem.
Neighborhood component analysis (NCA) is one of the basic dimension reduction techniques (Goldberger et al. 2004).
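For illustration, a minimal Python sketch of NCA-based dimension reduction on labeled HSI pixels is given below; it uses scikit-learn's NeighborhoodComponentsAnalysis, and the array shapes, class count, and number of components are illustrative assumptions rather than the configuration used in this paper.

    # Supervised dimension reduction with NCA (Goldberger et al. 2004).
    # Dummy data stands in for labeled HSI pixels (rows = pixels, columns = bands).
    import numpy as np
    from sklearn.neighbors import NeighborhoodComponentsAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 200))      # 500 pixels x 200 spectral bands
    y = rng.integers(0, 5, size=500)     # 5 land-cover classes (dummy labels)

    nca = NeighborhoodComponentsAnalysis(n_components=20, random_state=0)
    X_reduced = nca.fit_transform(X, y)  # 500 x 20 discriminative features
    print(X_reduced.shape)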
The most commonly used tools for image classification are neural networks, support vector machines, and classic supervised classification methods (parallelepiped-based methods, decision trees, maximum-likelihood-based methods, etc.) (Liu et al. 2019; Xie et al. 2019). Unfortunately, in some cases classification methods can produce noisy classification results. For this reason, classical classification methods require preprocessing and postprocessing. Generally, the success of classification methods is highly sensitive to the homogeneity, complexity, size, and dimension of the data and to the characteristics of the method used. Image filtering techniques used to increase classifier success can cause loss of information (Atasever, Gunen, and Besdok 2018). Also, the image features that affect classification success may vary depending on the local image primitives. In addition, strong correlations between image layers cause computational difficulties that affect the success of classical classifiers.
Deep neural networks (DNNs) can extract high-quality features corresponding to the data by learning from large amounts of raw sensor data with the help of complex training algorithms (Liu and Wu 2016; Paoletti et al. 2018). The operation of DNNs differs from classical supervised neural networks, fuzzy systems, and expert systems, even if they have several structural similarities. DNNs do not require predefined rule sets. In contrast to supervised neural networks, they also do not use predesigned data features. In response to their costly computational complexity, they offer superior problem-learning success. Nowadays, it is an active research area to train complex DNNs on graphical processing units (GPUs) with many processors (Do et al. 2019; du Plessis and Broeckhoven 2019; Grekousis 2019; Xu et al. 2019; Yao, Lei, and Zhong 2019).
An AE is basically an unsupervised learning-based neural network. The corresponding input data and output data of AEs are identical. AEs contain two physical partitions called the encoder and the decoder. The encoder converts the relevant data into a coded-feature space (Lv, Peng, and Wang 2018; Makkie et al. 2019; Yu et al. 2018). Furthermore, the encoder changes the dimensionality, homogeneity, smoothness, and continuity levels of the respective data. The decoder is used to obtain the original data back from the data converted to the coded-feature space. The AE is actually used for data reconstruction and data smoothing, unlike the multi-layer perceptrons used in logistic-regression-based numerical prediction.
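For illustration, a minimal PyTorch sketch of this encoder/decoder structure is given below; the layer sizes, activation functions, plain gradient-descent optimizer, and training loop are illustrative assumptions, not the exact configuration analyzed in this paper.

    # A minimal AE: the encoder maps spectral vectors into a coded-feature
    # space and the decoder reconstructs the original input from the code.
    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, n_bands=200, n_code=20):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_bands, n_code), nn.Sigmoid())
            self.decoder = nn.Sequential(nn.Linear(n_code, n_bands), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = Autoencoder()
    optim = torch.optim.SGD(model.parameters(), lr=0.1)  # plain gradient descent
    loss_fn = nn.MSELoss()

    x = torch.rand(64, 200)        # a dummy batch of 64 spectral vectors
    for _ in range(100):           # target equals input: x -> code -> x_hat
        optim.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        optim.step()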
A DNN's success in learning to solve the problem is sensitive to the complexity of the problem, the size of the data
Mehmet Akif Günen, Umit Haluk Atasever, and Erkan Beşdok are with the Department of Geomatics Engineering, Engineering Faculty, Erciyes University, 38039, Kayseri, Turkey.
Photogrammetric Engineering & Remote Sensing
Vol. 86, No. 9, September 2020, pp. 581–588.
0099-1112/20/581–588
© 2020 American Society for Photogrammetry
and Remote Sensing
doi: 10.14358/PERS.86.9.581