
Wavelet Basis Neural Network (WBNN)
Wavelet Basis Neural Networks combine the advantages of wavelets and neural networks (Jin, 2008). A WBNN is an extension of a Wavelet Neural Network (WNN) that includes a scaling function as a neuron (Zhang, 1992; Zhang, 1995; Veitch, 2005; Jin, 2008). A WBNN is a feed-forward neural network with one hidden layer, in which the activation functions are drawn from an orthonormal wavelet family (Zhang, 1992; Zhang, 1995; Veitch, 2005; Jin, 2008).
Using a series of observed values, a WBNN can be trained to learn and hence compute a given output. Figure 6 shows the structure of a WBNN. There are three layers; the hidden layer contains neurons whose activation functions are drawn from a scaling function and a wavelet function (these neurons are usually referred to as wavelons). The output layer consists of one or more linear combiners. There are two main approaches to creating a WBNN. In the first, the input data (a vector) are decomposed using a scaling and a wavelet function, and the approximation and wavelet coefficients are then combined by a summer whose weights are obtained by a learning algorithm in the training phase. This approach is referred to as a "wavenet", in which the wavelet transform and the neural network are processed separately. The wavelet and scaling functions ψ and φ, obtained by computing the discrete wavelet transform (Burrus et al., 1997), can come from different decomposition levels (where L is the number of decomposition levels) and different shifts k.
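To make the wavenet idea concrete, the sketch below decomposes an input vector with a fixed discrete wavelet transform and feeds the resulting coefficients to a single linear combiner, so that only the combiner weights are learned. This is a minimal sketch, not the authors' implementation: the use of the PyWavelets library, the least-squares fit standing in for the iterative learning algorithm, the toy data, and all variable names are our assumptions.

```python
import numpy as np
import pywt  # PyWavelets (assumption: used here only to illustrate the wavenet idea)

def wavenet_features(x, wavelet="db2", level=2):
    """Fixed DWT front end: approximation + detail coefficients at all levels."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.concatenate(coeffs)

# Toy training set: inputs and target outputs (hypothetical data).
X = np.random.randn(200, 16)          # 200 samples, 16-point input vectors
y = np.sin(X).sum(axis=1)             # some target for the network to learn

# Decompose every sample with the fixed wavelet basis.
F = np.array([wavenet_features(row) for row in X])

# The linear combiner: only these weights are learned (here by least squares,
# standing in for the learning algorithm used in the training phase).
w, *_ = np.linalg.lstsq(F, y, rcond=None)

y_hat = F @ w                         # wavenet output for the training samples
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```

Because the wavelet basis is fixed, training the wavenet reduces to fitting the weights of a linear combiner over the transformed inputs.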
The activation function is based on wavelet theory, in which we can approximate a function as follows (Zhang, 1992; Zhang, 1995; Veitch, 2005; Jin, 2008):

$f(x) \approx \sum_{k} c_{L,k}\,\phi_{L,k}(x) + \sum_{k} d_{L,k}\,\psi_{L,k}(x)$ (9)

where $\psi_{L,k}(x) = 2^{L/2}\,\psi(2^{L}x - k)$ and $\phi_{L,k}(x) = 2^{L/2}\,\phi(2^{L}x - k)$.
In Equation 9, φ and ψ are the scale and wavelet functions at decomposition level L and shift parameter k.
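The dyadic dilation and shift in these definitions can be illustrated in a few lines of code. The sketch below evaluates $\phi_{L,k}$ and $\psi_{L,k}$ on a sampled approximation of the 'db2' scaling and wavelet functions; the use of PyWavelets' wavefun and the interpolation step are our assumptions, shown only to make the notation concrete.

```python
import numpy as np
import pywt

# Sampled approximations of the db2 scaling (phi) and wavelet (psi) functions.
phi, psi, grid = pywt.Wavelet("db2").wavefun(level=8)

def dilate_shift(func_samples, grid, L, k, x):
    """Evaluate 2^{L/2} f(2^L x - k) by interpolating the sampled function."""
    return 2 ** (L / 2) * np.interp(2 ** L * x - k, grid, func_samples,
                                    left=0.0, right=0.0)

x = np.linspace(-2, 4, 500)
phi_Lk = dilate_shift(phi, grid, L=2, k=3, x=x)   # phi_{L,k}(x)
psi_Lk = dilate_shift(psi, grid, L=2, k=3, x=x)   # psi_{L,k}(x)
```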
Therefore, for the network of Figure 6, the output of the wavenet can be written:

$y(x) = \sum_{j=1}^{N} v_j\,\phi_{L,k_j}(x) + \sum_{i=1}^{M} w_i\,\psi_{l,k_i}(x)$ (10)
In Equation 10, v and w are the weights for the scale and wavelet functions, and N and M are the numbers of translations for the scale and wavelet functions, respectively. In the second approach, the translation and scaling parameters of the scale and wavelet functions, as well as the combiner weights, are updated and modified by a learning algorithm: in the training mode, the scale and wavelet functions are computed in addition to the weights. This approach is referred to as a wavelet network. In the hidden layer, the activation functions are defined by the following equations (Jin, 2008):
$\phi_{a,b}(x) = \phi\!\left(\frac{x-a}{b}\right) \quad\text{and}\quad \psi_{c,d}(x) = \psi\!\left(\frac{x-c}{d}\right)$ (11)
in which a and b are the translation and scaling parameters, respectively, for the scaling function, and c and d are the translation and scaling parameters, respectively, for the wavelet function. Therefore:
$y(x) = \sum_i v_i\,\phi_{a_i,b_i}(x) + \sum_i w_i\,\psi_{c_i,d_i}(x)$ (12)
In the wavelet network approach, the parameters $v_i$, $w_i$, $a_i$, $b_i$, $c_i$, and $d_i$ are all adapted during the training mode and learning procedure. In the wavenet, however, $a_i$, $b_i$, $c_i$, and $d_i$ are fixed at initialization and are not changed by the learning procedure.
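The contrast between the two approaches can be sketched in code. Below is a minimal wavelet-network training loop in which the translation and scaling parameters (a, b, c, d) are adapted by gradient descent along with the combiner weights of Equation 12. The Gaussian scaling function, the Mexican-hat wavelet, the toy target, the learning rate, and all names are our assumptions, chosen so that the activations are differentiable; they are not the functions used in the paper.

```python
import torch

# Differentiable stand-ins for the scale and wavelet functions (assumptions):
phi = lambda t: torch.exp(-t ** 2 / 2)                 # Gaussian "scaling" function
psi = lambda t: (1 - t ** 2) * torch.exp(-t ** 2 / 2)  # Mexican-hat wavelet

M = 8                                    # number of wavelons per branch
a = torch.randn(M, requires_grad=True)   # translations, scaling-function branch
b = torch.ones(M, requires_grad=True)    # dilations,    scaling-function branch
c = torch.randn(M, requires_grad=True)   # translations, wavelet branch
d = torch.ones(M, requires_grad=True)    # dilations,    wavelet branch
v = torch.randn(M, requires_grad=True)   # combiner weights (Equation 12)
w = torch.randn(M, requires_grad=True)

def net(x):
    """Equation 12: y(x) = sum_i v_i phi((x-a_i)/b_i) + sum_i w_i psi((x-c_i)/d_i)."""
    x = x.unsqueeze(-1)                  # broadcast each input over all wavelons
    return (v * phi((x - a) / b) + w * psi((x - c) / d)).sum(-1)

x = torch.linspace(-3, 3, 128)
target = torch.sin(2 * x)                # toy target function

opt = torch.optim.SGD([a, b, c, d, v, w], lr=0.01)
for step in range(2000):
    opt.zero_grad()
    loss = ((net(x) - target) ** 2).mean()
    loss.backward()                      # gradients also flow into a, b, c, d
    opt.step()
```

To obtain the wavenet variant instead, a, b, c, and d would simply be omitted from the optimizer's parameter list and left at their initial values.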
Just as with the BPNN, we used 60 percent of the reference data samples from each AOI (the number of reference data samples for each AOI is given in the previous section) for training and 40 percent for validation and testing. In this work, a WBNN with a Daubechies mother wavelet ('db2') and two decomposition levels is examined to estimate conductivity.
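A minimal sketch of this data split, assuming one AOI's reference samples and targets are held in NumPy arrays; the shuffling, the per-AOI 60/40 proportions, and the variable names are our assumptions.

```python
import numpy as np

def split_aoi(samples, targets, train_frac=0.60, seed=0):
    """Shuffle one AOI's reference data and split it 60/40 into train and val/test."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(samples))
    n_train = int(train_frac * len(samples))
    train_idx, rest_idx = order[:n_train], order[n_train:]
    return (samples[train_idx], targets[train_idx],
            samples[rest_idx], targets[rest_idx])
```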
Results and Discussion
We estimated EC using SAR data with the algorithms explained in the previous section. We tested the algorithms on the four different vegetation areas described in the data section. The extracted features were grouped into three combinations for analysis to estimate conductivity (a feature-assembly sketch follows the list):
• Scenario 1: only the SAR backscatter coefficients of HH and VV, along with the local incidence angle (θ);
• Scenario 2: the Scenario 1 features plus the mean of HH and VV over a sliding window (3 × 3 pixels), as used in Shi (1997);
• Scenario 3: the Scenario 2 features plus additional statistical and textural features: the mean and standard deviation of the backscatter coefficients in a 5 × 5 sliding window, and wavelet features for a 7 × 7 sliding window.
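The sketch below assembles per-pixel feature vectors for the three scenarios, assuming hh, vv, and theta are 2-D arrays of the same shape. The use of SciPy's uniform_filter for the window means, the windowed standard deviation, and the placeholder for the wavelet features are our assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def window_mean(img, size):
    """Mean of each pixel's size x size sliding window."""
    return uniform_filter(img, size=size)

def window_std(img, size):
    """Std of each sliding window via std = sqrt(E[x^2] - E[x]^2)."""
    m = uniform_filter(img, size=size)
    m2 = uniform_filter(img ** 2, size=size)
    return np.sqrt(np.maximum(m2 - m ** 2, 0.0))

def scenario_features(hh, vv, theta, scenario):
    feats = [hh, vv, theta]                                 # Scenario 1
    if scenario >= 2:
        feats += [window_mean(hh, 3), window_mean(vv, 3)]   # Scenario 2
    if scenario >= 3:
        feats += [window_mean(hh, 5), window_std(hh, 5),    # Scenario 3
                  window_mean(vv, 5), window_std(vv, 5)]
        # wavelet features over a 7 x 7 window would be appended here as well
    return np.stack([f.ravel() for f in feats], axis=1)    # (n_pixels, n_features)
```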
These combinations allow evaluation of the potential for
different types of features to estimate the conductivity over
the four different vegetation areas as shown in Figure 2.
Figure 7 shows the four different vegetation areas of

[Figure 6. Structure of a Wavelet Basis Neural Network (WBNN). HH, VV: backscatter coefficients; θ: local incidence angle; W, V: neural network weights; φ, ψ: scale and wavelet functions.]