Wavelet Basis Neural Network (WBNN)

Wavelet Basis Neural Networks combine the advantages of wavelets and neural networks (Jin, 2008). A WBNN is an extension of a Wavelet Neural Network (WNN) that includes a scaling function as a neuron (Zhang, 1992; Zhang, 1995; Veitch, 2005; Jin, 2008). A WBNN is considered a feed-forward neural network with one hidden layer, in which the activation functions are drawn from an orthonormal wavelet family (Zhang, 1992; Zhang, 1995; Veitch, 2005; Jin, 2008).

Using a series of observed values, a WBNN can be trained to learn and hence compute a given output. Figure 6 shows the structure of a WBNN. There are three layers; the hidden layer contains the neurons whose activation functions are drawn from a scaling function and a wavelet function (these function neurons are usually referred to as wavelons). The output layer consists of one or more linear combiners. There are two main approaches to creating a WBNN. In the first, the input data (a vector) are decomposed based on a scaling and a wavelet function, and the approximation and wavelet coefficients are then combined by a combiner (a summer) whose weights are obtained by a learning algorithm in the training phase. This approach is referred to as a "wavenet", in which the wavelet and neural network stages are processed separately. The wavelet functions Ψ and Φ, obtained by computing the discrete wavelet transform (Burrus et al., 1997), can come from different decomposition levels j = 1, …, J (where J is the number of decomposition levels) and different shifts k.
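The wavenet approach above can be sketched in a few lines: a fixed wavelet stage decomposes each input vector, and only the linear combiner is trained. This is a minimal illustration, assuming the PyWavelets package is available; the data, the one-shot least-squares fit (a stand-in for the iterative learning algorithm), and all names are illustrative.

```python
# Minimal wavenet sketch: fixed DWT decomposition + trainable linear combiner.
# PyWavelets ('pywt') is assumed available; all data are synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Toy inputs (length-16 feature vectors) and one scalar target per input.
X = rng.standard_normal((200, 16))
y = X @ rng.standard_normal(16)          # arbitrary linear ground truth

def dwt_features(v):
    # Fixed wavelet stage: 2-level DWT with the Daubechies 'db2' wavelet,
    # concatenating approximation and detail coefficients into one vector.
    return np.concatenate(pywt.wavedec(v, 'db2', level=2))

F = np.stack([dwt_features(v) for v in X])

# Trainable stage: the linear combiner, fitted here by least squares.
weights, *_ = np.linalg.lstsq(F, y, rcond=None)
pred = F @ weights
```

Because the DWT stage is linear and fixed, only the combiner weights carry the learning; this is what distinguishes the wavenet from the wavelet network discussed later, where the wavelet parameters themselves are also adapted.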

The activation function is based on wavelet theory, in which we can approximate a function as follows (Zhang, 1992; Zhang, 1995; Veitch, 2005; Jin, 2008):

f(x) \approx \sum_{k} c_{j,k}\,\phi_{j,k}(x) + \sum_{k} d_{j,k}\,\psi_{j,k}(x),   (9)

where

\phi_{j,k}(x) = 2^{j/2}\,\phi(2^{j} x - k)  and  \psi_{j,k}(x) = 2^{j/2}\,\psi(2^{j} x - k).

In Equation 9, \phi_{j,k} and \psi_{j,k} are the scaling and wavelet functions at decomposition level j and shift parameter k.
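The dyadic form 2^{j/2} f(2^j x − k) can be verified numerically. The sketch below uses the Haar pair as a simple stand-in for the orthonormal wavelet family (the source does not fix a particular family here); all names are illustrative.

```python
# Numerical check of the dyadic dilation/translation form used in Equation 9,
# using the Haar scaling/wavelet pair as a stand-in orthonormal family.
import numpy as np

def phi(x):                      # Haar scaling function: 1 on [0, 1)
    return np.where((x >= 0) & (x < 1), 1.0, 0.0)

def psi(x):                      # Haar wavelet: +1 on [0, 0.5), -1 on [0.5, 1)
    return np.where((x >= 0) & (x < 0.5), 1.0,
                    np.where((x >= 0.5) & (x < 1), -1.0, 0.0))

def dilate(f, j, k):
    # f_{j,k}(x) = 2^{j/2} f(2^j x - k), the form in Equation 9
    return lambda x: 2 ** (j / 2) * f(2 ** j * x - k)

x = np.linspace(0, 1, 200_000, endpoint=False)
dx = x[1] - x[0]
ip = lambda f, g: np.sum(f(x) * g(x)) * dx   # inner product on [0, 1)

# Each dilated/translated atom has unit norm...
print(round(ip(dilate(phi, 2, 1), dilate(phi, 2, 1)), 3))   # 1.0
# ...and scaling/wavelet atoms at the same level and shift are orthogonal.
print(round(ip(dilate(phi, 2, 1), dilate(psi, 2, 1)), 3))   # 0.0
```

The 2^{j/2} factor is exactly what keeps the norm equal to one under dilation, which is why the family can serve as an orthonormal basis of activation functions.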

Therefore, for the network of Figure 6, the output of the wavenet can be written:

y(x) = \sum_{k=1}^{K} v_{k}\,\phi_{j,k}(x) + \sum_{i=1}^{I} w_{i}\,\psi_{j,i}(x).   (10)

In Equation 10, v_k and w_i are the weights for the scaling and wavelet functions, and K and I are the numbers of translations for the scaling and wavelet functions, respectively. In the second approach, the translation and scaling parameters of the scaling and wavelet functions, as well as the combiner weights, are updated in a learning algorithm: in the training mode, in addition to computing the weights, the scaling and wavelet functions themselves are adapted. This approach is referred to as a wavelet network. In the hidden layer, the activation function is defined by the following equations (Jin, 2008):

\phi_{a,b}(x) = \phi\!\left(\frac{x - b}{a}\right)  and  \psi_{c,d}(x) = \psi\!\left(\frac{x - d}{c}\right),   (11)

in which b and a are the translation and scaling parameters, respectively, for the scaling function, and d and c are the translation and scaling parameters, respectively, for the wavelet function; therefore:

y(x) = \sum_{k=1}^{K} v_{k}\,\phi_{a_k,b_k}(x) + \sum_{i=1}^{I} w_{i}\,\psi_{c_i,d_i}(x).   (12)

In the wavelet network approach, the parameters a_k, b_k, c_i, d_i, v_k, and w_i are all adapted during the training and learning procedure. In the wavenet, however, a_k, b_k, c_i, and d_i are fixed at initialization and are not changed by the learning procedure.
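The wavelet-network training loop, in which the translations, dilations, and combiner weights are all updated, can be sketched as below. This is a minimal illustration only: the Gaussian-derivative mother wavelet psi(u) = −u·exp(−u²/2) and the plain gradient-descent update rule are assumptions, since the source does not specify either, and all data are synthetic.

```python
# Sketch of wavelet-network training: translations b, dilations a, and
# combiner weights w of each wavelon are updated by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: learn y = sin(3x) on [-2, 2].
x = np.linspace(-2, 2, 200)
y = np.sin(3 * x)

m = 8                                   # number of wavelons
a = np.ones(m)                          # dilation (scale) parameters
b = np.linspace(-2, 2, m)               # translation parameters
w = 0.1 * rng.standard_normal(m)        # combiner weights

def psi(u):                             # assumed Gaussian-derivative wavelet
    return -u * np.exp(-u**2 / 2)

def dpsi(u):                            # its derivative with respect to u
    return (u**2 - 1) * np.exp(-u**2 / 2)

lr, losses = 0.02, []
for _ in range(3000):
    U = (x[:, None] - b) / a            # (n, m) wavelon inputs (x - b) / a
    P = psi(U)                          # wavelon outputs
    e = P @ w - y                       # prediction error
    losses.append(np.mean(e**2))
    # Gradients of the mean-squared error with respect to w, b, and a.
    gw = 2 * P.T @ e / len(x)
    common = e[:, None] * w * dpsi(U)
    gb = 2 * np.sum(common * (-1 / a), axis=0) / len(x)
    ga = 2 * np.sum(common * (-U / a), axis=0) / len(x)
    w -= lr * gw
    b -= lr * gb
    a -= lr * ga

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Freezing the updates of a and b (keeping only gw) would turn this same loop into the wavenet variant, which makes the contrast between the two approaches concrete.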

Just as with the BPNN, we used 60 percent of the reference data samples from each AOI (the number of reference data samples for each AOI is given in the previous section) for training and 40 percent for validation and testing. In this work, a WBNN with the Daubechies mother wavelet ('db2') and two decomposition levels is examined to estimate conductivity.
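The per-AOI 60/40 split can be sketched as follows. The AOI names and sample counts here are illustrative placeholders, not values from the study.

```python
# Minimal sketch of a 60/40 train/validation-test split applied per AOI.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical AOIs with illustrative reference-sample counts.
aois = {'AOI_1': rng.standard_normal((50, 3)),
        'AOI_2': rng.standard_normal((80, 3))}

splits = {}
for name, samples in aois.items():
    idx = rng.permutation(len(samples))          # shuffle sample indices
    n_train = int(round(0.6 * len(samples)))     # 60 percent for training
    splits[name] = {'train': samples[idx[:n_train]],
                    'test': samples[idx[n_train:]]}   # remaining 40 percent

print({k: (v['train'].shape[0], v['test'].shape[0]) for k, v in splits.items()})
# {'AOI_1': (30, 20), 'AOI_2': (48, 32)}
```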

Results and Discussion

We estimated EC using SAR data based on the algorithms explained in the previous section. We tested the algorithms on the four different vegetation areas described in the data section. The extracted features were grouped into three combinations for analysis to estimate conductivity:

• Scenario 1: using only the SAR backscatter coefficients of HH and VV along with the local incidence angle;
• Scenario 2: including the mean of HH and VV over a sliding window (3 × 3 pixels), as used in Shi (1997), in addition to the Scenario 1 features;
• Scenario 3: including more statistical and textural features: the mean and standard deviation of the backscatter coefficients in a 5 × 5 sliding window, and wavelet features for a 7 × 7 sliding window, in addition to the Scenario 2 features.
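The sliding-window statistics used in Scenarios 2 and 3 can be computed efficiently with separable mean filters, as sketched below. The backscatter image is synthetic, and scipy.ndimage is assumed available; the wavelet features of Scenario 3 are omitted here for brevity.

```python
# Sketch of the sliding-window features: local mean (3x3 and 5x5) and local
# standard deviation (5x5) of a backscatter band. Data are synthetic.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
hh = rng.standard_normal((64, 64))      # stand-in for an HH backscatter band

mean3 = uniform_filter(hh, size=3)      # Scenario 2: 3x3 sliding-window mean
mean5 = uniform_filter(hh, size=5)      # Scenario 3: 5x5 sliding-window mean
# Local variance via E[x^2] - E[x]^2, clipped to suppress tiny negatives
# caused by floating-point round-off.
var5 = np.clip(uniform_filter(hh**2, size=5) - mean5**2, 0, None)
std5 = np.sqrt(var5)                    # Scenario 3: 5x5 standard deviation

# Stack the per-pixel features into one array: raw band + window statistics.
features = np.stack([hh, mean3, mean5, std5], axis=-1)
print(features.shape)                   # (64, 64, 4)
```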

These combinations allow evaluation of the potential for different types of features to estimate the conductivity over the four different vegetation areas as shown in Figure 2.

Figure 7 shows the four different vegetation areas of

Figure 6. Structure of a Wavelet Basis Neural Network (WBNN). HH, VV: backscatter coefficients; local incidence angle; W, V: weights for the neural network; Φ, Ψ: scaling and wavelet functions.

PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING