PE&RS October 2015 - page 802

inadequate to discriminate riparian species in images acquired during the summer.
Pixel-based Classification Using Spectral-Texture Images
Inclusion of the texture measures did not improve the classification. Although different texture-window sizes were used, cottonwood was heavily overestimated in these classification results, with the highest user's accuracy below 35 percent.
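User's accuracy, the metric cited here, can be read directly off a confusion matrix as the fraction of pixels mapped to a class that truly belong to it. A minimal sketch with invented counts (not the values from the paper's tables) shows how a class that attracts many commission errors ends up with a low user's accuracy:

```python
import numpy as np

# Illustrative confusion matrix (rows = reference, columns = classified);
# the counts are made up for demonstration, not the paper's Table values.
cm = np.array([[30, 15,  5],
               [20, 25, 10],
               [40, 10, 45]])

# User's accuracy for class j = correctly classified / all pixels mapped to j,
# i.e. the diagonal divided by the column sums.
users_acc = np.diag(cm) / cm.sum(axis=0)
print(users_acc)  # first class is heavily over-mapped, so its value is low
```

An over-mapped class inflates its column total without adding diagonal counts, which is exactly the pattern described for cottonwood here.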
At window sizes of less than 17 pixels, saltcedar was most likely more separable from Sophora because of their textural differences. Especially for small saltcedar individuals within large Sophora patches, the distinction in texture could be captured only with small texture windows (Plate 2d, 2e, and 2f). Table 5 shows that more saltcedar pixels were correctly classified with the spectral-texture data, rather than being misclassified as Sophora as in the multispectral case (Table 3). In contrast,
for window sizes greater than 17 pixels, the texture measures seemed to reduce the commission error of cottonwood. However, after reviewing the texture images of the various window sizes (not shown), we found that the higher user's accuracy of cottonwood obtained with the larger windows was caused only by the smoothing effect, not by capturing the texture differences between cottonwood and saltcedar or Sophora. In this case, texture measures of saltcedar or Sophora with low canopy coverage were averaged with those of the other pixels within the same window. Consequently, these saltcedar or Sophora pixels were “correctly” classified with the smoothed texture obtained from the large texture windows (Plates 2a, 2b, and 2c). At the same time, however, small, isolated saltcedar individuals were affected by the same smoothing effect and were misclassified as the surrounding Sophora (Plates 2d, 2e, and 2f). Therefore, in pixel-based classification using spectral-texture images, the optimal texture-window size should differ according to the focal species: smaller windows for saltcedar and larger windows for cottonwood.
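The window-size trade-off described above can be illustrated with a simple moving-window texture measure. The sketch below uses local standard deviation as a stand-in for the paper's texture measures, on a synthetic scene (all values invented): a small, high-variance patch inside a smooth background stays distinct under a small window but is averaged away under a large one.

```python
import numpy as np

def local_std(img, win):
    """Per-pixel standard deviation in a win x win moving window,
    a simple stand-in for a GLCM-style texture measure."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].std()
    return out

# Synthetic scene: a smooth background (low texture) with one small,
# high-variance 4 x 4 patch; the species names are illustrative only.
rng = np.random.default_rng(0)
scene = rng.normal(100.0, 1.0, (64, 64))            # smooth "Sophora" canopy
scene[30:34, 30:34] += rng.normal(0.0, 25.0, (4, 4))  # rough "saltcedar" patch

small = local_std(scene, 5)    # small window: the rough patch stands out
large = local_std(scene, 21)   # large window: the patch is smoothed away

centre_small = small[31, 31]
centre_large = large[31, 31]
print(round(centre_small, 1), round(centre_large, 1))
```

The large-window value at the patch centre is pulled toward the background texture by the many smooth pixels inside the window, which is the smoothing effect the paragraph above attributes to the larger windows.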
Semi-Object-based Classification
To identify cottonwood, the semi-object-based method included information on both shape (cottonwood crowns in the segmented image) and shadow (the spatial relationship between cottonwood trees and their shadows) and was able to reduce the rate of misclassification between saltcedar and cottonwood significantly. We therefore believe that the proposed method is a successful machine-based, cost-efficient alternative to visual interpretation (Nagler et al., 2005) for mapping saltcedar and cottonwood. However, several problems with this method still need to be addressed.
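The shape-plus-shadow rule can be sketched as a simple adjacency test on a labeled segmentation: a segment qualifies as a cottonwood candidate only if it borders a detected shadow. This is an illustrative reconstruction under assumed inputs, not the paper's implementation; the toy arrays and the `touches_shadow` helper are invented.

```python
import numpy as np

def touches_shadow(labels, shadow, label_id):
    """True if any pixel of segment `label_id` is 4-adjacent to (or on)
    a shadow pixel -- a rough sketch of the shadow-neighborhood test."""
    seg = labels == label_id
    # Shift the shadow mask one pixel in each cardinal direction.
    near = np.zeros_like(shadow)
    near[1:, :] |= shadow[:-1, :]
    near[:-1, :] |= shadow[1:, :]
    near[:, 1:] |= shadow[:, :-1]
    near[:, :-1] |= shadow[:, 1:]
    near |= shadow
    return bool((seg & near).any())

# Toy labeled segmentation: segment 1 sits next to a shadow, segment 2 does not.
labels = np.zeros((8, 8), dtype=int)
labels[1:3, 1:3] = 1
labels[5:7, 5:7] = 2
shadow = np.zeros((8, 8), dtype=bool)
shadow[3, 1:3] = True   # shadow cast just south of segment 1

candidates = [k for k in (1, 2) if touches_shadow(labels, shadow, k)]
print(candidates)
```

The same test also makes the failure mode below concrete: a crown whose shadow goes undetected never enters the candidate list.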
First, a more accurate method of shadow detection is needed, since shadow detection is a critical step in locating cottonwood. Although the accuracy of shadow detection was the highest among all classes in the pixel-based results, some shadows still went undetected. As a result, cottonwood trees neighboring these missed shadows were left out of the candidate objects (Figure 7). Second, better methods are needed to exclude non-cottonwood pixels within the candidate objects. Although the pixel-based classification result used in this study to differentiate the non-cottonwood pixels was the best among all results, its overall accuracy and its user's accuracy for cottonwood were only 68.3 percent and 34.2 percent, respectively. Finally, many cottonwood trees in the two dense stands along the sixth and seventh transects (Figure 8) were not detected in our study. The touching, closed canopies and the lack of shadows in those two stands made extraction of tree crowns by image segmentation more difficult. Consequently, most of the cottonwood in these two patches was not correctly classified, leading to a lowered producer's accuracy for cottonwood.
Plate 1. Cottonwood overestimation in sparse saltcedar stands: (a) original multispectral image in false color composite, (b) MLC classification result, and (c) SVM classification result.