PE&RS July 2016 Public - page 574

Orthophoto Quarter Quadrangle (DOQQ) data. This orthoimagery was generated from color infrared (CIR) aerial photographs captured in May 2002, scanned into digital form (green, red, near infrared bands), and orthorectified using a digital elevation model (DEM). Image data from the Natural Resource Council are true color (blue, green, red bands) orthoimages, generated by scanning color aerial photographs captured in September 2005 to 1 m spatial resolution. These orthoimages cover the
coniferous forest areas of San Diego County. The 2007 image data set covers only the Palomar study area and consists of a four-band (blue, green, red, and NIR) airborne digital UltraCam image, captured for the County of San Diego Office of Emergency Services on 02 November 2007. This imagery was tested because it is the only four-band data set available for the area. The 2002 and 2005 image data sets were subset to conform to the spatial extents of the three study areas. Within each montane area, specific study areas were subset to include topographic variation and forest stands of varying sizes.
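Subsetting an orthoimage to a study-area extent reduces to converting the extent's map coordinates into row/column offsets and slicing. A minimal NumPy sketch, assuming a north-up raster with a known upper-left origin and square pixels; the function name and the example coordinates are illustrative, not values from the study:

```python
import numpy as np

def subset_to_extent(image, origin_x, origin_y, pixel_size, extent):
    """Clip a north-up raster array (bands, rows, cols) to a bounding box.

    extent = (min_x, min_y, max_x, max_y) in the raster's map units.
    (origin_x, origin_y) is the map coordinate of the image's upper-left corner.
    """
    min_x, min_y, max_x, max_y = extent
    col0 = int((min_x - origin_x) / pixel_size)
    col1 = int(np.ceil((max_x - origin_x) / pixel_size))
    row0 = int((origin_y - max_y) / pixel_size)   # rows increase southward
    row1 = int(np.ceil((origin_y - min_y) / pixel_size))
    return image[:, row0:row1, col0:col1]

# e.g. clip a 1 m, 3-band image with upper-left (480000, 3690000)
# to a 500 m x 500 m study square
img = np.zeros((3, 2000, 2000))
sub = subset_to_extent(img, 480000, 3690000, 1.0,
                       (480100, 3689400, 480600, 3689900))
```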
Calibration polygons (i.e., training sites) used to establish
signature templates for image classification were manually
delineated utilizing a systematic aligned approach. A 10 × 10
grid was superimposed over the image, with a point marking
the center of each grid cell. From this center point, an analyst
determined the closest dead tree object, based on image color,
shape, and context, as well as examples of the other classes
(“live conifer forest” and “non-forest”). Polygons delineating tree objects were manually digitized. In the event
that there were no dead tree objects (or other classes) in a
given grid cell, the analyst would move on to the next grid
cell. In this manner a sufficient sample of dead tree objects
was collected. For imagery from 2002, dead tree objects
tended to be sparsely scattered and fairly rare, such that a random point generation approach would have missed this class, warranting a more guided sampling strategy. This sampling strategy yielded between 80 and 100 usable samples for each date (the earliest date having the fewest dead tree objects). Pilot tests had shown that 30 calibration polygon samples were optimal, leaving the remainder for validation.
The training data were then randomly sorted and partitioned
into calibration (30 polygons for each class) and validation
sets (50 polygons for each class).
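The systematic aligned sampling grid and the random calibration/validation partition described above can be sketched as follows. The function names are illustrative, and the scene dimensions are hypothetical; the 10 × 10 grid, the 30-polygon calibration set, and the validation remainder follow the text:

```python
import random

def grid_centers(n_rows, n_cols, width, height):
    """Center points of an n_rows x n_cols grid laid over a width x height scene."""
    dx, dy = width / n_cols, height / n_rows
    return [((c + 0.5) * dx, (r + 0.5) * dy)
            for r in range(n_rows) for c in range(n_cols)]

def partition(samples, n_cal=30, seed=0):
    """Randomly sort samples; take n_cal for calibration, the rest for validation."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    return shuffled[:n_cal], shuffled[n_cal:]

centers = grid_centers(10, 10, 1000, 1000)   # 10 x 10 grid over a 1 km scene
cal, val = partition(list(range(80)))        # e.g. 80 dead-tree polygons
```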
The same calibration and validation sets were utilized for
analyses of both software approaches. For both calibration
and validation objects, the minimum mapping unit was estab-
lished as the area of a tree canopy, which was approximately
equivalent to 9 to 16 contiguous 1 m pixels. For the calcula-
tion of omission error, validation polygons were also collect-
ed for “non-dead tree” objects, such as live trees, senescent
grasses, and roads, utilizing the same method.
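The minimum mapping unit rule (roughly 9 to 16 contiguous 1 m pixels per canopy) amounts to discarding connected pixel groups below a size threshold. A pure-NumPy sketch using 4-connectivity; the text does not state a connectivity rule, so that choice is an assumption:

```python
import numpy as np

def apply_mmu(mask, min_pixels=9):
    """Remove 4-connected groups of classified pixels smaller than the
    minimum mapping unit (~9 contiguous 1 m pixels, about one tree canopy)."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask)
    out = np.zeros_like(mask)
    rows, cols = mask.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0, c0] and not visited[r0, c0]:
                # flood-fill one connected component
                stack, comp = [(r0, c0)], []
                visited[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    comp.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and mask[rr, cc] and not visited[rr, cc]):
                            visited[rr, cc] = True
                            stack.append((rr, cc))
                if len(comp) >= min_pixels:   # keep only canopy-sized objects
                    for r, c in comp:
                        out[r, c] = True
    return out

mask = np.zeros((10, 10), dtype=bool)
mask[1:4, 1:4] = True     # 9-pixel canopy: kept
mask[6:8, 6:8] = True     # 4-pixel speckle: removed
cleaned = apply_mmu(mask)
```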
With the object-based approach, the model parameters
(segmentation related scale factor, shape versus color
weighting, and compactness versus smoothness weighting)
were selected using a trial and error procedure, testing each
factor in turn and using visual inspection to compare each
test segmentation product to training objects to guide the
selection of the optimal set of parameters. Ten training objects
of dead tree crowns were created by heads-up digitizing of
object boundaries for each scene of imagery (after Lippitt et al., 2012). Validation was conducted by visually assessing the segmented objects and qualitatively determining the set of factors that, in combination, optimized the representation of dead tree objects. Image inputs into the eCognition classification
included the mean and standard deviation of spectral bands
for the objects. Input data transformations were also tested
to determine which were optimal in terms of final product
accuracy. Input data included spectral bands alone, the Normalized Difference Green Red index (NDGR; Gitelson et al., 2002), and Component 1 from a Principal Component Analysis transformation (PCA; Zhao and Maclean, 2000; Small, 2001). Component 1, the PC with the greatest variance explanation across all wavebands of the original data set (Small, 2001), was selected as an input because prior testing found it to be effective. A
combination of the above feature types (henceforth called the
Combination approach) was also used. Note that the Combi-
nation approach for each imagery type included only those
data transformations appropriate for the bands present in
each orthoimage data set. Also included were the Normalized Difference Vegetation Index (NDVI; Rouse et al., 1973), the Soil Adjusted Vegetation Index (SAVI; Huete, 1989), and the Simple Ratio (SR; Jordan, 1969), which were applied to the 2002 and 2007 images, the two data sets with a NIR band. The Visible Atmospherically Resistant Index (VARI; Gitelson et al., 2002) was applied to the 2005 true color image (see Table 2). The
addition of spectral transform images has been demonstrated
to improve classification accuracies (Kerr and Ostrovsky,
2003; Lippitt et al., 2012). The classification of dead trees was
conducted using a nearest neighbor (Euclidean distance) clas-
sifier for each input data transformation.
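The object features (per-band mean and standard deviation) and the Euclidean nearest neighbor assignment can be sketched as below. The signature values in the template dictionary are synthetic placeholders, not measured class signatures:

```python
import numpy as np

def object_features(image, object_pixels):
    """Per-object mean and standard deviation of each band.

    image: array of shape (bands, rows, cols).
    object_pixels: (row_indices, col_indices) of the object's pixels.
    """
    rows, cols = object_pixels
    vals = image[:, rows, cols]                         # (bands, n_pixels)
    return np.concatenate([vals.mean(axis=1), vals.std(axis=1)])

def nearest_neighbor(feature, templates):
    """Assign the class of the closest (Euclidean distance) signature template."""
    best, best_d = None, np.inf
    for cls, sig in templates.items():
        d = np.linalg.norm(feature - sig)
        if d < best_d:
            best, best_d = cls, d
    return best

templates = {                      # illustrative signatures, not real values
    "dead tree": np.array([0.45, 0.30, 0.20, 0.05, 0.04, 0.03]),
    "live conifer forest": np.array([0.20, 0.35, 0.60, 0.05, 0.04, 0.03]),
}

img = np.ones((3, 4, 4))
feat = object_features(img, (np.array([0, 1]), np.array([0, 1])))
pred = nearest_neighbor(np.array([0.44, 0.31, 0.21, 0.05, 0.04, 0.03]),
                        templates)
```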
With the spatial contextual classifier, three image classification strategies were tested to determine which yielded the most accurate classification of dead trees. For the first strategy, a masking approach was tested: non-forest areas were mapped first utilizing on-screen digitizing and then applied as a mask during the classification of the single dead conifer tree class, in order to reduce error. For the second strategy, products derived from the same transforms utilized with the object-based approach were tested. For the third strategy, a single vector dead conifer class, the most straightforward classification of the spatial contextual approach, was created and compared with the enhanced products.
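The masking strategy reduces to excluding the digitized non-forest pixels from the dead-conifer product. A minimal sketch, assuming 0 denotes the unclassified value (the function name and class codes are illustrative):

```python
import numpy as np

def classify_with_mask(class_map, non_forest_mask):
    """Exclude digitized non-forest areas from a dead-conifer classification,
    reducing commission error outside the forested area."""
    out = class_map.copy()
    out[non_forest_mask] = 0        # 0 = unclassified / masked
    return out

class_map = np.ones((4, 4), dtype=int)     # 1 = dead conifer (toy map)
non_forest = np.zeros((4, 4), dtype=bool)
non_forest[:2, :2] = True                  # digitized road/grass patch
masked = classify_with_mask(class_map, non_forest)
```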
An object-based accuracy assessment for the dead tree class
products was conducted using a visually-based comparison
Table 1. Image Data Sets Utilized for Dead Tree Object Mapping and Associated Metadata

Date Flown       Sensor            Spectral Bands           Resolution     Source
May 2002         Aerial camera     3-band CIR (NIR, R, G)   1 m            U.S.G.S. Digital Orthophoto Quarter Quadrangle
September 2005   Aerial camera     3-band RGB               1 ft (0.3 m)   U.S.D.A. Natural Resource Council
November 2007    Digital UltraCam  4-band (NIR, R, G, B)    1 ft (0.3 m)   County of San Diego Office of Emergency Services
Table 2. Spectral Vegetation Index Transforms

Vegetation Index                                 Formula                               Reference
Simple Ratio (SR)                                NIR/Red                               Jordan, 1969
Normalized Difference Vegetation Index (NDVI)    (NIR - Red)/(NIR + Red)               Rouse et al., 1973
Normalized Difference Green-Red Index (NDGR)     (Green - Red)/(Green + Red)           Gitelson et al., 2002
Soil Adjusted Vegetation Index (SAVI)            (NIR - Red)(1 + L)/(NIR + Red + L)*   Huete, 1989
Visible Atmospherically Resistant Index (VARI)   (Green - Red)/(Green + Red - Blue)    Gitelson et al., 2002

* L is the soil adjustment factor; the constant L = 0.5 was utilized. NIR = near infrared.
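The transforms in Table 2 are simple band arithmetic and translate directly to NumPy (per pixel or over whole band arrays, since the operations broadcast). The Simple Ratio here follows Jordan's (1969) NIR/Red convention:

```python
import numpy as np

def simple_ratio(nir, red):
    return nir / red

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def ndgr(green, red):
    return (green - red) / (green + red)

def savi(nir, red, L=0.5):
    # L is the soil adjustment factor; L = 0.5 as used in the study
    return (nir - red) * (1 + L) / (nir + red + L)

def vari(green, red, blue):
    return (green - red) / (green + red - blue)

# e.g. a healthy-vegetation pixel: high NIR reflectance, low red
nir, red, green, blue = 0.5, 0.1, 0.2, 0.05
```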