
Northern Conifer Forest Species Classification
Using Multispectral Data Acquired from an
Unmanned Aerial Vehicle
Steven E. Franklin, Oumer S. Ahmed, and Griffin Williams
Abstract
Object-based image analysis and machine learning classification procedures, after field calibration and photogrammetric processing of consumer-grade unmanned aerial vehicle (UAV) digital camera data, were implemented to classify tree species in a conifer forest in the Great Lakes/St Lawrence Lowlands Ecoregion, Ontario, Canada. A red-green-blue (RGB) digital camera yielded approximately 72 percent classification accuracy for three commercial tree species and one conifer shrub. Accuracy improved approximately 15 percent, to 87 percent overall, with higher radiometric quality data acquired separately using a digital camera that included near-infrared observations (at a lower spatial resolution). Interpretation of the point cloud, spectral, texture, and object (tree crown) classification Variable Importance (VI) selected by a machine learning algorithm suggested a good correspondence with the traditional aerial photointerpretation cues used in the development of well-established large-scale photography northern conifer elimination keys, which use three-dimensional crown shape, spectral response (tone), texture derivatives to quantify branching characteristics, and crown size, development, and outline features. These results suggest that commonly available consumer-grade UAV-based digital cameras can be used with object-based image analysis to obtain acceptable conifer species classification accuracy to support operational forest inventory applications.
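The Variable Importance scores mentioned above are typically produced by an ensemble classifier such as a random forest, by averaging how much each feature reduces class impurity across many bootstrap-trained trees. As a minimal illustration of that mean-decrease-in-impurity idea (the feature names, values, and species codes below are invented for the example, not taken from the study), a pure-Python sketch using single-split "stumps" might look like:

```python
import random
from collections import Counter

def gini(labels):
    """Gini impurity of a label list: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split_gain(xs, ys):
    """Largest impurity decrease for one feature over candidate thresholds."""
    parent = gini(ys)
    best = 0.0
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        w = len(left) / len(ys)
        best = max(best, parent - (w * gini(left) + (1 - w) * gini(right)))
    return best

def variable_importance(rows, ys, n_trees=50, seed=0):
    """Bootstrap-averaged impurity decrease per feature, normalized to sum 1."""
    rng = random.Random(seed)
    n_feat = len(rows[0])
    vi = [0.0] * n_feat
    for _ in range(n_trees):
        idx = [rng.randrange(len(rows)) for _ in rows]   # bootstrap sample
        bx, by = [rows[i] for i in idx], [ys[i] for i in idx]
        for f in range(n_feat):
            vi[f] += best_split_gain([r[f] for r in bx], by)
    total = sum(vi) or 1.0
    return [v / total for v in vi]

# Toy crown objects: (tone, texture, height); tone and texture separate the
# two invented species codes cleanly, height overlaps between them.
rows = [(0.20, 0.80, 15), (0.25, 0.75, 12), (0.70, 0.30, 14),
        (0.65, 0.35, 13), (0.22, 0.78, 13), (0.68, 0.32, 15)]
ys = ["Pw", "Pw", "Sb", "Sb", "Pw", "Sb"]

vi = variable_importance(rows, ys)  # height ranks below tone and texture
```

A production analysis would use a full random forest implementation, but the ranking principle (features that yield larger average impurity decreases score higher) is the same.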
Introduction
Conifer tree species identification and classification is a requirement in forest inventory, mapping, and management applications in Canada (Gillis and Leckie, 1996). A typical operational forest inventory method determines dominant/co-dominant tree species composition using field surveys and the acquisition and interpretation of analogue aerial photography or digital imagery (e.g., Avery, 1968; Howard, 1991; Morgan et al., 2010). Depending on aerial photography formats and scale, tree species information is determined through the use of elimination or selective keys that employ tone, texture (branching patterns, shadows), crown shape, site, and other decision factors, including regional ecological settings, silvical characteristics, and disturbance history (Sayn-Wittgenstein, 1961 and 1978; Zsilinszky, 1966; Avery, 1969; Loelkes et al., 1983; Lee and Cybulski, 2009; Leboeuf and Vaillancourt, 2013). Typically, species identification and measurements of tree height and crown area (to estimate volumes), crown closure, density (or stocking), and age require medium-scale (1:10 000 to 1:20 000) or, preferably, large-scale aerial photography (1:800 to 1:3 000) (Aldred and Hall, 1975; MacLeod, 1981; Franklin, 2001).
Airborne digital imagery and automated image analysis can provide forest species inventory data, and standardized digital procedures are increasingly well established (Gougeon, 1995; Leckie et al., 1998; Thompson et al., 2007; Lutz et al., 2008; Li et al., 2015). In Ontario, for example, operational forest inventory data collection now employs a Leica ADS40 SH52 linescanner platform acquiring digital stereo multispectral and panchromatic imagery. Standard image products in the resulting Forest Resource Inventory (FRI) include orthorectified mosaics, individual tree crown delineation maps, and digital surface models (DSM) (Ontario Ministry of Natural Resources, 2009). Forest inventory methods increasingly involve new technologies, such as airborne lidar, terrestrial laser scanning (TLS), digital aerial photogrammetry (DAP) (White et al., 2016), and advanced template-matching or hybrid image analysis techniques (Gomes and Maillard, 2016). Deep learning (the application of multilayered artificial neural networks) as an aid in the examination, quantification, and identification of digital objects has gained prominence in the last five years (LeCun et al., 2015). Deep learning systems learn relevant features directly from image databases, in contrast to more traditional pattern recognition techniques, which rely strongly on manually crafted quantitative feature extractors. Increasingly, the large numbers of images needed to train deep learning systems are becoming available, and techniques are emerging that require few training samples. A specific neural network subtype, the convolutional neural network (CNN), has become the de facto standard and is approaching human performance in a number of tasks in medical diagnostics (LeCun et al., 2015), urban scene classification (Bergado, 2016), and tree species recognition (Wegner et al., 2016).
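The contrast drawn above between hand-crafted and learned features can be made concrete with a convolution, the basic operation a CNN applies. The sketch below (pure Python, with a toy image patch and kernel invented for the example) applies a fixed Sobel-style edge kernel to a small synthetic patch; a traditional pipeline would hand-pick such kernels as feature extractors, whereas a CNN learns the kernel weights from training images:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a 2D list over a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(ow)] for i in range(oh)]

# Synthetic 5x5 patch: a bright square (e.g., a sunlit crown) on a dark background.
patch = [[0, 0, 0, 0, 0],
         [0, 9, 9, 9, 0],
         [0, 9, 9, 9, 0],
         [0, 9, 9, 9, 0],
         [0, 0, 0, 0, 0]]

# Hand-crafted vertical-edge kernel (Sobel-style): a fixed feature extractor.
# In a CNN these nine weights would instead be learned from labeled images.
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

edges = conv2d(patch, sobel_x)
# Strong positive response on the left edge of the square, strong negative
# response on the right edge, and zero response in its uniform interior.
```

Stacking many such filters, learning their weights by gradient descent, and interleaving nonlinearities and pooling is, in essence, what distinguishes a CNN from a hand-engineered texture or edge pipeline.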
The feasibility of achieving operational forest inventory species classification with relatively low-cost deployment of Unmanned Aerial Vehicles (UAVs) carrying calibrated consumer-grade cameras is of wide interest, though less well documented (Chianucci et al., 2016; Ahmed et al., 2017). In addition to a number of common challenges in any airborne remote sensing endeavor associated with regulatory, personnel (e.g., training), scale, geometry, and radiometry issues, UAV data acquisition and processing can be complex and difficult. For example, sensor payloads may be restricted by the low-lift and short-duration flight capabilities of the platform, and mission planning and photogrammetric workflows (particularly for large areas or repeat coverage) are not yet operational (Laliberte et al., 2011; Whitehead et al., 2014). An important uncertainty associated with UAV-based remote sensing approaches is related to the quality and use of digital camera radiance or reflectance data in automated image analysis
School of Environment, Trent University, 1600 West Bank Drive, Peterborough, Ontario, Canada, K9J 7B8.
Photogrammetric Engineering & Remote Sensing
Vol. 83, No. 7, July 2017, pp. 501–507.
0099-1112/17/501–507
© 2017 American Society for Photogrammetry
and Remote Sensing
doi: 10.14358/PERS.83.7.501