
A concern is that the authors could locate no current textbook devoted specifically to visual interpretation. Most textbooks cover the broad aspects of remote sensing and treat visual interpretation in only a limited portion of the text. For example, judging from the chapter titles and lengths in the table of contents, Lillesand et al. (2008) devote about 18% of their remote sensing text to visual image interpretation and an additional 17% to elements of photographic systems and photogrammetry. Campbell and Wynne (2011) include about 10% on mapping cameras and image interpretation, and Jensen (2007) also about 10%. These are excellent textbooks, but their treatment of visual analysis is limited, as is often the case in remote sensing curricula, where visual analysis is compressed into an overview course followed perhaps by image processing or advanced topics such as radar or hyperspectral systems. Given the importance of visual interpretation, greater focus in academic curricula, or even courses devoted specifically to visual methods, would help develop a competent workforce able to use the power of current remote sensing data more effectively.
Image Interpretation Elements
The vocabulary of visual interpretation has changed with the technology. The initial term was air photo interpretation or, more frequently, aerial photointerpretation. However, as sensors and platforms have changed, the term photo is no longer appropriate. Increasingly, the community employs ‘image’, since ‘photo’ generally implies a film-based product, and today even data collected from aircraft are almost entirely digital. Furthermore, the availability of spaceborne platforms able to collect aircraft-quality data means this is no longer an exclusively aerial process. The term image interpretation, however, might include either human or digital analysis, so in the context of this discussion the term visual interpretation will refer to human decision making based on the examination of remote sensing data.
Visual analysis of remote sensing data is often based upon a number of principles of photointerpretation, also called image interpretation elements. In the first ASP ‘Manual of Photointerpretation’, Rabben et al. (1960) suggested six elements of photointerpretation. Olson (1960) was one of the first scientists to publish a list of nine elements: shape, size, tone, shadow, pattern, texture, site, association, and resolution. A revised framework emerged in Estes et al. (1983), in which height replaced resolution and color was added as a complementary element to tone. Tone, the fundamental element, refers to the grayscale value of an image and is a function of the object’s reflectivity. Color is likewise a function of the object’s propensity to reflect certain regions of the electromagnetic spectrum, and for that reason it is typically grouped with tone in the interpretation framework. In the second edition of the ASPRS ‘Manual of Photointerpretation’, Teng (1997) added time, bringing the list to 10 elements.
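As a minimal illustration of how tone relates to reflectance in a digital image, the sketch below (a hypothetical example, not drawn from the sources above; the band choice and luminance weights are assumptions) collapses red, green, and blue reflectance bands into a single grayscale tone value, so that brighter, more reflective surfaces produce lighter tones.

import numpy as np

def tone_from_reflectance(red, green, blue):
    """Collapse red, green, and blue reflectance values (0-1 scale) into a
    single grayscale 'tone' value; hypothetical band choice and weights."""
    red, green, blue = (np.asarray(b, dtype=float) for b in (red, green, blue))
    # Standard luminance weighting: more reflective surfaces yield lighter tones.
    return 0.299 * red + 0.587 * green + 0.114 * blue

# Dark-toned vegetation versus a lighter-toned bare surface (illustrative values).
print(tone_from_reflectance(0.04, 0.08, 0.03))   # ~0.06
print(tone_from_reflectance(0.25, 0.22, 0.18))   # ~0.22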
These elements of interpretation have remained relatively unchanged over the years, though not without some variation. Campbell and Wynne (2011), in their remote sensing textbook, list eight elements, combining tone and color and incorporating height as a component of size. Lillesand et al. (2008) continue to list the nine elements from Olson (1960). Jensen (2007) suggests nine elements of image interpretation that differ slightly from other listings: he combines tone and color as well as site, situation, and association; changes size to height/depth/volume/slope/aspect; and adds the x,y location as a new element.
No matter which elements are adopted, machine-assisted approaches have difficulty replicating them. So far that replication has generally not been possible in operational approaches seeking complex information. In essence, machine processing of remotely sensed data takes multiple approaches but is often based upon matching pixel-by-pixel, band-by-band reflectance values against known spectral signatures. There have been efforts to employ software methods to identify or measure the elements of photointerpretation, with varied levels of success. Table 1 compares the ability of visual and machine approaches to exploit these standard elements. The limitations of machine processing include both the inability to synthesize the available elements and the difficulty of including site, situation, and association.
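To make the pixel-by-pixel, band-by-band matching described above concrete, the following sketch assigns each pixel the label of the nearest known spectral signature by minimum Euclidean distance in spectral space. It is a simplified, hypothetical example, not any specific operational system; the band ordering, class names, and signature values are assumed for illustration only.

import numpy as np

def classify_by_signature(image, signatures):
    """Assign each pixel the label of the closest known spectral signature.

    image      : array of shape (rows, cols, bands) of reflectance values
    signatures : dict mapping a class label to a 1-D array of band reflectances
    Returns an array of shape (rows, cols) of class labels, chosen by minimum
    Euclidean distance in spectral space (pixel-by-pixel, band-by-band).
    """
    labels = list(signatures)
    refs = np.stack([signatures[k] for k in labels])             # (classes, bands)
    # Distance from every pixel to every reference signature.
    dists = np.linalg.norm(image[..., None, :] - refs, axis=-1)  # (rows, cols, classes)
    return np.array(labels)[dists.argmin(axis=-1)]

# Hypothetical 4-band reflectance signatures (blue, green, red, NIR).
signatures = {
    "water":      np.array([0.06, 0.05, 0.03, 0.01]),
    "vegetation": np.array([0.03, 0.06, 0.04, 0.45]),
    "bare_soil":  np.array([0.12, 0.16, 0.20, 0.28]),
}
image = np.random.rand(4, 4, 4) * 0.5   # stand-in for a small 4-band scene
print(classify_by_signature(image, signatures))

An approach of this kind captures tone and spectral similarity well, but by itself it does not encode shape, pattern, site, or association, which is consistent with the limitations summarized in Table 1.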
The original interpretation elements were based on aerial
photos but given the range of sensors, platforms, and pro-
Table 1. Comparison of Visual Image Interpretation and Machine Processing Based Image Interpretation.
[Rows: tone, shape, size, shadow, man-made pattern, natural pattern, texture, site, association/context, and height. Columns: visual interpretation and machine processing. The table’s markings indicate, for each element and method, whether the element can be determined fully or only in part with that method.]