Peer-Reviewed Articles
1093 High Resolution Elevation Data Derived
from Stereoscopic CORONA Imagery with
Minimal Ground Control: An Approach Using
Ikonos and SRTM Data
Nikolaos Galiatsatos, Daniel N.M. Donoghue, and Graham Philip
The first space mission to provide stereoscopic imagery
of the Earth's surface was the American CORONA spy
satellite program, from which it is possible to generate
Digital Elevation Models (DEMs). CORONA imagery and
derived DEMs are of most value in areas where conventional
topographic maps are of poor quality, but until recently
it was difficult to assess their accuracy. This paper
presents a methodology to create a
high quality DEM from CORONA imagery using horizontal
ground control derived from Ikonos space imagery and
vertical ground control from map-based contour lines. Such
DEMs can be produced without the need for field-based
ground control measurements, which is an advantage in
many parts of the world where ground surveying is difficult.
CORONA image distortions, satellite geometry, ground
resolution, and film scanning are important factors that
can affect the DEM extraction process. A study area
in Syria is used to demonstrate the method, and Shuttle
Radar Topography Mission (SRTM) data is used to perform
quantitative and qualitative accuracy assessment of the
automatically extracted DEM. The SRTM data is enormously
important for validating the quality of CORONA DEMs,
thereby unlocking the potential of a largely untapped part of
the archive. We conclude that CORONA data can produce
unbiased, high-resolution DEM data which may be valuable
for researchers working in countries where topographic data
is difficult to obtain.
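Such an SRTM-based check amounts to a simple grid comparison. A minimal sketch is given below, assuming the CORONA DEM has already been co-registered and resampled to the SRTM grid; the array names, no-data handling, and use of NumPy are illustrative assumptions, not the authors' implementation.

import numpy as np

def dem_error_stats(corona_dem, srtm_dem, nodata=-32768):
    """Compare a CORONA-derived DEM with SRTM heights on a common grid.
    Returns the mean error (bias), standard deviation, and RMSE in metres."""
    corona = corona_dem.astype(float)
    srtm = srtm_dem.astype(float)
    # Ignore SRTM voids and any no-data cells in either grid
    valid = (srtm != nodata) & (corona != nodata)
    diff = corona[valid] - srtm[valid]
    bias = diff.mean()                     # systematic offset (zero if unbiased)
    std = diff.std()                       # random component of the error
    rmse = np.sqrt(np.mean(diff ** 2))     # overall vertical error
    return bias, std, rmse

A near-zero bias together with a modest RMSE would support the "unbiased, high-resolution" conclusion stated above.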
1107 Optimizing the High-Pass Filter Addition
Technique for Image Fusion
Ute G. Gangkofner, Pushkar S. Pradhan, and Derrold W. Holcomb
Pixel-level image fusion combines complementary image
data, most commonly data of low spectral and high spatial
resolution with optical data of high spectral and low spatial
resolution. This study aims to refine and improve the
High-Pass Filter Additive (HPFA) fusion method into a
tunable and versatile, yet standardized, image fusion tool.
HPFA is an image fusion method in the spatial domain,
which inserts structural and textural details of the higher
resolution image into the lower resolution image, whose
spectral properties are thereby largely retained. Using
various input image pairs, workable sets of HPFA parameters
have been derived with regard to high-pass filter properties
and injection weights. Improvements are the standardization
of the HPFA parameters over a wide range of image resolution
ratios and the controlled trade-off between resulting image
sharpness and spectral properties. The results are evaluated
visually and by spectral and spatial metrics in comparison
with wavelet-based image fusion results as a benchmark.
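As a rough illustration of the HPFA principle (not the authors' tuned parameterization), the sketch below upsamples each multispectral band to the panchromatic grid, extracts high-pass detail from the panchromatic band with a box filter, and adds the weighted detail back; the kernel size, injection weight, and function names are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpfa_fuse(ms, pan, ratio, kernel_size=5, weight=0.5):
    """High-Pass Filter Additive fusion sketch.
    ms: (bands, h, w) multispectral array; pan: (H, W) panchromatic array,
    with H = h * ratio and W = w * ratio (integer resolution ratio)."""
    pan = pan.astype(float)
    # High-pass detail = pan minus its box-filtered (low-pass) version
    high_pass = pan - uniform_filter(pan, size=kernel_size)
    fused = []
    for band in ms.astype(float):
        band_up = zoom(band, ratio, order=1)         # resample band to the pan grid
        fused.append(band_up + weight * high_pass)   # inject weighted detail
    return np.stack(fused)

The abstract describes standardizing the filter properties and injection weights over a range of resolution ratios; here they are simply left as free arguments.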
1119 Photogrammetric Modeling of Linear Features
with Generalized Point Photogrammetry
Zuxun Zhang, Yongjun Zhang, Jianqing Zhang, and Hongwei Zhang
Most current digital photogrammetric workstations are based
on feature points. Curved features are quite difficult to
model because they cannot be treated as feature points.
The focus of the paper is on the photogrammetric modeling
of space linear features. In general, lines and curves can be
represented by a series of connected points, called
generalized points in this paper. Unlike all existing
models, only one collinearity equation is used for each point
on the linear feature, which makes the mathematical model
very simple. The key to generalized point photogrammetry
is that all kinds of features are treated as generalized
points, each using either the x or the y collinearity equation. A significant
difference between generalized point photogrammetry
and conventional point photogrammetry is that image
features are not necessarily exact conjugates. The exact
conjugacy between image features and/or the correspondence
between space and image features is established
during bundle block adjustment. Photogrammetric modeling
of several space linear features is discussed. Sub-pixel
precision has been achieved for both exterior orientation
and 3D modeling of linear features, which verifies the
correctness and effectiveness of the proposed approach.
Color Figures (Adobe PDF format): Figures 4, 5, and 7.
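For context, the collinearity equations the generalized-point model selects between are the standard photogrammetric relations (conventional notation, not quoted from the paper):

x - x_0 = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}, \qquad
y - y_0 = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}

where (x, y) are image coordinates, (x_0, y_0, f) the interior orientation, (X_S, Y_S, Z_S) the perspective centre, and a_i, b_i, c_i the elements of the image rotation matrix. For each generalized point only one of the two equations is written, which is what relaxes the requirement of exact point-to-point conjugacy.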
1129 A Photogrammetric Correction Procedure
for Light Refraction Effects at a
Two-Medium Boundary
Toshimi Murase, Miho Tanaka, Tomomi Tani, Yuko Miyashita, Naoto Ohkawa, Satoshi Ishiguro,
Yasuhiro Suzuki, Hajime Kayanne, and Hiroya Yamano
We report on a correction procedure for light refraction
effects at a two-medium boundary, based on the stereo view
of underwater objects, to estimate underwater topography
using photogrammetry. Because, in theory, no exact solution
exists for the photogrammetrically observed positions when the
incident angles of the light rays from an underwater object of
interest to the two cameras differ, an approximation is needed
to solve for the positions. We show the feasibility of
the approximation theoretically by examining the horizontal
differences between the observed and true positions when
objects are in line along an airplane track or when the
incident angles are identical. We applied the procedure to
bathymetric mapping of Shiraho Reef, southwest Japan,
using a stereo-pair of aerial photographs. Comparison of
the corrected depths with measured depths at 658 points
showed a mean error and standard deviation of -0.06 m
and 0.36 m, respectively, for a measured depth range of
-3.4 m to -0.2 m.
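The geometry behind the correction is Snell's law at the air-water boundary; the relations below are the textbook form, not the authors' exact formulation. With refractive indices n_a ≈ 1.0 for air and n_w ≈ 1.34 for seawater, a ray observed in air at incidence angle \theta_a continues underwater at \theta_w, where

n_a \sin\theta_a = n_w \sin\theta_w ,

so an object at true depth d appears at an apparent depth of roughly d\,\tan\theta_w / \tan\theta_a, which for near-vertical viewing reduces to d / n_w ≈ 0.75\,d. The correction procedure in effect inverts this relation, and the approximation discussed above is needed because the two cameras of a stereo pair generally view the same point at different incidence angles.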
1137 Significance of Altitude and Posting Density
on Lidar-derived Elevation Accuracy on
Hazardous Waste Sites
María J. García-Quijano, John R. Jensen, Michael E. Hodgson, Brian C. Hadley, John B. Gladden, and Lewis A. Lapine
This research evaluated the vertical accuracy of two lidar-derived
elevation datasets acquired from two different altitudes
over a clay-capped hazardous waste site located on the
Savannah River Site (SRS), South Carolina, using the same
Optech ALTM 2050 lidar sensor and Cessna 337 platform. Both
missions provided adequate elevation estimates (low-altitude
RMSEz = 6 cm; high-altitude RMSEz = 14 cm). A quantitative
comparison was performed to determine how decreasing
platform altitude and increasing lidar posting density affected
the vertical elevation accuracy. Higher posting densities did
not significantly improve the vertical accuracy of lidar-derived
elevation data. Conversely, acquiring the lidar-derived elevation
data at a lower altitude had a significant influence on the
mean vertical error present in the lidar-derived elevation data.
Differences in mean vertical elevation error between the low- and
high-altitude lidar data collection missions were primarily
due to a systematic underestimation bias present in the high-altitude
lidar data.
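For reference, the accuracy measures quoted above follow the standard definitions over n check points (not taken from the paper):

\mathrm{RMSE}_z = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}\bigl(z_{\mathrm{lidar},i} - z_{\mathrm{check},i}\bigr)^2}, \qquad
\bar{e} = \tfrac{1}{n}\sum_{i=1}^{n}\bigl(z_{\mathrm{lidar},i} - z_{\mathrm{check},i}\bigr),

where \bar{e} is the mean vertical error; a consistently negative \bar{e} corresponds to the systematic underestimation bias reported for the high-altitude mission.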
1147 Shaping Polyhedral Buildings by the Fusion
of Vector Maps and Lidar Point Clouds
Liang-Chien Chen, Tee-Ann Teo, Chih-Yi Kuo, and Jiann-Yeou Rau
We integrate lidar point clouds and large-scale vector maps
to perform building modeling. The proposed scheme comprises
three major steps: (a) the preprocessing of lidar point
clouds and vector maps, (b) roof analysis, and (c) building
reconstruction. During the preprocessing stage, the building
polygons are first obtained from the polylines, followed by the
selection of lidar points within the building polygons. A triangulated
irregular network is then built to represent the facets.
The segmentation of planar facets for roof analysis is implemented
by examining the patch size and the facet orientation.
The interior 3D roof edges are then determined from the
intersection of the roof planes. Finally, the building models
are reconstructed through regularization. Two sample sites are
tested for the purposes of validation. The experimental results
indicate that the proposed scheme allows for high fidelity and
accuracy, provided that the point cloud density is sufficient.
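The step of deriving interior 3D roof edges from intersecting roof planes is ordinary linear algebra. The sketch below intersects two fitted planes, each written as n·p = d; it is a generic illustration under those assumptions, not the authors' reconstruction code.

import numpy as np

def intersect_planes(n1, d1, n2, d2):
    """Intersect the planes n1 . p = d1 and n2 . p = d2.
    Returns (point, direction) describing the 3D edge line,
    or None if the two roof facets are (nearly) parallel."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)          # edge direction lies in both planes
    if np.linalg.norm(direction) < 1e-9:  # parallel facets: no ridge line
        return None
    # Solve for one point that lies on both planes; the third row pins down
    # the otherwise free position along the edge direction.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)

In a building model, n and d would come from planes fitted to the segmented roof patches, and the resulting line would be clipped to the building polygon to obtain the ridge segment.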
1159 Object-based Classification of High Resolution
SAR Images for Within Field Homogeneous
Zone Delineation
Jiangui Liu, Elizabeth Pattey, and Michel C. Nolin
Delineating management zones is important in agriculture
for implementing site-specific practices. We delineated
within-field homogeneous zones over a corn and a wheat
field using high spatial resolution multi-temporal airborne
C-band synthetic aperture radar (SAR) imagery with an
object-based fuzzy k-means classification approach. Image
objects were generated by a segmentation procedure implemented
in eCognition® software, and were classified as basic
processing units using SAR data. Results were evaluated
using analysis of variance and variance reduction of soil
electrical conductivity (EC), leaf area index (LAI), and crop
yield. The object-based approach provided better results
than a pixel-based approach. The variance reduction in LAI
and soil EC varied with SAR acquisition time and incidence
angle. Although the variance reduction of yield was not as
significant as that of LAI and EC, average yields among the
delineated zones differed in most cases. The SAR data
classification produced interpretable patterns of soil and
crop spatial variability, which can be used to infer within-field
management zones.
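The variance-reduction criterion used in the evaluation can be computed as below; the sample-size weighting and the function name follow the common definition and are assumptions, not code from the paper.

import numpy as np

def variance_reduction(values, zone_labels):
    """Fraction of the total variance of a field attribute (e.g., LAI,
    soil EC, or yield) removed by splitting the field into zones.
    Within-zone variances are weighted by the number of samples per zone."""
    values = np.asarray(values, float)
    zone_labels = np.asarray(zone_labels)
    total_var = values.var()
    within = 0.0
    for zone in np.unique(zone_labels):
        members = values[zone_labels == zone]
        within += (len(members) / len(values)) * members.var()
    return 1.0 - within / total_var

A value near 1 means the delineated zones are internally homogeneous; a value near 0 means the zoning explains little of the within-field variability.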