Foreword
Photogrammetry and the Quest for Digitalization
Raad A. Saleh, Guest Editor
Mead & Hunt, Inc.
Madison, Wisconsin
Introduction
The development of photogrammetry as a science, art, and technology is typified
by changes in its definition throughout recent editions of the Manual of Photogrammetry.
In the first three editions, "photograph" was a keyword, referring
to the medium wherein information would be recorded using a camera. In the
fourth edition, the term "photogrammetry" experienced some departure
from the premise that the only medium is a photograph, adding to the definition "...
recording [a] pattern of electromagnetic radiant energy and other phenomena." This
reflects the recognition that imagery may be acquired, not only through the
use of a conventional camera, but also by recording the scene by one or more
special sensors, including multispectral scanning (Thompson and Gruner, 1980).
This realization is in part due to the advent of remote sensing, which started merely as a tool, then steadily matured into a distinct field. Today, it has become obvious that the phrase "reading patterns of electromagnetic radiation" refers to digital imagery acquired by remote sensing systems. And while direct digital acquisition has become the primary tool for recording the phenomena in remote sensing and vision applications, information is still recorded in film-based media, even for softcopy photogrammetric production. At present, conversion into a digital form is achieved by means of image scanning technology. There has been some speculation, however, that the business of conversion by scanning may not stay around for long, as digital frame cameras are expected to be widely used in the future (Jenkins, 1994; Light, 1994; Strunk et al., 1992). If this is truly realized, the impact will be tremendous. Photogrammetrists have always dealt, probably unconsciously, with hardcopy photography as the source of information, even in a softcopy modus operandi. Production regimes will be forced to adjust to new methods, especially in the orientation of frame images. The impact will be even more crucial to photogrammetric scanner manufacturers, as the need for conversion technology becomes highly questionable. However, the present state of the photogrammetric market seems to favor image scanning technology.
A New Way of Thinking
The words "digital" or "softcopy" distinguish this new
phase in terms of two issues. The first is representation of information, that
is, digital imagery replacing conventional photography. This leads to the second
issue, which is the host environment, in this case, computers replacing conventional
plotting machines. As a result, the whole photogrammetric task is altered;
most evident is the interaction mode between the operator and the machine.
A floating mark becomes a cursor, handwheels are replaced by a trackball, the
photo stage is replaced by a computer monitor, etc. Ackermann’s (1991)
words may best describe this attitude change: "With digital cameras and
digital image processing, photogrammetry will operate in a completely different
environment, characterized by different equipment, techniques, skills, and
by a different way of thinking." An equally important issue is what the
computer can, or cannot, offer in assuming the role of a photogrammetric system.
A prodigious leap has already been achieved by the mere fact that all functions
are now performed within the same system. Even in highly automated analytical
plotters, data are usually transmitted to a separate processing unit for the simplest
operations. Yet despite these significant changes, it is still unclear to many
researchers and practitioners what the primary achievement really is. In fact,
some may even argue that it would be too hasty to abandon existing production
regimes and rush into a totally unfamiliar environment, in which nobody seems
even to agree about the basic terms of reference and, more importantly, about
what the economics are.
What is a Softcopy Photogrammetric System?
A simple yet important question needs to be addressed: Other than those basic
components, i.e., digital stereo imagery, a computer, and key photogrammetric
functions, what characterizes a softcopy photogrammetric workstation?
Must such a system include capabilities such as block adjustment, automatic
matching, and interfacing with geographic information systems (GIS)? While
a strict definition may or may not require all of these and other capabilities,
the analytical approach and the quest for automation have become integral
parts of the concept of photogrammetric technology. Furthermore, the functions
of a computer-based photogrammetric system do, in fact, extend beyond those
of displaying and other basic processes normally performed on digital imagery.
Therefore, despite the fact that the definition considers primarily the photograph
in a digital form, i.e., not requiring automation or analytical capabilities
as an intrinsic condition, investing in a computer in this narrowly defined
manner would certainly defeat the whole purpose of digitalization. Furthermore,
it is unacceptable for a softcopy photogrammetric system to achieve a level
of performance merely comparable to that of an analytical plotter. It must
offer far more capabilities and superior performance. Perhaps, then, it becomes
somewhat easier to make the difficult decision of phasing out expensive,
operational, and still satisfactory technology.
The potential for automation in softcopy photogrammetry is significant and real. It is a driving force behind present and future developments. With digital imagery and a computer, the possibilities are extensive, still evolving, and perhaps unlimited. Comprehensive research is being conducted in various aspects of automation. Specific tasks, such as interior orientation and surface generation, have already achieved full automation and are now operational. Other tasks, such as exterior orientation and aerotriangulation, are nearly solved research problems on their way to commercialization. More ambitious tasks, such as extraction of planimetric features, may require more time to automate for photogrammetric production.
About This Issue
Many papers dealing with various topics of softcopy photogrammetry were submitted
for publication in Photogrammetric Engineering & Remote Sensing. A total of
21 reviewers participated in evaluating the manuscripts; only seven of the manuscripts
appear in this issue. The papers are arranged in such a way that the topics
follow the logical sequence of softcopy photogrammetric procedures, as shown
in Figure 1.
The
first paper, "Tone Reproduction of Photographic Scanners," by
Kölbl and Bach, addresses several basic concepts in scanning, such as
image resolution, dynamic range, and granularity. A general overview of photo
sensing devices is presented, followed by a discussion of the importance of
an illumination system, detailing the different aspects of directed and diffused
light. This investigation included a comparison of several flatbed scanners
and a few drum scanners. The analysis focused on image noise, resolution, and
sensitivity. Testing material included a medium-contrast black-and-white photograph,
a high-contrast Panatomic-X photograph, and a false-color photograph. Most
scanners tested resulted in correlation of 50 percent or more between neighboring
pixels. Image noise was measured for densities (D) between 0.4 and 0.7 D, then
standardized with a point spread function of 10 micrometres. This resulted
in a uniform finding among the scanners tested: an image noise of ±0.1
D. Furthermore, scanners with diffused light gave about 20 percent less image
noise than those with directed light. Specific tests were made using films
with different graininess. Contrary to what might be widely believed, the image
noise did not increase with the graininess of the film. Another finding was
that the sensitivity of most tested scanners was considerably reduced in dark
areas. In conclusion, Kölbl and Bach state that development of scanners
is still in progress and that no final stage has yet been reached. Enforcing
this view, they further state:
"Digital photogrammetry and digital image processing have considerable advantages compared to analog techniques. However, it seems that it is not yet possible to take full advantage of these new promising techniques due to limitations in the scan- fling process. The photogrammetric industry has always played a key role with regard to image quality and it is difficult, in photography, to find lenses with similar standards to those produced for aerial cameras. Consequently, a substantial increase in scanning quality might also require special developments."
The quality of scanned imagery is an important issue in softcopy production. Another important task is handling the large volume of data resulting from scanning. As shown in Figure 2, the file size of a standard 9-by-9 inch aerial photograph can easily reach 200 to 300 megabytes, depending on scanning resolution.
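The 200- to 300-megabyte figure follows directly from the photograph format and the scanning resolution; a back-of-the-envelope calculation, assuming a single-band 8-bit scan and purely illustrative pixel sizes, is sketched below.

```python
# File size of a scanned 9-by-9 inch aerial photograph (8-bit greyscale assumed).
PHOTO_SIDE_MM = 9 * 25.4  # 9 inches in millimetres (228.6 mm)

for pixel_um in (30, 15, 12.5):            # illustrative scanning resolutions
    pixels_per_side = PHOTO_SIDE_MM * 1000 / pixel_um
    size_mb = pixels_per_side ** 2 / 1e6   # one byte per pixel
    print(f"{pixel_um:5.1f} um pixel -> {pixels_per_side:7.0f} px/side, "
          f"{size_mb:6.0f} MB")

# A 15 um pixel gives roughly 230 MB and 12.5 um roughly 335 MB, consistent
# with the 200 to 300 MB range quoted above.
```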
Such
large files would certainly impose constraints on production. Compression techniques
are therefore needed to reduce data volume and make the data more
efficient to handle. Novak and Shahin discuss this issue in their article, "A
Comparison of Two Image Compression Techniques for Softcopy Photogrammetry." A
basic assumption made by the authors is that digital images contain a significant
amount of redundant information. The authors describe the general classification
of lossless and lossy compression techniques and discuss relevant definitions,
such as entropy, redundancy, compression ratios, image decomposition, quantization,
and predictive coding. A rather detailed discussion of two compression techniques
follows: the Joint Photographic Experts Group, commonly known as JPEG, and the
Hierarchical Predictive Coding (HPC). The latter was developed by the authors
based on a compression scheme for digital video sequences. Several experiments
were conducted to evaluate the two techniques by means of visual loss of details,
compression ratio, speed, and geometric integrity. Although the HPC technique
was found to achieve smaller compression ratios, it was faster and maintained
higher geometric and radiometric consistency. On the other hand, the JPEG speed
problem could be overcome with hardwired compression chips. Furthermore, for
compression ratios smaller than 5, image distortion was negligible. In their
conclusion, Novak and Shahin concede that:
"Image compression alone, however, will not solve all problems of handling large quantities of digital data. [A] Database need(s) to be developed to manage digital images in a geographic information system (GIS), and make them easily and quickly accessible for the user."
The next two articles address different aspects of automating aerotriangulation. In their article, "Automatic Aerotriangulation Using Multiple Image Multipoint Matching," Agouris and Schenk discuss the possibility of simultaneously incorporating multiple points from multiple images in the matching process. A photo mosaic is first generated and the locations of several points in the overlap area are approximated. An initial block adjustment is then performed for a first approximation of exterior orientation. Exterior orientation parameters and scale factors are accordingly updated. The authors emphasize the important role of computing power in alleviating some of the difficulties the human operator encounters in observing more than two photos simultaneously: "Instead, multiphoto matching offers essentially a digitally implemented n-stage comparator." Agouris and Schenk go further, describing the promise of automating the photogrammetric process and how it would expand the applications of the field:
"The potential to bypass expensive and dedicated equipment (e.g., analytical plotters) and personnel by performing the necessary operations in a computer may revolutionize aerotriangulation. Not only could it change the way aerotriangulation is currently carried out, but it would mainly boost its practicality and application in scientific fields and communities (e.g., architecture, industry, and medicine) which are currently hesitant or turned off by the aforementioned equipment and personnel requirements."
While some practitioners may consider such an assertion far-fetched and unrealistic, we must be careful not to underestimate how today’s fast-paced technological achievements impact other disciplines. After all, the example of GPS and the way it "revolutionized" surveying is by no means a "farfetched" one.
Toth and Krupnik present another article addressing automation, in which aerotriangulation is again targeted for improvement in performance and reliability. As a proof of concept, an automatic aerotriangulation system (AATS) was designed and implemented. The conceptual objective of AATS overlaps with that of Agouris and Schenk’s investigation described earlier. Data volume was again a concern expressed by the authors:
"Computer processing power and storage capacity are the primary requirements for implementing an automated aerotriangulation system. Clearly, storage requirements are currently more crucial than processing power."
To handle this problem, an image pyramid structure was developed. Experimenting with AATS, the authors concluded that efficiency and reliability increased substantially as a direct result of automation. In such a system, according to the authors, there would be no limit to the number of conjugate points identified by the matching technique. This is very important because the accuracy of the exterior orientation parameters would increase proportionally with the number of such points. When aerotriangulation is completed, exterior orientation parameters are determined for all images. Surface generation usually follows.
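An image pyramid of the kind described by Toth and Krupnik stores each image at successively coarser resolutions, so that matching can begin on a small, coarse level and be refined on finer ones; the sketch below builds such a pyramid by simple 2-by-2 block averaging, which is an illustrative choice rather than the reduction filter actually used in AATS.

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Build an image pyramid by repeated 2x2 block averaging.

    Returns [level 0 (full resolution), level 1, ...]; each level has half
    the rows and columns (a quarter of the pixels) of the one before it.
    """
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        # Trim to even dimensions, then average each 2x2 block.
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        prev = prev[:h, :w]
        coarse = prev.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    return pyramid

levels = build_pyramid(np.random.default_rng(1).random((1024, 1024)))
print([lvl.shape for lvl in levels])  # (1024,1024), (512,512), (256,256), (128,128)
```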
The Zhou et al. article, "A Digital System for Surface Reconstruction," addresses this topic but departs from the conventional application of aerial photography. Rather, it describes an interesting industrial application that closely overlaps with computer vision. A fundamental distinction from scanned aerial photography is the very low noise in digital images directly acquired by vision systems. Consequently, many edge detection and matching techniques yield a much higher success rate than when applied to scanned aerial photographs. Experiments with this system demonstrated a high degree of accuracy and precision relative to the pixel size. Such a system can also be useful for other industrial applications, such as measuring metal deformation.
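Because directly acquired industrial images carry very little noise, even a simple gradient-based edge detector performs well on them; the following sketch of a Sobel-style detector is given only as an illustration and is not the operator used by Zhou et al.

```python
import numpy as np

def sobel_edges(image, threshold=50.0):
    """Return a boolean edge map from a simple Sobel gradient magnitude."""
    img = image.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    def apply_kernel(img, kernel):
        # Correlate a 3x3 kernel with the image (valid region only).
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(3):
            for j in range(3):
                out += kernel[i, j] * img[i:i + h - 2, j:j + w - 2]
        return out

    gx, gy = apply_kernel(img, kx), apply_kernel(img, ky)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

edges = sobel_edges(np.random.default_rng(2).random((64, 64)) * 255)
print(edges.shape)  # (62, 62): border pixels are dropped by the 3x3 operator
```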
Another source of data amenable to photogrammetric processing is remote sensing satellites. Rao et al. present methodology and experiments on surface generation from Indian Remote Sensing (IRS) satellite data. A stereo pair is formed from overlapping images acquired on adjacent orbital paths. The stereo pair is first appropriately oriented using ground control points. The right image is then radiometrically normalized with reference to its corresponding left image. Using cross correlation, pixels in the right image are searched and matched with a target pixel in the left image. This is performed at three-pixel intervals in both rows and columns. Elevation data are then calculated from the parallax resulting from the matching. This methodology was tested at three sites. As one might expect, the base-to-height ratio was a primary factor in elevation accuracy. The authors concluded that the resulting elevation data are accurate enough for small-scale (e.g., 1:250,000) applications.
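The matching step described by Rao et al. can be pictured as sliding a small window from the left image along the corresponding line of the normalized right image and keeping the position of highest correlation; a minimal sketch of such a search follows, with window size, search range, and the final parallax-to-height conversion chosen purely for illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_pixel(left, right, row, col, half=7, search=30):
    """Find the column in `right` whose window best matches the window
    centred at (row, col) in `left`; returns (best column, correlation)."""
    target = left[row - half:row + half + 1, col - half:col + half + 1]
    best_col, best_corr = col, -1.0
    for c in range(max(half, col - search),
                   min(right.shape[1] - half, col + search + 1)):
        window = right[row - half:row + half + 1, c - half:c + half + 1]
        corr = ncc(target, window)
        if corr > best_corr:
            best_col, best_corr = c, corr
    return best_col, best_corr

rng = np.random.default_rng(3)
left = rng.random((200, 200))
right = np.roll(left, 12, axis=1)          # simulate a 12-pixel parallax shift
col, corr = match_pixel(left, right, row=100, col=80)
print(col - 80, round(corr, 2))            # ~12, ~1.0

# The column difference (parallax) is then converted to a height difference,
# roughly dh = (H / B) * parallax_in_ground_units for base B and height H.
```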
While surface generation is currently viewed as a highly automated task, extraction of planimetric features still requires a great deal of manual operation and can be performed either within the stereomodel or from orthorectified imagery. Digital ortho imagery is becoming increasingly popular for this purpose. While digital ortho imagery represents a cost-effective data source for linear feature extraction, producing it normally follows the standard photogrammetric procedure of creating a stereo model. These procedures require costly ground control, extensive storage capabilities, and powerful computing resources, all of which are well justified for conventional photogrammetric operations. For less conventional procedures such as map revision, inexpensive methods can be used to achieve the same tasks with relaxed, yet acceptable, accuracies.
In "The Digital Transferscope," Derenyi presents an example of a
low-cost map revision methodology. The author describes a simple scheme of
real-time geometric registration of raster images and vector maps for map revision,
emulating digitally the function of a zoom transferscope. The digital map is
considered the source of geometric consistency, so one would expect the image
to be rectified to fit the map. Instead, a section of the vector
map is fitted to the image using interactive shift, scale, and rotation. The
transformation parameters are accumulated and saved in a file for later reverse
transformation. All features of interest are then transferred from the image
into the map by on-screen digitization. When the revision is completed, the
map section is then reversibly transformed to its original geometry. This scheme
was tested on two maps and found to be satisfactory. Derenyi states that: "The
RMSE indicates sub-pixel accuracy and is well within the tolerances set for
basic mapping although simple equipment and low-resolution image data were
used."
The fundamental goal in Derenyi’s scheme is to bypass costly image processing
tasks, such as rectification. Geometric consistency is required for map revision
and thematic mapping. For these and other applications, orthorectified imagery
is widely used, because it offers a higher degree of confidence in its geometric
consistency with base maps.
What About the Profession?
An assertion was made by Zhou et al. that, for the proposed system, "little
knowledge about image processing, photography, and computers is required of
an operator." Another was made by Derenyi in describing his methodology: "Neither
advanced photogrammetric knowledge nor stereo perception is a prerequisite." Both
assertions express where the field of photogrammetry is heading. This trend
echoes statements made in recent meetings and conferences, predicting the "Democratization
of Photogrammetry," essentially implying the demise of this profession.
It is true that the trend towards the highly automated photogrammetric workstation
shifts the skill requirement from photogrammetry to computer literacy. The
example often cited is that word processing software can be used very efficiently
to produce polished documents without any knowledge of how the software is
written, what algorithms it uses, and so on. The
problem with this argument is that photogrammetric production is not word processing.
It is a highly sophisticated process that requires a comprehensive understanding
of the full context of the photogrammetric operation. Furthermore, softcopy
photogrammetry is by no means a mature technology, at least for the present
time. There is no doubt that this field offers a great potential for low cost,
yet sound, photogrammetric systems with a continuously evolving range of capabilities.
However, more research and development is still needed in many aspects of the
technology, such as in user interface, automation, matching, and overall performance.
Each of these issues may become a "solved problem" in the near, or
not-so-near, future, because of the rapid advances in computing, sensing systems,
and supporting technologies.
References
Ackermann, F., 1991. Structural Changes in Photogrammetry, Proceedings
of the 43rd Photogrammetric Week at Stuttgart University, 9-14 September, pp. 9-23.
Jenkins, T., 1994. CCD Cameras Mount Serious Challenge to Film, Photonics Spectra, January, pp. 113-114.
Light, D.L., 1990. Characteristics of Remote Sensors for Mapping and Earth Science Applications, Photogrammetric Engineering & Remote Sensing, 56(12):1613-1623.
Strunk, S., J. McMacken, S. Kamasz, W. Washkurak, F. Ma, and S. Chamberlain,
1992. The Development of a Four Million Pixel CCD Imager for Aerial Reconnaissance,
Proceedings of the International Society for Optical Engineering, Airborne
Reconnaissance XVI, 1763:25-36.
Thompson, M., and H. Gruner, 1980. Foundations of Photogrammetry, Manual
of Photogrammetry (C.C. Slama, editor), American Society of Photogrammetry, Falls
Church, Virginia, pp. 1-36.
Guest Editor
Raad A. Saleh has been involved in several research projects dealing
with softcopy photogrammetric systems, digital image analysis, and GIS. He
obtained
a B.Sc.E. from the University of Baghdad, Iraq, and an M.Sc.E. from the University
of New Brunswick, Canada, both in Surveying Engineering, in 1979 and 1988, respectively.
At present, he is a Ph.D. Candidate in Civil and Environmental Engineering,
at the University of Wisconsin-Madison. His dissertation is on multidimensional
matching techniques. Raad is the recipient of several awards, such as the Robert
E. Altenhofen Memorial Scholarship Awards for 1992 and 1991. Raad is currently
heading a newly formed division of photogrammetric services in Mead & Hunt,
Inc.