PE&RS June 1996

VOLUME 62, NUMBER 6
PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING
JOURNAL OF THE AMERICAN SOCIETY FOR PHOTOGRAMMETRY AND REMOTE SENSING

Commentary

Status, Prospects, and the Profession
Kennert Torlegard, Royal Institute of Technology, Stockholm

From early plane-table photogrammetry through the analog and analytical stages, photogrammetry has now reached the digital stage of development. It is characterized by images stored in a computer, i.e., softcopy photogrammetry. Digital methods and software packages are being developed to such a degree that almost anyone who can use a PC or a workstation can become a photogrammetrist. Is this development dangerous, or is it promising? Is it a threat to professionals or to the profession? Or does it finally provide the end user with the tool that has long been awaited?

Those who develop methods and supply hardware and software have to realize that softcopy photogrammetry must go beyond analytical and analog photogrammetry. It has to be cheaper, faster, more accurate, more reliable, more robust, more automated, more integrated, and more user friendly in order to be competitive.

Data capture is one of the bottlenecks. At the recent ISPRS Commission IV Symposium, a panel concluded that an IFOV of one to two metres is needed for interpretation of detail in topographic mapping at a scale of 1:50,000. The data rates are thus very high. Compared with very high altitude photography with super-wide-angle cameras, this resolution corresponds to pixel sizes of 7 to 15 micrometres.
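The correspondence between ground resolution and pixel size follows from simple scale arithmetic. A minimal sketch, assuming a super-wide-angle camera with an 88 mm focal length and a flying height of 12,000 m (both illustrative values, not stated in the text):

```python
# Ground sample distance (GSD) = pixel size * photo scale denominator,
# where scale denominator = flying height / focal length.
focal_length_m = 0.088       # super-wide-angle camera (assumed)
flying_height_m = 12_000.0   # very high altitude flight (assumed)
scale_denominator = flying_height_m / focal_length_m  # about 136,000

for pixel_um in (7, 15):
    gsd_m = pixel_um * 1e-6 * scale_denominator
    print(f"{pixel_um} micrometre pixel -> {gsd_m:.1f} m on the ground")
```

Under these assumptions, 7- and 15-micrometre pixels yield ground pixels of roughly 1 and 2 metres, consistent with the IFOV the panel called for.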

Parts of the photogrammetric process are already automated to a large degree. The reconstruction of the interior and relative orientation of a stereo-pair is fully automated. Absolute orientation is automated only to the precise measurement of the center of targets (signals) on ground control points; the human operator still has to identify them. Aerial block triangulation can be done in digital images (15-micrometre pixels) with the same accuracy as classical analytical triangulation, and there are automated methods to do it (OEEPE test with a 4 by 7 block of images).

Image matching, parallax reading, and DEM generation have been automated and have long been in production. The same is the case with orthophoto resampling. The next step to be automated is feature extraction and interpretation of all the topographic detail that is needed in databases, in GIS, and on topographic maps. At present there is very little automation; human interaction is necessary in identification and selection. The man-computer interface is important: mono, stereo, and/or multiple images; menus; eight-hour shifts; etc. There is much to be done. Data compression is under discussion. It has been shown that point determination and location of distinct objects such as houses can be done with the same accuracy after 10:1 JPEG compression. A printed orthophoto did not show any evident loss of readability after similar compression. But how is it with interpretation of topographic detail in stereo? Here we have to be very careful before we draw any conclusions.
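The automated parallax reading mentioned above classically rests on area-based image matching by normalized cross-correlation (NCC): a template from one image is slid along the epipolar line of the other, and the offset with the highest correlation gives the x-parallax. A minimal one-dimensional sketch (toy data; production systems add image pyramids, sub-pixel fitting, and least-squares matching):

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_offset(template, search_line):
    """Slide the template along an epipolar line; return offset of best NCC."""
    w = len(template)
    scores = [ncc(template, search_line[i:i + w])
              for i in range(len(search_line) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example: a bright feature shifted by 3 pixels between the two images.
left  = [10, 10, 50, 90, 50, 10, 10, 10, 10, 10]
right = [10, 10, 10, 10, 10, 50, 90, 50, 10, 10]
template = left[1:6]            # patch around the feature in the left image
offset = best_offset(template, right)
print("best match at offset", offset)   # offset 4 -> x-parallax of 3 pixels
```

Repeating this at many points yields the disparity field from which heights, and hence a DEM, are computed.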

And finally to the profession. Even if we let the end user make his interpretation in stereo on his PC or workstation, the images have to be prepared for it by photogrammetrists, and a quality declaration has to accompany the product. And when it comes to the production of topographic detail, the photogrammetrist is the expert, and here he is the end user. We are not only suppliers of data to GISs; we use GISs as a tool in our profession.
