Direct Georeferencing
By Mohamed M.R. Mostafa, Senior Scientist, Applanix Corporation
This article is a descriptive summary of a number of technical articles published over the past five years. It describes the development and testing of a prototype system built at The University of Calgary as part of the research and development efforts in the field of integrated navigation/imaging systems over the past decade. These efforts have been led by Professor K-P Schwarz, who recently retired after devoting several decades to research and development in geomatics.
The Prototype Multi-sensor System
The system consists of two major components: an imaging component and a georeferencing component (GPS/IMU). The imaging component comprises digital cameras fixed to the same rigid body as the IMU and the GPS antenna (i.e. to the aircraft). To explore the capabilities of a multi-sensor system with an integrated DGPS/IMU as the georeferencing component and a digital camera system as the imaging component, the configuration shown in Figure 1 was used.
The georeferencing data is collected using an off-the-shelf Honeywell LRF III strapdown inertial navigation system (INS) and an Ashtech ZXII GPS receiver, connected to a portable computer for data acquisition and storage. To synchronize the INS, GPS, and image data in a common time frame, the INS records are stamped in real time with the PPS pulse of the Ashtech GPS receiver. A GPS receiver event marker is used to record the instant at which an image is captured by one of the cameras. All the data are therefore synchronized in the GPS time frame during data logging. Data acquisition is controlled by real-time logging software, which downloads the INS/GPS/image data to the host computers and time-tags the image and INS records with the GPS receiver-derived time pulse.
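As a minimal sketch of this synchronization step, the snippet below interpolates time-stamped INS records to the GPS-time instant recorded by the camera event marker. The record layout, sample rate, and function names are illustrative assumptions; the article does not describe the actual logging software or its formats.

```python
# Minimal sketch, assuming the INS stream is already referenced to GPS time
# via the PPS pulse and the event marker gives the exposure epoch in GPS time.
import numpy as np

def interpolate_to_exposure(ins_time, ins_values, exposure_time):
    """Linearly interpolate time-stamped INS records (N x k array, e.g.
    roll/pitch/yaw) to the GPS-time instant at which the camera event
    marker fired."""
    return np.array([np.interp(exposure_time, ins_time, ins_values[:, k])
                     for k in range(ins_values.shape[1])])

# Example: a 200 Hz synthetic attitude stream and one image event at t = 3.1234 s
ins_time = np.arange(0.0, 10.0, 1.0 / 200.0)        # GPS seconds (toy values)
ins_rpy  = np.column_stack([np.sin(ins_time),        # roll  [deg], synthetic
                            np.cos(ins_time),        # pitch [deg], synthetic
                            0.1 * ins_time])         # yaw   [deg], synthetic
print(interpolate_to_exposure(ins_time, ins_rpy, 3.1234))
```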
The image data is collected by two low-cost Kodak DCS420c (1.5k x 1k) digital cameras, one mounted in the nadir position and the other in an oblique position 30 degrees off the nadir. This arrangement was chosen after detailed simulations showed that such a dual-camera configuration would best compensate for the rather poor intersection geometry of the cameras.
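The effect of the intersection geometry can be illustrated with a back-of-the-envelope calculation. The sketch below compares the base-to-height ratio of a nadir stereo-pair at 60% endlap with the geometry obtained when a ray 30 degrees off the nadir is added. The sensor size, focal length, and image measurement accuracy are assumed nominal values for a DCS420-class camera, not figures from the article, and position/attitude errors are ignored, so only the relative improvement is meaningful.

```python
# Rough illustration of why the dual-camera geometry helps in height.
# All parameter values below are assumptions, not figures from the article.
import math

H         = 450.0      # flying height [m] (article: 400-500 m)
f         = 0.028      # assumed focal length [m]
sensor    = 0.0138     # assumed along-track sensor size [m] (DCS420 class)
sigma_img = 9e-6       # assumed image measurement accuracy [m] (~1 pixel)

# Nadir stereo pair with 60 % endlap: base-to-height ratio from the overlap.
footprint = H * sensor / f
base      = (1.0 - 0.60) * footprint
bh_nadir  = base / H
sigma_z_nadir = (1.0 / bh_nadir) * (H / f) * sigma_img

# Adding an oblique camera 30 degrees off nadir: the intersection angle at the
# ground point grows, which acts like a much larger base-to-height ratio.
bh_oblique = math.tan(math.radians(30.0))
sigma_z_oblique = (1.0 / bh_oblique) * (H / f) * sigma_img

print(f"nadir pair   : B/H = {bh_nadir:.2f}, sigma_Z ~ {sigma_z_nadir:.2f} m")
print(f"with oblique : B/H = {bh_oblique:.2f}, sigma_Z ~ {sigma_z_oblique:.2f} m")
```

The roughly two- to three-fold gain in height precision predicted by this simple geometric argument is consistent in order of magnitude with the improvement reported in the test results below.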
System Calibration
System calibration is required to relate GPS-derived positions, IMU-derived attitude parameters, and image-derived object point coordinates. The digital cameras themselves also had to be calibrated. Two calibration approaches have been studied in this research; they will be labeled terrestrial calibration and in-flight calibration. In both cases, a key problem is the determination of the boresight (misorientation) between each of the cameras and the IMU, which is assumed to be constant.
In terrestrial calibration, this problem is solved by mounting the digital cameras and the IMU on a rigid metal frame for a lab calibration. The cameras are used to acquire a number of images of a precisely surveyed target field. Using automated precise target recognition techniques, the interior geometry of the cameras, their lens distortions, and the camera/IMU boresight are then determined simultaneously from the images and the GPS/IMU data. In in-flight calibration, the system is calibrated under conditions that are as close as possible to the actual data acquisition environment: images are taken in flight over a well-controlled reference field, and the same parameters as above are then determined from these images. The advantages and drawbacks of the two methods have been discussed in the literature, and a direct comparison has been made (cf. Mostafa and Schwarz, 1999).
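For illustration, a minimal sketch of the boresight computation is given below, assuming that the IMU provides the body-to-mapping-frame rotation at each exposure and that a conventional adjustment over the target field provides the corresponding camera-to-mapping-frame rotation. The rotation convention and frame names are assumptions for the example; in practice the boresight would be estimated from many exposures rather than a single one.

```python
# Minimal sketch of boresight determination under simplified assumptions:
# R_b_m (body -> mapping) comes from GPS/IMU, R_c_m (camera -> mapping) from a
# photogrammetric adjustment over the target field, both at the same exposure.
import numpy as np

def rot_xyz(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw in degrees (x-y-z convention)."""
    r, p, y = np.radians([roll, pitch, yaw])
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# One calibration exposure (synthetic angles, for illustration only)
R_b_m = rot_xyz(1.2, -0.8, 45.0)    # body -> mapping frame (from GPS/IMU)
R_c_m = rot_xyz(1.0, -0.5, 44.7)    # camera -> mapping frame (from target field)

# Boresight: the constant rotation between camera frame and IMU body frame
R_c_b = R_b_m.T @ R_c_m
print(R_c_b)
```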
In-flight Testing of the Multi-sensor System
In October 1998, two test flights were conducted over The University of Calgary campus to compare the single-camera and dual-camera configurations directly. The system was installed onboard a Cessna 310. The flight altitude was 400-500 m, and the flight pattern covered the entire campus with a block of 60% endlap and 40% sidelap, using repeated west-to-east flight lines and a single flight per strip in the opposite direction (to simulate a three-camera system). Two cross strips were also flown, from south to north and vice versa. Close to one hundred ground control points had been established on the campus by static DGPS prior to the flight. They were well distributed in height (buildings) and had standard deviations of 1 cm in the horizontal coordinates and 2 cm in height.
Careful mission planning was done to ensure a favorable sun angle and a good satellite distribution in the sky. To avoid signal blockage by the aircraft, wide banking turns were flown, keeping the roll angle typically below 15 degrees. Two master stations were kept running during the test period: one located close to the airport, the other on campus.
In-flight Test Results
To study the performance of direct georeferencing with small-format digital cameras, the coordinates of the ground reference points were determined using flight data only; i.e. no ground coordinates were used at all. The comparisons for stereo-pairs only and for block triangulation will be discussed. When using stereo-pairs of nadir images in direct georeferencing mode, the root mean square (RMS) errors are 0.5 m in the horizontal coordinates and 1.6 m in height. The much larger errors in height are due to the poor intersection geometry of the small digital cameras. When one oblique image is added, the height error drops to 0.7 m (RMS), while the horizontal errors remain essentially unchanged. When GPS/IMU-assisted strip or block triangulation is applied, still without using ground control, the results improve considerably. Table 1 shows the results for a small block triangulation of three by three strips, for the case of nadir images only and for the case of nadir and oblique images. In the latter case, the RMS values drop to 0.2 m in the horizontal coordinates and 0.3 m in height.
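To make the direct georeferencing step itself concrete, the sketch below intersects two image rays whose origins (perspective centres from GPS) and directions (from IMU attitude, boresight, and measured image coordinates) are given in the mapping frame; no ground control enters the computation. All numerical values are synthetic, and the least-squares two-ray intersection shown here is a generic formulation, not necessarily the algorithm used in the tests.

```python
# Sketch of ground-point determination by direct georeferencing: intersect the
# rays of a stereo-pair whose exterior orientation comes entirely from GPS/IMU.
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares intersection: minimise the sum of squared distances from
    the point to each ray (origin + t * unit direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)      # projector onto plane normal to ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two exposures: perspective centres (from GPS) and ray directions in the
# mapping frame (from IMU attitude, boresight and image coordinates). Synthetic.
origins    = [np.array([0.0,  0.0, 450.0]),
              np.array([90.0, 0.0, 450.0])]
directions = [np.array([ 0.10, 0.02, -1.0]),
              np.array([-0.10, 0.02, -1.0])]
print(intersect_rays(origins, directions))   # ground coordinates of the point
```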

The results presented here show the potential of these techniques. They add considerable flexibility to airborne mapping because very small areas, covered by only a few stereo-pairs, can now be mapped rapidly and without the need for ground control. The accuracy of 0.2-0.3 m (RMS) currently achieved is sufficient for some aerial survey applications. Operationally, such a system is almost a one-man show. There is also considerable potential for further operational and accuracy improvement, considering that digital cameras with a much wider field of view are being developed rapidly and that extending the size of the triangulated block will increase accuracy, due to the positive effect of direct georeferencing.
Further Reading
Mostafa, M.M.R. and K.P. Schwarz, 2001. Digital image georeferencing from a multiple camera system by GPS/INS. ISPRS Journal of Photogrammetry and Remote Sensing, 56: 1-12.
Mostafa, M.M.R. and K.P. Schwarz, 2000. A Multi-Sensor System for Airborne Image Capture and Georeferencing. Photogrammetric Engineering and Remote Sensing, 66(12): 1417-1423.
Mostafa, M.M.R. and K.P. Schwarz, 1999. An autonomous multi-sensor system for airborne digital image capture and georeferencing. Proceedings of the ASPRS Annual Convention, Portland, Oregon, May 17-21, pp. 976-987.
Schwarz, K.P. and M.M.R. Mostafa, 2000. Mapping the Earth by Airborne Multi-sensor Systems - Image Capture and Geo-referencing. GIM Magazine.
Schwarz, K.P., 2000. Mapping the Earth's Surface and Its Gravity Field by Integrated Kinematic Systems. Lecture Notes, Nordic Autumn School, Fevik, Norway, August 28 - September 1.