Mapping Matters

By Qassim A. Abdullah, Ph.D., PLS, CP
Your Questions Answered
The layman’s perspective on technical theory
and practical applications of mapping and GIS

Please send your question to and indicate whether you want your name to be withheld from publication.
Answers to all questions that are not published in PE&RS can be found online at

Dr. Abdullah is the Chief Scientist at EarthData International, LLC, Frederick, MD.

The contents of this column reflect the views of the author, who is responsible for the facts and accuracy of the data presented herein. The contents do not necessarily reflect the official views or policies of the American Society for Photogrammetry and Remote Sensing and/or EarthData International, LLC.


October 2010 (Download a PDF 443Kb)

Question: I noticed that the vertical accuracy is more stringent than the horizontal accuracy according to both ASPRS and NSSDA standards. For example, if I produce orthophoto products from 15 cm (6 in) digital imagery, the stated horizontal accuracy using the ASPRS standard is 30 cm (1 ft), while the expected vertical accuracy is 20 cm (0.67 ft). We always believed that the vertical accuracy of any mapping product is less stringent than the horizontal accuracy. Why is that? Evgenia Brodyagina, Frederick, Maryland - USA

This answer contains graphics and tables. Please see the PDF

September 2010 (Download a PDF 385Kb)

Question: I noticed that according to both ASPRS and NSSDA standards, the vertical accuracy is more stringent than the horizontal accuracy. For example, if I produce orthophoto products from 15 cm (6 in.) digital imagery, the stated ASPRS standard for horizontal accuracy is 30 cm (1 ft), while the expected vertical accuracy is 20 cm (0.67 ft). We always believed that the vertical accuracy of any mapping product is less stringent than the horizontal accuracy. Why is that?

Dr. Abdullah: PART II: In Part I of my answer (PE&RS, August 2010), I addressed the issues that resulted in the contradictory accuracy figures that came into question. I explained that many of the map accuracy standards used today, particularly here in the United States, were derived from the use of film sensors and paper maps. At the conclusion of Part I, I called on all concerned agencies and organizations in the United States to develop a new national standard that can be applied to modern geospatial data products. In Part II, I would like to introduce some high-level thoughts and ideas to generate discussion on how to create such a standard, and I hope these ideas may even prove useful in its development.

1. The new standard should be useful on a national level:
The standard should be accepted and endorsed by all those agencies and organizations that historically publish and maintain map standards in the United States, such as ASPRS, FGDC, USACE, FEMA, and others. In addition, the new standard should appeal to users from different sectors of the mapping and GIS community through its transparency and ease of use. When it comes to geospatial products, a single standard can be used if it is drafted carefully and in a way that satisfies varying user requirements. Different agencies or users should be able to apply different accuracy figures to the same standard and still achieve results that are specific to their unique suite of products. This is easily achievable by matching products to specific accuracies based on product resolution or map class. I will elaborate further on this concept later in this article. Currently, different agencies have already established, or are in the process of establishing, their own individual standards. For instance, agencies such as FEMA, ASPRS, and the USGS have all published their own standards or guidelines for lidar data accuracy. Since lidar systems are based on the same fundamental laser technology, raw products from different lidar systems all possess more or less the same quality and accuracy. Quality and accuracy are essentially determined by the methods used to post-process and handle the data; therefore, users should have a single standard they can use to calculate accuracies that are specific to the methods being applied.

2. The new standard should be modular:
The old concept of “one sensor, multiple products” no longer applies to today’s modern map-making practices. The diverse range of technologies currently used in map-making dictates the need for new standards that can be applied to new sensor technologies, such as lidar (topographic lidar and bathymetric lidar), interferometric synthetic aperture radar (IFSAR and InSAR), digital cameras, underwater survey by sonar, etc. Therefore, the standard should be modular, in the sense that it should encompass a set of sub-standards that can be individually applied to different technologies. For instance, one sub-standard may be used to define accuracies and specifications of products derived from imaging sensors. As a result, this group of products (e.g., orthophoto, compiled map, and elevation data) would share the same vertical and horizontal accuracy requirements.

Another sub-standard might address the specifications and accuracy of lidar and IFSAR data and would define products such as elevation data and ortho-like intensity images; additional sub-standards could be defined for hydrographic or acoustic surveys that use sonar technologies to map sea, river, and lake floors.

By developing a single standard that simply and uniquely addresses each sensor type, this modular approach eliminates the user’s confusion when trying to interpret multiple unrelated standards from multiple unrelated agencies. Modularity also lends itself well to change and expansion over time. Rather than becoming outdated and inapplicable, this modular standard will change and adapt as new sensor technologies and products are added by the geospatial mapping community.

3. The new standard should apply one of the following two measures to classify the accuracies of final products:

a) Accuracy according to the resolution of the final delivered products
For example, an orthophoto produced with 15 cm GSD should have a horizontal accuracy of RMSEX = RMSEY = 1.25*GSD (of the final delivered product) or 18.75 cm, regardless of the sensor used or the flying altitude. The proposed accuracy figure is a little aggressive when compared with the current practice of assigning an ASPRS Class 1 accuracy of RMSEX = RMSEY = 30 cm for such a product. Vertical accuracy can be derived using a similar measure of RMSEV = 1.25*GSD (of the final delivered product) or 18.75 cm, versus the current practice of labeling such products with an ASPRS Class 1 accuracy of RMSEv = 20 cm for 2 ft contour intervals.

The standard should not allow for the production of orthophotos with a GSD that is smaller than the raw imagery GSD (the GSD during acquisition). However, the standard should allow for re-sampling of the raw imagery for the production of coarser orthophoto GSDs, as long as the final accuracy figures are derived from the re-sampled GSD and not the native raw imagery GSD. Using the resolution or GSD of the imagery in referencing the final product accuracy introduces a more scientific and acceptable approach since a product’s accuracy is no longer based on the paper scale of a map.
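As a sketch, the resolution-based rule above (RMSE = 1.25 × GSD of the final delivered product, with no orthophoto GSD finer than the raw imagery GSD) can be written in a few lines of Python; the function name and return structure are illustrative only, not part of any published standard:

```python
def proposed_accuracy(product_gsd_m, raw_gsd_m):
    """Hypothetical resolution-based accuracy rule discussed above:
    RMSEx = RMSEy = RMSEz = 1.25 * GSD of the final delivered product."""
    # The proposed standard would not allow orthophoto GSDs finer than the
    # raw (acquisition) imagery GSD; resampling to a coarser GSD is fine,
    # as long as accuracy is derived from the resampled GSD.
    if product_gsd_m < raw_gsd_m:
        raise ValueError("product GSD may not be finer than the raw imagery GSD")
    rmse_m = 1.25 * product_gsd_m
    return {"rmse_x_m": rmse_m, "rmse_y_m": rmse_m, "rmse_z_m": rmse_m}

# The 15 cm GSD orthophoto example above: 1.25 * 0.15 m = 0.1875 m
print(proposed_accuracy(0.15, 0.15))
```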

One may argue that some users (e.g., a soldier on a battlefield) may need hard copy maps for field investigations. This is a valid concern. The new standard should allow users the option to produce paper maps using any scale they choose, as long as the map accuracy is stated on the paper map and the scale is represented by a scale bar that automatically adjusts to the map scale.

b) Accuracy according to national map classes
In this case, the standard can specify multiple map categories for all users, and the standard will provide specifications and accuracy figures to support each of these classes. The following proposed categories represent reasonable classes that should fit the needs of most, if not all, users:

1. Engineering class-I grade maps that require a horizontal accuracy of RMSEX = RMSEY = 10 cm or better and vertical accuracy of RMSEv = 10 cm
2. Engineering class-II grade maps that require a horizontal accuracy of RMSEX = RMSEY = 20 cm or better and vertical accuracy of RMSEv = 20 cm
3. Planning class-I grade maps that require a horizontal accuracy of RMSEX = RMSEY = 30 cm or better and vertical accuracy of RMSEv = 30 cm
4. Planning class-II grade maps that require a horizontal accuracy of RMSEX = RMSEY = 50 cm or better and vertical accuracy of RMSEv = 50 cm
5. General purpose grade maps that require a horizontal accuracy of RMSEX = RMSEY = 75 cm or better and vertical accuracy of RMSEv = 75 cm
6. User defined grade maps that do not fit into any of the previous five categories.
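For illustration only, the proposed classes can be captured as a simple lookup table; the class keys, the threshold values, and the idea of testing a measured RMSE against them are paraphrased from the list above, not taken from any adopted standard:

```python
# Hypothetical lookup for the proposed national map classes above
# (thresholds in metres, applying to both horizontal and vertical RMSE).
MAP_CLASSES = {
    "engineering-I": 0.10,
    "engineering-II": 0.20,
    "planning-I": 0.30,
    "planning-II": 0.50,
    "general-purpose": 0.75,
}

def meets_class(map_class, measured_rmse_m):
    """A dataset meets a class when its tested RMSE is at or below
    the class threshold (the 'or better' wording above)."""
    return measured_rmse_m <= MAP_CLASSES[map_class]

print(meets_class("planning-I", 0.25))  # True: 25 cm is better than 30 cm
```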

This concept provides more flexibility for data providers in designing and executing the project. However, it may be problematic for users who are not well versed in relating map classes to product spatial resolution (GSD). Keep in mind that because digital sensors are manufactured with different lenses and CCD array sizes, different combinations of image resolution and post spacing may result in the same final product accuracies; therefore, it is important that users clearly define their required GSD or work with the vendor to determine the optimal GSD for their needs.

4. The new standard should address aerial triangulation, sensor position, and orientation accuracies:
Currently, there is no national standard that addresses the accuracy of sensor position and orientation. As a result, the subject has been left open to interpretation by users and data providers. The accuracy of direct or indirect sensor positioning and orientation (whether derived from aerial triangulation, IMU, or even lidar bore-sighting parameters) is a good measure to consider in determining the final accuracy of the derived products. Furthermore, issues can be detected and mitigated prior to product delivery if the standard defines and helps govern sensor performance. In the past, we adopted the rule that aerial triangulation accuracy must equal an RMSE of 1/10,000 of the flying altitude for Easting and Northing and 1/9,000 of the flying altitude for height. Obviously, the preceding criteria were based on the then-popular large format film cameras that were equipped with 150 mm focal length lenses. Today’s digital sensors come with different lenses and are flown at different altitudes to achieve the same ground sampling distance (GSD), so relying only on the flying altitude to determine accuracy is no longer scientific or practical, and new criteria need to be developed.

When examining the 1/9,000 and 1/10,000 criteria, the following accuracy figures apply for 1:7,200 scale imagery that is flown using a large format film metric camera, such as a Leica RC-30 or Zeiss RMK, to produce a 1:1,200 scale map (with a 150 mm lens, a 1:7,200 photo scale corresponds to a flying height H of about 1,100 m):

RMSEX = RMSEY = 1/10,000*H = 1/10,000*1,100 = 0.11 m
RMSEZ = 1/9,000*H = 1/9,000*1,100 = 0.12 m

When using the current ASPRS class 1 standard, the following accuracy figures would be expected for a map derived from the same imagery:

RMSEX = RMSEY = 0.30 m
RMSEZ = 0.20 m (assuming 0.60 m [2 ft] contours were generated from the imagery)

The previous accuracy figures call for aerial triangulation results that are almost three times (roughly 270%) as accurate as the final map accuracy. Old photogrammetric processes and technologies required stringent accuracy requirements for aerial triangulation in order to guarantee the final map accuracy, because past map production methods passed through many different manual operations that ultimately resulted in a loss of accuracy.

Today’s all-digital map-making processes minimize the loss of accuracy throughout the entire map production cycle. In my opinion, the new standard should support accuracy measurements for aerial triangulation based on the resulting GSD. Considering all of the advances we are witnessing in today’s map-making processes, an aerial triangulation horizontal and vertical accuracy of 200% of (i.e., twice as tight as) the final map accuracy should be sufficient to meet the proposed map accuracy. Accordingly, the aerial triangulation accuracy required to produce a map product with a final GSD of 0.15 m, regardless of the flying height, is shown below:

RMSEX = RMSEY = RMSEZ = 0.625*GSD = 0.625*0.15 = 0.09 m
(if the final map accuracy is based on RMSEX = RMSEY = RMSEZ = 1.25*GSD = 0.1875 m)
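The old flying-height rule and the proposed GSD-based rule can be compared side by side in a short sketch; the function names are mine, and the 0.625 × GSD figure follows the 200% reasoning above:

```python
def at_accuracy_old(flying_height_m):
    """Film-era rule of thumb cited above: aerial triangulation RMSE of
    H/10,000 for Easting/Northing and H/9,000 for height."""
    return flying_height_m / 10000.0, flying_height_m / 9000.0

def at_accuracy_proposed(final_gsd_m):
    """Proposed GSD-based rule: AT accuracy twice as tight as the final
    map accuracy of 1.25 * GSD, i.e. 0.625 * GSD."""
    return 0.625 * final_gsd_m

xy, z = at_accuracy_old(1100.0)                 # 1:7,200 film photography
print(round(xy, 2), round(z, 2))                # 0.11 0.12, as above
print(round(at_accuracy_proposed(0.15), 3))     # 0.094, i.e. ~0.09 m
```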

Similar calculations can determine the required accuracy for direct orientation (no aerial triangulation required) using systems such as IMUs. To derive the required accuracy for roll, pitch, heading, and position, the previous aerial triangulation error budget of 0.09 m can be used to mathematically derive the acceptable errors in the IMU-derived sensor position and orientation.

Lastly, I feel that a new approach should be developed to calculate lidar orientation and bore-sighting accuracies. Since the sensor’s geopositioning, and not the laser ranging, is the main contributor to the geometrical accuracy of lidar data, this calculation should link lidar final accuracy to sensor orientation and positioning accuracies. In the forthcoming issue of PE&RS, I will introduce the final part (Part III) of my answer, which focuses on the importance of having the new standard address data derived from non-conventional modern mapping sensors such as lidar, IFSAR, and underwater topographic survey using acoustic devices such as active SONAR (SOund Navigation And Ranging). In addition, Part III will provide recommendations on the statistical methodology and confidence level to be used in the standard.

August 2010 (Download a PDF 183Kb)

Question: I noticed that according to both ASPRS and NSSDA standards, the vertical accuracy is more stringent than the horizontal accuracy. For example, if I produce orthophoto products from 15 cm (6 in.) digital imagery, the stated ASPRS standard for horizontal accuracy is 30 cm (1 ft), while the expected vertical accuracy is 20 cm (0.67 ft). We always believed that the vertical accuracy of any mapping product is less stringent than the horizontal accuracy. Why is that?

Dr. Abdullah: I am glad you brought up this important issue concerning existing mapping standards and how they apply differently to imagery acquired by the new digital sensors. I would like to correct your understanding of the ASPRS and National Standard for Spatial Data Accuracy (NSSDA) standards as they relate to the example you’ve provided. The horizontal and vertical accuracy figures in the example are contradictory not because the ASPRS standard is stated incorrectly, but because of the way we associate the image resolution, or Ground Sampling Distance (GSD), with the standard’s defined map scale or contour intervals.

When softcopy photogrammetry was introduced in the early 1990s, it was standard practice to scan the film or the diapositive at a resolution of 21 microns, or about 1,200 dpi (dots per inch). Therefore, for a negative film scale of 1:7,200 (1”=600’), which is designed to support a map scale of 1:1,200 (1”=100’) according to a 6x enlargement ratio, the resulting Ground Sampling Distance (GSD) after scanning is 15 cm (6 in.). When we transitioned to digital aerial sensors, which essentially replaced film cameras, we maintained the same standards and conventions that we used for film products. As a result, digital imagery flown with a 15 cm GSD is routinely used for the production of 1:1,200 (1”=100’) scale maps or orthophotos and 2 ft contours. So the confusion actually originated when we adopted the old conventions for the new mapping products from digital cameras.
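The film-era arithmetic above is easy to verify: the ground pixel size of scanned film is simply the scanner pixel size multiplied by the photo scale denominator. A minimal sketch (the function name is illustrative):

```python
def scanned_gsd_m(pixel_size_um, photo_scale_denominator):
    """GSD of scanned film: scanner pixel size times the photo scale.
    E.g. 21-micron scans of 1:7,200 film give ~15 cm ground pixels."""
    return pixel_size_um * 1e-6 * photo_scale_denominator

print(round(scanned_gsd_m(21, 7200), 3))  # ~0.151 m, the 15 cm (6 in.) above
```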

The ASPRS standard did not specify a certain GSD for a certain map scale, but it did state that, for class 1 mapping products, a 1:1,200 scale map should meet a Root Mean Square Error (RMSE) of 30 cm horizontally. Also, the standard did not specify that imagery with a 15 cm GSD had to be used for the production of 2 ft contours. The ASPRS standard states that the class 1 vertical accuracy for elevation data with 2 ft contour intervals must meet an RMSE of 20 cm; however, when we extract accuracy figures for 15 cm imagery, we use the above-mentioned association of map scale and GSD to apply the ASPRS accuracy standard in evaluating the new digital sensor data products.

This is clearly a confusing situation that we created ourselves due to the lack of concise mapping standards for the highly accurate products produced by modern digital sensors. Immediate needs forced the mapping community to adapt conventions and measures that were originally designed for film cameras and paper-based products. The well-known “enlargement ratio”, which had been used in the past to determine how much film or a diapositive could be enlarged to produce a final map with minimal or no degradation in quality, is no longer applicable in today’s digital world of geospatial data production. An enlargement ratio of 6 was widely accepted and used in the mapping industry when dealing with film-based mapping products; however, some modern digital sensors are built with different CCD sizes (i.e., 6 microns versus the 14 or 21 microns of scanned film) and a variety of lenses, and therefore the enlargement ratio becomes irrelevant when compared to film scanned at 21 microns. In fact, the application of scale to digital imagery is not valid and only adds to the confusion, particularly since the concepts of paper scale and enlargement ratio are based on film or paper-based maps. Again, the contradicting accuracies represented in our original example are not derived from the ASPRS standard, but result from our misconception that digital imagery with a GSD of 15 cm is only suitable to produce a 1:1,200 (1”=100’) scale map with 2 ft contours.
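The film-era enlargement-ratio convention can also be written down directly: supported map scale denominator = photo scale denominator divided by the enlargement ratio. A toy sketch (names illustrative):

```python
def supported_map_scale(photo_scale_denominator, enlargement_ratio=6):
    """Film-era convention above: the map scale denominator supported by a
    photo scale, given the accepted enlargement ratio (6 for film)."""
    return photo_scale_denominator / enlargement_ratio

print(int(supported_map_scale(7200)))  # 1200, i.e. the 1:1,200 map above
```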

The ASPRS mapping standard, however, is problematic when applied to data from digital sensors. The ASPRS standard materialized in the 1980s and was approved in the 1990s, before digital sensors were used (or even existed) for mapping purposes. When we consider our level of achievement using today’s mapping processes, the ASPRS standard is outdated and no longer suitable for the further advancement of digital passive and active sensors or for supporting technologies such as GPS and IMU, especially since the standard is based on mapping scale. Modern standards that are more suitable for digital maps and for current and future technologies, such as digital cameras, lidar, and IFSAR, are needed to replace both the National Map Accuracy Standard (NMAS) and the ASPRS standard. A new set of standards should be developed based on the GSD of the digital data and the resolving power of the imaging sensor, and not on scale, since digital scale can vary from one user to another based on the zoom ratio used to evaluate the data.

These same arguments are valid for the more modern standard published by the Federal Geographic Data Committee, the National Standard for Spatial Data Accuracy (NSSDA). The phrase “Accuracy Standard” in the NSSDA title is misleading; the document would be better titled “Testing Guidelines”. The term “standard test method” is defined by Wikipedia as follows: “to describe a definitive procedure which produces a test result. It may involve making a careful personal observation or conducting a highly technical measurement”. This definition does not apply to the NSSDA since it does not quantify the testing threshold. To determine final accuracies, the NSSDA provides a statistical acceptance formula based on a 95% confidence level without addressing the threshold itself (in this case, the RMSE). Users typically derive an RMSE value in order to use the NSSDA.
When users address the NSSDA, we find they are often confused by these guidelines and misrepresent the standard in some way, such as by mislabeling requirements (e.g., “2 ft RMSE at 95%”). This example statistically makes no sense, since the term RMSE always refers to test results with a confidence level of around 68%, not 95%. In my opinion, the industry desperately needs to reform and consolidate all three standards - NMAS, ASPRS, and NSSDA - into one single, unambiguous national standard that clearly defines procedures and acceptance or rejection thresholds for the different mapping products. This effort requires constructive and focused cooperation between the ASPRS and the FGDC (which represents almost all federal agencies) to draft a standard that is based on today’s knowledge, practices, and vision for the future. This effort should focus on developing sets of standards that will remain applicable over time and will not quickly become obsolete as today’s innovations and technologies rapidly progress. In the next issue of this column, I will further discuss my ideas and thoughts on developing this standard, as well as the different conditions and parameters on which it should be based.
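To make the 68% vs. 95% point concrete, the sketch below applies the published NSSDA conversion factors that turn a tested RMSE into a 95% confidence accuracy value (1.9600 × RMSEz for vertical, 1.7308 × RMSEr for horizontal); the function names are mine:

```python
import math

# NSSDA reporting converts an RMSE (roughly a 68% confidence figure for
# one-dimensional normal errors) into a value at 95% confidence.
def nssda_vertical_95(rmse_z):
    return 1.9600 * rmse_z          # 1-D normal: 95% = 1.96 sigma

def nssda_horizontal_95(rmse_x, rmse_y):
    rmse_r = math.hypot(rmse_x, rmse_y)   # radial RMSE from x and y
    return 1.7308 * rmse_r                # circular case, RMSEx ~= RMSEy

# A "2 ft RMSE at 95%" label conflates the two measures: a 2 ft vertical
# RMSE reported at 95% confidence is actually 1.96 * 2 = 3.92 ft.
print(round(nssda_vertical_95(2.0), 2))  # 3.92
```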

July 2010 (Download a PDF 476Kb)

Question: What is a “bias” in mapping processing? Where does it come from? How is it calculated? How would one deal with it at different stages of the process?

This answer contains graphics and tables. Please see the PDF

April 2010 (Download a PDF 431Kb)

Question: Due to plate tectonics, the Earth’s crust is moving at a rate of 5 cm per year. What impact does this have on our GPS solutions and on the accuracy of jobs that require very precise coordinate measurements?

This answer contains tables. Please see the PDF

March 2010 (Download a PDF 186Kb)

Question: My questions are about the accuracy degradation of horizontal and vertical data during the photogrammetric process for airborne platforms. I know that there are many variables involved, but is there a relatively constant multiplier that determines the loss of accuracy between ground survey and AT results, as well as between AT results and final vector data and contours? Also, can I assume digital and film cameras will result in different multipliers? Finally, should the flying height be the sole determinant of the data accuracy?

This answer contains tables. Please see the PDF

August 2009 (Download a PDF 519Kb)

Question: Data re-projection is done all the time by both GIS neophytes and advanced users, but a slightly wrong parameter can wreak havoc with respect to a project’s destiny if undetected. Many update projects were originally performed in NAD27 and the client now wants the data moved to a more up-to-date datum. What happens behind the scenes when data gets re-projected? Other than embarking on an expensive ground survey effort, what assurances exist to give the user confidence that what has been done is correct? What special considerations should be taken into account when data is re-projected and what are the potential pitfalls? Is every dataset a candidate to be re-projected, if not, why not?

Complicating the re-projection piece, older projects may have been done in NGVD29 and need to be moved to NAVD88. Similar to the above, what happens behind the scenes, and how do we know the result is correct? What are some of the commonly performed vertical shifts in the industry? Is there a standardized practice for performing this task? What impact, if any, does this vertical shift have on contours? Why do some firms/clients/consultants feel it necessary to recollect spot elevations and regenerate the contours in the new vertical datum, rather than just shifting the contours generated from the older vertical datum? Under what circumstances would a vertical shift be ill-advised?

Dr. Abdullah: I personally consider this question to be among the most important issues I face as a mapping scientist. Despite full awareness of the importance of coordinate and datum conversions and the role they play in the accuracy of the final delivered mapping products, most users and providers have a very limited understanding of the topic. The question accurately describes the common mistakes, misunderstandings, concerns, and anxiety that many users experience when accepting or rejecting a mapping product. Given its importance, I will try to address all aspects of the question as thoroughly as I can. I will start by describing what happens behind the scenes.

Datums and Ellipsoids: Defined by origin and orientation, a datum is a reference coordinate system that is physically tied to the surface of the Earth with control stations and has an associated reference ellipsoid (an ellipse of revolution) that closely approximates the shape of the Earth’s geoid. The ellipsoid provides a reference surface for defining three-dimensional geodetic or curvilinear coordinates and provides a foundation for map projection. Here in the United States, the old horizontal North American Datum of 1927 (NAD27) was replaced with a more accurate datum called the North American Datum of 1983, or NAD83. NAD83, which is a geocentric system with its center positioned close to the center of the Earth, utilizes the GRS80 ellipsoid that was recommended by the International Association of Geodesy (IAG). NAD27, on the other hand, is a non-geocentric datum that utilizes an older reference ellipsoid, or oblate spheroid (an ellipsoid of revolution obtained by rotating an ellipse about its shorter axis), called the Clarke 1866 spheroid.

Conversion Types: There are two types of conversions that can occur during any re-projection: datum transformation and projection system transformation. Datum transformation is needed when the reference frame underlying a map’s coordinate system is redefined. An example of a datum transformation is upgrading older maps from the old American datum of NAD27 to the newer NAD83 datum. The coordinate system (not the coordinate values), such as State Plane, may be kept the same during the transformation, but the reference datum is replaced. Projection system transformation is needed when a map’s projected coordinates are moved from one projection system to another, such as when a map is converted from a State Plane coordinate system to Universal Transverse Mercator (UTM). Here, the horizontal datum (e.g., NAD83) of the original and the transformed map may remain the same.

Datum Transformations: In the process of updating older maps produced in reference to NAD27, a datum transformation is required to move the map’s reference frame from NAD27 to NAD83. Several different methods for transforming coordinate data are widely accepted in the geodetic and surveying communities. In North America, the most widely used approach for translating coordinates from NAD27 to NAD83 is an intuitive method called NADCON (an acronym for North American Datum CONversion). In NADCON, first- and second-order geodetic data in the National Geodetic Survey (NGS, part of NOAA) database are modeled using a minimum curvature algorithm to produce a grid of shift values. Simple interpolation techniques are then used to estimate the coordinate datum shift between NAD83 and NAD27 at non-nodal points. Those who utilize NADCON rarely obtain bad conversion results. Most of the common blunders and mistakes made by users of the various conversion tools result from not fully understanding the basics of geodetic geometry. As such, the process of conversion should be handled by individuals who have some understanding of, and experience in, dealing with datum and coordinate conversions.
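The grid-plus-interpolation idea behind NADCON can be sketched as a plain bilinear interpolation over tabulated shift values; the toy grid and function below are illustrative only, since real NADCON grids carry headers, units, and far denser nodes:

```python
import math

def bilinear_shift(grid, x, y):
    """Toy version of the NADCON idea: datum shifts tabulated at integer
    grid nodes (grid[row][col]) are bilinearly interpolated at the
    non-nodal point (x, y)."""
    col0, row0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - col0, y - row0
    s00 = grid[row0][col0]          # shifts at the four surrounding nodes
    s10 = grid[row0][col0 + 1]
    s01 = grid[row0 + 1][col0]
    s11 = grid[row0 + 1][col0 + 1]
    top = s00 * (1 - dx) + s10 * dx
    bot = s01 * (1 - dx) + s11 * dx
    return top * (1 - dy) + bot * dy

grid = [[0.0, 1.0], [2.0, 3.0]]     # made-up shift values, one grid cell
print(bilinear_shift(grid, 0.5, 0.5))  # 1.5, the midpoint of the cell
```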

Once the Global Positioning System (GPS) came along, the discrepancies inherent in the original NAD83, which was first adjusted in 1986 and is referred to as NAD83/86 to differentiate it from newer adjustments of NAD83, became apparent. New adjustments of NAD83 (the HARN adjustments, designated NAD83 199X, where 199X is the year each state was re-adjusted) resulted in more accurate horizontal datums for North America. The multi-year HARN adjustments added more confusion to the already complicated issue of the North American Datum, especially when the user had to convert back and forth to GPS coordinates based on the World Geodetic System of 1984 (WGS84). An ellipsoid similar to the GRS80 ellipsoid is used in WGS84, a coordinate system developed by the Department of Defense (DoD) to support global activities involving mapping, charting, positioning, and navigation. Moreover, the DoD introduced WGS84 to express satellite positions as a function of time (orbits). WGS84 and NAD83 were intended to be the same, but because of their different methods of realization, the datums differ slightly (by less than 1 meter). Access to NAD83 was readily available through 250,000 or more published, non-GPS-surveyed stations that were physically marked with monuments. WGS84 stations, on the other hand, were accessible only to DoD personnel. Many military facilities have WGS84 monuments that typically were positioned by point positioning methods and processed by U.S. military agencies using precise ephemerides.

In 1994, the DoD decided to update the realization of WGS84 to account for plate tectonic motion since the original realization, as well as for the availability of more accurate equipment and methods on the ground. With that decision, the new WGS84 was made coincident with the International Terrestrial Reference Frame (ITRF) realization known as ITRF92 and was designated WGS84(G730), where G730 represents the GPS week number when it was implemented. In the late 1980s, the International Earth Rotation Service (IERS) had introduced the International Terrestrial Reference System (ITRS) to support civilian scientific activities that require highly accurate positional coordinates. The ITRS is considered the first major international reference system to directly address plate tectonics and other forms of crustal motion, by publishing velocities as well as positions for its worldwide network of several hundred stations. The IERS, with the help of several international institutions, derived these positions and velocities using highly precise geodetic techniques such as GPS, Very Long Baseline Interferometry (VLBI), Satellite Laser Ranging (SLR), Lunar Laser Ranging (LLR), and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS). Every year or so since introducing ITRF88, the IERS has developed a new ITRS realization, such as ITRF89, ITRF90, …, ITRF97, ITRF00, etc. Since the tectonic plates continue to move, subsequent realizations of WGS84 were published, such as WGS84(G873) and WGS84(G1150). One of the newest realizations is equivalent to ITRF2000 at epoch 2001.0 (i.e., ITRF2000 at 1/1/2001).
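Because the ITRS publishes velocities alongside positions, a station coordinate is only valid at a stated epoch and must be propagated to other dates: x(t) = x(t0) + v·(t − t0). A minimal sketch per coordinate component, with illustrative numbers rather than real station data:

```python
def propagate_position(x0_m, v_m_per_yr, epoch0, epoch):
    """Propagate one coordinate component by its published velocity:
    x(t) = x(t0) + v * (t - t0), epochs in decimal years."""
    return x0_m + v_m_per_yr * (epoch - epoch0)

# A plate moving ~2 cm/yr shifts a coordinate by 0.2 m over a decade
print(round(propagate_position(1000.0, 0.02, 2001.0, 2011.0), 3))  # 1000.2
```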

As time goes on, the NAD83 datum drifts further away from the ITRF realizations unless a new adjustment is conducted. The later HARN adjustments, for example, are closer in value to the coordinates of the NGS network of Continuously Operating Reference Stations (CORS) than the earlier ones. CORS provides GPS carrier-phase and code-range measurements in support of three-dimensional positioning activities throughout the United States and its territories. Surveyors can apply CORS data to the data from their own receivers to position points. The CORS coordinates in the U.S. are computed using ITRF coordinates and then transformed to NAD83. The problem with using ITRF for this purpose lies in the fact that the coordinates are constantly changing with the recorded movement of the North American tectonic plate. In the latest national adjustment of NAD83, conducted in 2007, only the CORS positions were held fixed while all other positions were adjusted. This resulted in ITRF-consistent coordinates for all NGS positions used in the adjustment, as opposed to only the published ITRF positions of the CORS sites.

Projection System Transformation: Projected coordinate conversion, such as converting the geographic coordinates (latitude and longitude) of a point to the Universal Transverse Mercator (UTM) or a State Plane coordinate system, is another confusing matter for novice users. State plane coordinate systems, for example, may include multiple zones (e.g., south, north, central, etc.) for the same state, and unless the task is clearly specified, the user may assign a coordinate set to the wrong zone during conversion. Vertical datum conversion poses a similar risk: here in the U.S., maps were originally compiled in reference to the old National Geodetic Vertical Datum of 1929 (NGVD29), and conversion is necessary to relate data back and forth between NGVD29 and the newer, more accurate North American Vertical Datum of 1988 (NAVD88). Similar problems arise because most surveying is now conducted using GPS observations. Satellite observations are all referenced to the WGS84 ellipsoid, and the user has to convert the resulting elevations to geoid-based orthometric heights using a published geoid model.

As with the NAD83 updates, the geoid model has also gone through many re-adjustments, and different geoid models were published over the years, such as GEOID93, GEOID99, GEOID03, and the most recent GEOID06, which so far covers only Alaska. Without details about the data at hand, a user may easily assign the wrong geoid model during conversion, resulting in a sizable elevation bias even for a small project. When a new geoid model is published, a new grid of geoid heights (the separation between ellipsoid and geoid) is provided, and most conversion packages use these tabulated values to interpolate the geoid height for non-nodal positions. For the vertical datum conversion between NGVD29 and NAVD88, a program similar to NADCON, called VERTCON, is used throughout the industry to convert data from the old to the new vertical datum.
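
The interpolation of gridded geoid heights described above can be sketched in a few lines of Python. This is an illustration only, not code from VERTCON or any NGS package; the grid-cell values and the sample location are made up, and real geoid grids of course carry their own spacing and metadata.

```python
# Sketch: bilinear interpolation of a geoid height N from the four
# surrounding grid nodes, then conversion of a GPS-derived ellipsoidal
# height h to an orthometric height H = h - N.

def bilinear_geoid_height(lat, lon, lat0, lon0, d, n00, n10, n01, n11):
    """Interpolate the geoid height at (lat, lon) inside a grid cell
    whose lower-left node is (lat0, lon0), with node spacing d (degrees).
    n00..n11 are the geoid heights (m) at the four corner nodes."""
    t = (lat - lat0) / d          # fractional position in latitude
    u = (lon - lon0) / d          # fractional position in longitude
    return (n00 * (1 - t) * (1 - u) + n10 * t * (1 - u)
            + n01 * (1 - t) * u + n11 * t * u)

# Hypothetical cell values for a U.S. East Coast location (the geoid
# height is negative throughout the conterminous U.S.).
N = bilinear_geoid_height(39.43, -77.41, 39.40, -77.45, 0.05,
                          -33.10, -33.05, -33.20, -33.15)
h_ellipsoidal = -32.0              # GPS height above the ellipsoid (m)
H_orthometric = h_ellipsoidal - N  # height above the geoid (m)
print(round(N, 3), round(H_orthometric, 3))
```

The same H = h − N bookkeeping is what the conversion packages perform; assigning the wrong geoid grid simply substitutes the wrong N at every point.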

Judgment Calls: As for the question of whether “every dataset is a candidate to be re-projected,” the answer is simply NO. To transform positional coordinates between ITRF96 and NAD83(CORS96), U.S. and Canadian officials jointly adopted a Helmert transformation. The Helmert transformation, also called the “seven-parameter transformation,” is a mathematical transformation in three-dimensional space used to define the spatial relationship between two different geodetic datums. The IERS also utilizes Helmert transformations to convert between ITRF96 and the other ITRS realizations. The NGS has included all of these transformations in a software package called Horizontal Time-Dependent Positioning (HTDP), which users can download from the NGS web site.

While Helmert transformations are appropriate for transforming positions between any two ITRS realizations, or between any ITRS realization and NAD83(CORS96), more complicated transformations are required for conversions involving NAD27, NAD83/86, and NAD83(HARN), as the inherited regional distortion cannot reliably be modeled by a simple Helmert transformation. Even with the best Helmert transformation employed in converting positions from NAD27 to NAD83(CORS96), the converted positions may still be in error by as much as 10 meters. In a similar manner, NAD83(86) positions will contain distortion at the 1-meter level, while NAD83(HARN) positions will contain distortion at the 0.1-meter level.
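
For readers unfamiliar with the seven-parameter model, the following is a minimal sketch of a Helmert transformation using the small-angle approximation. The parameter values are made up for illustration (they are NOT the official ITRF96-to-NAD83 values), and the sign convention of the rotation terms differs between agencies, so always check the published convention before using real parameters.

```python
import math

def helmert(x, y, z, tx, ty, tz, rx, ry, rz, s_ppm):
    """Seven-parameter (Helmert) transformation of a geocentric
    Cartesian position (meters), small-angle approximation.
    tx, ty, tz: translations (m); rx, ry, rz: rotations (arc-seconds);
    s_ppm: scale change in parts per million."""
    arcsec = math.pi / (180.0 * 3600.0)   # arc-seconds to radians
    rx, ry, rz = rx * arcsec, ry * arcsec, rz * arcsec
    s = 1.0 + s_ppm * 1e-6
    xn = tx + s * (x + rz * y - ry * z)
    yn = ty + s * (-rz * x + y + rx * z)
    zn = tz + s * (ry * x - rx * y + z)
    return xn, yn, zn

# Hypothetical parameters applied to a hypothetical position:
p = helmert(1112189.0, -4842955.0, 3985352.0,
            0.9910, -1.9072, -0.5129, 0.0257, 0.0094, 0.0117, 0.0)
```

A single global set of seven parameters is exactly what cannot absorb the regional distortion of NAD27 or NAD83(86), which is why grid-based tools such as NADCON are needed for those frames.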

To summarize the conversion possibilities and tools: HTDP may be used for converting between members of set I of reference frames [NAD83(CORS96), ITRF88, ITRF89, …, and ITRF97], while NADCON can be used for conversion between members of set II of reference frames [NAD27, NAD83(86), and NAD83(HARN)]. No reliable transformation tool is available to convert between members of set I and set II; in addition, no conversion is available for transforming positions in NAD83(CORS93) and/or NAD83(CORS94) to any other reference frame. As for WGS84 conversions, it is generally assumed that WGS84(original) is identical to NAD83(86), that WGS84(G730) is identical to ITRF92, and that WGS84(G873) is identical to ITRF96. Other transformations between different realizations of WGS84 and ITRF are also possible.

Based on the above discussion, data conversion between certain NAD83 and WGS84 realizations is not always possible or reliable. As I mentioned earlier, existing data in NAD83 may not be accurately converted to certain WGS84 realizations, as the NGS did not publish all reference points in WGS84, and access to most WGS84 reference points is limited to military personnel. Unless a new survey is conducted in WGS84, it is always problematic to convert older NAD83-based data to and from the newer WGS84 realizations. Conversion packages that make such tasks possible assume the term “WGS84” to mean the first realization of WGS84, which was intended to be equal to NAD83/86.

Free Conversion Tools:
GEOTRANS: The U.S. Army Corps of Engineers provides a coordinate transformation package called GEOTRANS free to any U.S. citizen. In a single step, users can utilize GEOTRANS to convert between any of the following coordinate systems, and between any of over 100 datums: geodetic (latitude, longitude), geocentric 3D Cartesian, Mercator projection, Transverse Mercator projection, polar stereographic projection, Lambert conformal conic projection, UTM, UPS, and MGRS. GEOTRANS is also distributed with a user manual and a Dynamic Link Library (DLL) that users can incorporate into their own software.

CorpsCon: Another good free package, CorpsCon, is distributed by the U.S. Army Topographic Engineering Center (TEC) and is intended solely for coordinate conversion within the territory of the United States of America.

Effect of Datum Conversion on Contours: When existing sets of contours are converted from one vertical datum to another, the resulting contours do not comply with the rules governing contour modeling. Contours are usually collected or modeled at exact multiples of the contour interval (e.g., for 5-ft contours: 300, 305, 310, etc.). Applying a datum shift to these contours adds or subtracts a sub-foot value depending on the datum difference; the contours therefore no longer represent exact multiples of the contour interval (in the previous 5-ft contour example, the new contours may carry the values 300.35, 305.35, 310.35, etc., assuming a vertical datum shift of about 0.35 ft). Consequently, after conversion, a new surface should be modeled and a new set of contours that are exact multiples of the contour interval should be generated.
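
The contour problem above is easy to demonstrate numerically. The sketch below uses the same made-up numbers as the text: a 0.35-ft datum shift applied to 5-ft contours leaves elevations that are no longer multiples of the contour interval.

```python
# Apply a vertical datum shift to a set of 5-ft contours and check
# whether the shifted elevations are still multiples of the interval.

interval = 5.0                      # contour interval (ft)
shift = 0.35                        # assumed datum shift (ft)
contours = [300.0, 305.0, 310.0]    # original contour elevations
shifted = [c + shift for c in contours]

def is_multiple(elev, interval, tol=1e-6):
    """True if elev is an exact multiple of the contour interval."""
    return abs(elev / interval - round(elev / interval)) < tol

print(shifted)
print([is_multiple(c, interval) for c in shifted])   # all False
```

Since every shifted value fails the multiple-of-interval test, the only defensible remedy is the one stated above: remodel the surface and regenerate the contours.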

Similar measures should be taken for spot elevations, as they represent the highest or lowest elevation of a region between two contours without exceeding the contour interval. Newly generated contours are no longer in the same locations as the previous set, so the existing spot elevations may no longer satisfy the condition for spot elevations, and new spot elevations may need to be compiled. A vertical shift based on a single value is not recommended for large projects, as the geoid height may change from one end of the project to the other. The published gridded geoid-height data should be consulted when converting the vertical datum for large projects that span a county or a state. Small projects, however, may have an essentially uniform offset, so applying a single shift value derived from the suitable geoid model tables for the project area may be permissible.

Conversion Errors and Accuracy Requirements: As a final note, the conversion errors discussed above may not pose a problem for the final mapping product if the accuracy requirement is lenient and the discrepancy between the correct and assumed coordinate values falls within the accuracy budget. To clarify this point, the difference between NAD83(86) and NAD83(HARN) in parts of Indiana is about 0.23 meter. If you provide a mapping product such as an orthophoto with 0.60-meter resolution or GSD (scale of 1:4,800), whose accuracy is specified according to the ASPRS accuracy standard as an RMSE of 1.2 meters, the 0.23-meter error introduced into the orthophoto by the wrong coordinate conversion may go undetected. By contrast, for an orthophoto with 0.15-meter resolution (scale of 1:1,200) and an accuracy requirement of 0.30 meter, that same error consumes most of the accuracy budget for the product. In any case, errors should be detected and removed from the product no matter how large or small they are.
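
The accuracy-budget argument can be made concrete with a short sketch. The systematic bias is the 0.23 m Indiana example from the text; the assumed random production RMSEs (1.0 m and 0.25 m) are illustrative values I chose for the two product classes, and the root-sum-square combination is the standard way a bias enters the RMSE about the true value.

```python
import math

def total_error(random_rmse, systematic_bias):
    """Combine a random RMSE with a systematic bias (meters):
    RMSE about the truth = sqrt(sigma^2 + bias^2)."""
    return math.sqrt(random_rmse ** 2 + systematic_bias ** 2)

bias = 0.23   # NAD83(86) vs. NAD83(HARN) difference in parts of Indiana

# 0.60 m GSD orthophoto against a 1.2 m RMSE spec: the bias hides
# inside the budget.
print(total_error(1.0, bias) <= 1.2)

# 0.15 m GSD orthophoto against a 0.30 m RMSE spec: the same bias
# pushes the product out of spec.
print(total_error(0.25, bias) <= 0.30)
```

The same check, run with your own production RMSE and the documented datum difference for your project area, tells you whether a conversion shortcut is survivable or fatal.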

Best Practice: In conclusion, I would like to provide the following advice when it comes to datum and coordinate conversion:

1. When it comes to coordinate conversion, DO NOT assign the task to unqualified individuals. The term “unqualified” is subjective, and it varies from one organization to another. Large organizations that employ staff surveyors and highly educated specialists in the field may not trust the conversions made by staff from smaller organizations that cannot afford to hire specialists. No matter the size of your organization, practice caution when assigning coordinate and datum conversion tasks. Play it safe.

2. Seek reliable, professional services when it comes to surveying the ground control points for the project. Reliable survey work should be performed or supervised, and signed off, by a professionally licensed surveyor. Peer review of the completed work within the surveying company represents professional and healthy practice that may save time and money down the road.

3. GIS data users need to remember that verifying product accuracy throughout the entire project area is a daunting task, if it is possible at all. Therefore, it is necessary to perform field verification on the smallest statistically valid sample of the data, and to rely on the quality of the provided services and the integrity of the firm or individuals providing them for all areas falling outside the verified sample. That is why selecting professional and reputable services is crucial to the success of your project.

4. When contracting surveyors to survey ground control points for the project, ask them to provide all surveyed coordinates in all of the datums and projections that you may use for the data in the future. Surveyors are, by training, the most qualified to understand and manipulate datums and projections, and it does not cost them much to do the conversion for you. It is recommended that your request for proposal ask the surveying agency to provide the data in the following systems:

Horizontal Datum: NAD27 (if necessary), WGS84, NAD83/86 (if necessary), NAD83/latest HARN, NAD83/CORS, NAD83/2007.

Coordinate System (projected): geographic (latitude, longitude), UTM (correct zone), State Plane Coordinate System.

Vertical Datum: WGS84 ellipsoidal heights; NGVD29 (if necessary), NAVD88 (latest geoid model).

5. When you are asked to provide data for a client, always make sure that you have the right information concerning the datum and projection. It is common to find people asking for NAD83 without reference to the version of NAD83. If this is the case, ask them to specify whether it is NAD83/86, NAD83/HARN (certain year), NAD83/CORS, or NAD83/2007.

6. If you are handed control data from a client or historical data to support their project, verify the exact datum and projection for that data.

7. If a military client asks you to deliver the data in WGS84, verify whether they mean the original WGS84, to which NAD83 was nominally set equal in the mid-1980s. Most of their maps are labeled WGS84, referring to the original WGS84. Otherwise, provide them with NAD83/CORS or ITRF at an epoch suitable for the realization they requested, unless they give you access to the WGS84 monument located in or near their facility. The most accurate approach for obtaining WGS84 coordinates is to acquire satellite tracking data at the site of interest; however, it is unrealistic to presume that non-military users have access to this technique.

8. Pay attention to details. People are frequently confused about the vertical datum of their data. Arm yourself with simple, yet valuable, knowledge about vertical datums. If the project is located along the U.S. coast, the ellipsoidal height should always be negative, as the orthometric height (i.e., NAVD88) is close to mean sea level, or zero, and the geoid height is negative. Therefore, if you are handed data with an incorrectly labeled vertical datum, look at the sign of the elevations given for the project. A negative sign for elevation data on U.S. coastal projects is an indication that the data is in ellipsoidal heights and not orthometric heights (such as NAVD88).
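
The sign check in this advice is simple enough to automate. The sketch below is a heuristic of my own construction, not an NGS tool; the sample elevations and the 90% threshold are arbitrary illustrative choices.

```python
# Heuristic from the advice above: near U.S. coasts the orthometric
# (NAVD88) height is near zero and the geoid height is negative, so
# ellipsoidal heights should be negative. A batch of mostly negative
# elevations therefore hints that the data is ellipsoidal.

def looks_ellipsoidal(elevations, threshold=0.9):
    """Return True if at least `threshold` of the coastal elevation
    values are negative. A hint only, not a proof."""
    negative = sum(1 for e in elevations if e < 0)
    return negative / len(elevations) >= threshold

coastal_sample = [-28.7, -29.1, -27.9, -28.3]   # meters, made up
print(looks_ellipsoidal(coastal_sample))
```

A flag from a check like this should trigger a conversation with the data provider, not an automatic relabeling of the dataset.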

9. Equip your organization with the best coordinate conversion tools available on the market. Look for a package whose library contains detailed datum and projection support. Here, the concept of “the more, the better” applies.

10. Cross-check conversions from at least two different sources. It is good practice to keep at least two credible conversion packages available to compare and verify conversion results.

11. If you are not sure about your conversion, or the origin of the data that you were handed, always look for supplementary historical or existing ground control data to verify your position. Take advantage of resources available on the Internet, especially the NGS site. Many local and state governments also publish GIS data for public use on their web sites. Even “Google Earth” may come in handy for an occasional sanity check.

May 2009 (Download a PDF 519Kb)

Question: What is the correlation between the pixel size of the current mapping cameras in use and the mapping accuracy achievable for a given pixel size? E.g., for data collected at a 30 cm GSD, what would be the best horizontal mapping accuracy achievable?

Dr. Abdullah: Unlike film-based imagery, digital imagery produced by the new aerial sensors is not referred to by its scale, as the scale of digital imagery is difficult to characterize and is not standardized. Digital sensors with different lenses and different sizes of the Charge Coupled Device (CCD) can produce imagery from different altitudes with different image scales, but with the same ground pixel resolution. In addition, the small size of the CCD array of a digital sensor results in a very small image scale compared to that of film-based cameras. This latter fact has made it difficult to relate image scale to map scale through a reasonable enlargement ratio, as is done with film-based photography. As an example, the physical dimension of an individual CCD element on the ADS40 push broom sensor is 6.5 um; therefore, for imagery collected with a Ground Sampling Distance (GSD) of 0.30 m, the image scale is 6.5 um divided by 0.30 m, or 1:46,154. Such a small scale cannot be compared to the scale of the equivalent film imagery, 1:14,400, which is suitable for producing maps at a scale of 1:2,400 or 1”=200’. Here, the conventional wisdom in relating negative scale to map scale, practiced for the last few decades, is lost, perhaps forever. Traditionally in aerial mapping, the film is enlarged 6 times to produce the suitable map or orthophoto products. This enlargement ratio is far too small for the imagery of the new digital sensors if we equate the CCD array to the film of a film-based aerial camera; imagery from the ADS40 sensor as it is used today has an enlargement ratio of 19! Traditionally, aerial film is scanned at 21 um resolution, and Table 1 lists the different film scales, the resulting GSD, and the supported map scale based on an enlargement ratio of 6.
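
The scale arithmetic in this answer can be sketched in a few lines of Python, using the ADS40 numbers from the text:

```python
# Image scale from CCD element size and GSD, and the implied
# enlargement ratio to a target map scale.

def image_scale_denominator(pixel_size_um, gsd_m):
    """Scale denominator = GSD / physical pixel size (same units)."""
    return gsd_m / (pixel_size_um * 1e-6)

scale = image_scale_denominator(6.5, 0.30)   # ADS40 example
print(round(scale))                          # about 46,154

def enlargement_ratio(image_scale_denom, map_scale_denom):
    """How many times the image must be enlarged to reach map scale."""
    return image_scale_denom / map_scale_denom

print(round(enlargement_ratio(scale, 2400)))  # about 19
```

The second result is the ratio of 19 cited in the text, which is why the traditional 6x film enlargement rule cannot carry over to digital sensors.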

Table 1 (film scale, scanning resolution of 21 um, resulting GSD in meters, supported map scale, and supported contour interval in meters). Please see the PDF for the full table.

Similar measures have been adopted for the new digital cameras, as data providers and clients alike are familiar and comfortable with the values given in Table 1. Determining the vertical accuracy from digital sensors is no different from the horizontal accuracy, as we have carried the same measures used for film cameras over to the new sensors. As given in Table 2, digital imagery collected at a nominal GSD of 0.15 m is considered to support a 0.60 m (2-foot) contour interval, or an RMSE of 0.20 m according to the ASPRS map accuracy standard. This has been the practice despite the fact that the two-foot contour support was determined in the past based on the c-factor limitation of the stereo plotters used at that time. Table 2 provides the map products supported by digital sensor data collected at different ground resolutions as practiced today.

Table 2 (image GSD in meters, ortho GSD in meters, supported map scale, and supported contour interval in meters). Please see the PDF for the full table.

Users of digital cameras are experiencing improved map quality and accuracies that exceed those given in Table 2. In other words, imagery from a good digital sensor with a GSD of 0.15 m may be suitable for map scales larger than 1:1,200, and in the future we may need a new standard for digital camera products that reflects the improved quality and stability of these sensors.

March 2009 (download a PDF 680Kb)

Question: Does lidar data support the generation of accurate one-foot contours and if it does, how feasible is it to generate photogrammetric-quality one-foot contours?

This answer contains graphics and tables. Please see the PDF

January 2009 (download a PDF 980Kb)

Question: The use of 3D laser scanners (or ground-based lidar) has gained momentum over the last few years among the engineering and surveying communities. Could you please elaborate on the state of this technology and its benefits to public- and private-service agencies?

Dr. Abdullah: Ground-based 3D laser scanners, which are considered by many experts to be the new generation of survey instruments, have recently become very popular and are increasingly used in providing as-built and modeling data for various applications, such as land surveying, highway surveys, bridge and retaining wall structural surveys, architectural surveys, plant/factory surveys, mining surveys, forensic surveys, reverse engineering, and cultural heritage and archeological studies. In contrast to traditional surveying instruments, which are limited to locating one point at a time, 3D laser scanners measure thousands of points per second, generating very detailed “point cloud” datasets. The point clouds can be processed further to generate very accurate and detailed 3D surface models for use in many commercial CAD packages to extract and model various design parameters and to generate as-built survey reports and analysis. 3D laser scanners of interest for highways and large structural operations are based on the following two different technological principles:

• Time-of-flight (TOF) technology measures the time it takes a laser pulse to hit its target and return to the instrument. Very advanced high-speed electronics measure this minute time difference to compute the laser's range, or the distance between the instrument and the target. The range data is then combined with extremely precise angular encoder measurements to provide the 3D location of the point from which the laser pulse was reflected. TOF technology is similar to the principle utilized in a surveying “total station” instrument. The difference between the two is the superior point measurement density of the 3D laser scanner, which is capable of measuring more than 50,000 distances per second, compared to the few distances per second that can be measured by a total station. TOF scanners are commonly used in applications that require significant range (typically 75 to 1,000 m), such as highway surveys and other typical state department of transportation (DOT) applications.
• Phase-based technology measures the phase difference between the reflected pulse and the transmitted amplitude-modulated continuous-wave laser beam. The distance to the target is a function of the phase difference and the wavelength of the amplitude-modulated signal. Phase-based scanners usually achieve a much higher number of point measurements (point cloud density) than is possible with TOF scanners. However, they are limited in range (typically 25 to 100 m), which makes them best suited for factory interiors and enclosed facilities.
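
The two range principles in the bullets above can be sketched with their basic formulas. This is illustrative only: the timing value and modulation wavelength are made-up numbers, and real instruments handle the integer cycle ambiguity of the phase method with dedicated multi-frequency techniques.

```python
import math

C = 299_792_458.0   # speed of light in vacuum (m/s)

def tof_range(round_trip_time_s):
    """Time-of-flight: range = c * t / 2, since the pulse travels
    out to the target and back."""
    return C * round_trip_time_s / 2.0

def phase_range(phase_diff_rad, modulation_wavelength_m, whole_cycles=0):
    """Phase-based: distance from the phase difference of the
    amplitude-modulated wave over the round trip; the integer cycle
    count must be resolved separately (passed in as `whole_cycles`)."""
    return (whole_cycles + phase_diff_rad / (2.0 * math.pi)) \
        * modulation_wavelength_m / 2.0

print(tof_range(1e-6))                 # 1 microsecond round trip
print(phase_range(math.pi, 10.0, 5))   # half a cycle plus 5 whole cycles
```

The division by two in both formulas reflects the round trip, and the phase method's cycle ambiguity is one reason its practical range is shorter than that of TOF scanners.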

In terms of range, both TOF and phase-based scanners are outperformed by total stations, which can typically handle measurements a few times greater than those of laser scanners. That said, 3D laser scanners can accurately position objects at roughly 1,000 times the speed of a total station, which not only reduces survey field time but also results in a more detailed site survey. The enormous survey speed of 3D laser scanners also reduces field crew exposure to the environmental hazards typical of traditional surveying; it likewise reduces lane closures, decreases the risk of casualties, and increases productivity. For certain applications and projects, more than one laser scanner is needed to perform the survey. In these cases, co-registering data from the different scanners plays a greater role in determining the final data accuracy. The process of “registration” refers to combining the point cloud datasets collected by different laser scanners at different locations into a unified coordinate system: the datasets are joined together in correct relative position and orientation. Once they are joined correctly, georeferencing is performed to complete data processing. Georeferencing is the process of fixing the point cloud dataset(s) to an existing control and coordinate system and datum, such as the local state plane system, UTM, or a local site-specified system.

Many commercial laser scanner manufacturers, including InteliSum, Leica Geosystems, Optech, and Trimble, have considered the tighter elevation accuracy requirements of transportation and highway projects around the world. Such requirements call for pavement elevation measurements with an accuracy of 8 mm (RMSE) or better from a range of 50 to 80 m. Most scanners achieved an accuracy of 6 mm or better when tested independently by users.

Readers who are interested in more details should refer to an excellent report entitled “Creating Standards and Specifications for the Use of Laser Scanning in Caltrans Projects,” recently published by the Advanced Highway Maintenance and Construction Technology (AHMCT) Research Center of the University of California, Davis, in cooperation with the California Department of Transportation.

November 2008 (download a PDF 980Kb)

Q1: In aerial triangulation, once a least-squares adjustment has been run and the results have been found acceptable, with no blunders or residuals out of tolerance, there is a decision to be made: do you overwrite the given ground control values with the adjusted coordinates, or do you keep the original coordinates provided by the land surveyor?
Jon Martin, CP, Texas DOT

Dr. Abdullah: I would like to quote part of Jon Martin’s message that accompanied his question as he brings up a very interesting discussion on the topic that the reader needs to know about. In his message, Martin elaborated as follows:

“I’ve run this question by a number of colleagues. Among State DOTs, it appears that about half overwrite and half don’t. Dr. Hintz has suggested that the proper procedure is to overwrite the given ground control with the adjusted values. I tend to agree with Dr. Hintz because mathematically, it doesn’t seem to make much sense to not overwrite. Doing so means that you end up with a mix of best-fit tie points with non-adjusted survey control. In the big scheme of things, it shouldn’t make a lot of difference. However, some software, like the software that displays imagery in stereo, runs a second least-squares adjustment on the data set coming out of the analytical triangulation process to form the stereo model. It seems that this second adjustment would be more accurate if all of the points used were part of a best-fit solution rather than a mix. My Land Surveyor colleagues feel that the ground control has to be held as a fixed value. I don’t agree with this opinion. Unlike the survey world, we aren’t going to “re-occupy” an aerial photo derived map. Our map product is a final product and no subsequent mapping (or surveying) is going to be done using our map as a coordinate basis. I believe that the most accurate mapping is done using least-squares, best-fit solution. Could you please weigh in on this issue?”

The question and the comments given above represent very common arguments within the aerial triangulation community. I decided to survey my own colleagues in the field on this same question. Here are the different responses I received:

Colleague #1, Land Surveyor: “Absolutely, overwrite and hold fixed unless there is evidence of blunder. The way that I believe it works is: The bundle adjustment is run minimally constrained with only one control point fixed and all others free or to very low weights. As the control is evaluated, those with low (acceptable) residuals should then be held fixed, infinite weight, per the surveyor, not allowing for any adjustment to those data points. This has been how I have run least squares adjustments of geodetic control networks. Blunderous points need to be identified and removed by the survey adjustment process, and good control from a professional survey firm need to have been redundantly measured, adjusted and certified as to their accuracy at a given precision. The control survey should be magnitudes stronger than the airtrig, so the air-trig cannot supersede the values on any control point. That’s not to say the erroneous control doesn’t show up, and if held fixed would cause problems. So, the air-trig is not intended to “prove” the surveyor correct, but errors are errors. And if the control doesn’t fit, that may indicate some other problem with the bundle as well.”

Colleague #2, Land Surveyor: “This is an age old question without a known solution. In my view, the points must be adjusted with the rest in order to preserve the integrity of the adjustment. The surveyor’s control is not gospel; they are prone to many types of errors, but would not be adjusted if held in place [sic]. Thus, my solution is to preserve a copy of the original surveyor’s points to document what was provided and used, and then adjust the points with the solution, provided that the solution does in fact meet the tolerance requirements. The bottom line is that the probability that the surveyor would measure his points one day and then measure the same points the next day with two different answers is great. Therefore, beyond good field techniques, redundancy in a least-squares adjustment is the key to a good solution. One man’s view.”

He then added the next day, “I awoke thinking on this issue this morning and I have one additional point to add. When we are speaking of surveyor’s points, what order of control are we speaking of? First order or CORS points or ground control as provided by the surveyor? I believe there may be a difference in how the two should be treated.”

Colleague #3, Aerial Triangulation Specialist: “I say that you would overwrite with the adjusted control values for the main reason that individual measurements most likely would have inherent error even with their residuals being within tolerance. Using the adjusted coordinates would account for your network’s normal distribution of error. Just a thought. I’m not a surveyor or a CP.”

Colleague #4, Aerial Triangulation Specialist: “I would say not to overwrite, because the adjusted values mean the points were adjusted according to the given actual control values, and that shows you how the actual control network should be. As per Colleague #3, it is also correct that the adjusted coordinates would account for your network’s normal distribution of error; since the residuals are within tolerance, it will not make much difference if you overwrite.”

I hope you agree with me that this issue has been a point of contention among professionals in the field of mapping and surveying since the beginning of analytical aerial triangulation. My view goes along with many of the opinions given above on the theoretical aspects of network controls and constraints. However, experience has taught me that what may sound theoretically correct may not necessarily be the only acceptable solution. We currently collect an average of 100 to 200 auto-correlated pass/tie points per frame, most of which are of excellent quality. In addition, most if not all triangulation today is performed with the help of the airborne GPS-measured camera position. The introduction of airborne GPS has changed the requirements for ground control, and only a sparse control network is needed when an aerial triangulation project is planned. The combination of the added constraints from the GPS-controlled principal point, the minimal ground control (perhaps one control point for every 20 photos), and the high density of pass/tie points has definitely weakened the effect of ground control points on the final computation or re-computation of the exterior orientation parameters. In my opinion, the question of whether to overwrite the original control points used in the bundle block solution can be answered in two ways, as follows:

1. If the aerial triangulation software restricts you to the exterior orientation parameters derived from the airborne GPS-controlled bundle block adjustment, then you have no choice, and the adjusted coordinates of the ground control will be used in the solution. This is the case when you adjust the block using airborne GPS, the ground control points, and possibly the IMU-derived orientations, and then use the exterior orientation derived from this solution for stereo compilation or ortho rectification.

2. If the software routinely re-computes the exterior orientation parameters of each frame after the fi nal bundle block adjustment has been performed and accepted and all the tie/pass points’ coordinates are replaced with the fi nal adjusted ground values, then the issue of overwriting will depend on the number of the pass/tie points used in each frame. Examples of different methods of re-computing the exterior orientation parameters vary with the software and user preferences. For example, Albany performs a space resection solution, while ISAT of Intergraph performs a so-called bulk orientation. Some users prefer to perform additional conventional adjustment using the adjusted pass/tie points following the original airborne GPS adjustment. With the introduction of softcopy aerial triangulation, the subject using the original surveyed coordinates or the adjusted coordinates for the ground control points has become irrelevant to a certain degree. To simplify the matter further, previously when we used only three principal pass points per photo, the entire frame during orientation (space resection) was controlled by an average of nine pass, tie, and perhaps a few control points. In this case the control had a higher weight in the least squares adjustment and using adjusted coordinates versus original surveyed coordinates for ground control points could have a drastic impact on the photo orientation during mapping. This is not the case with the auto-correlated collection of tie/pass points. Most softcopy aerial triangulation packages perform either space resection or bulk orientation after all the pass/tie points are adjusted and densifi ed into control points. Therefore, having one surveyed control point, if any, between hundreds of pass/tie-turned into control points has minimized the effect of the original ground control on the fi nal exterior orientation computation for that individual frame. 
The one or two surveyed control points present among hundreds of photo control points will carry minimal weight and will be outweighed by the dense network of densified pass/tie points in the final exterior orientation computation.

Based on the above, my recommendation is that if you are performing aerial triangulation today with hundreds of adjusted pass/tie points, and you re-compute the exterior orientation parameters after the final bundle block adjustment has been finalized and accepted, it does not really matter whether you overwrite or not. If the aerial triangulation was performed 20 years ago, however, it is a different story.

Finally, as for the question of whether one should routinely overwrite the given ground control values with the adjusted coordinates or keep the original surveyed coordinates as provided by the land surveyor, I believe that the adjusted coordinates should be used for all subsequent computations or orientations. This is because the mathematical and statistical models have found the best fit for that ground control within the different elements of the block. Introducing a different set of coordinates (in this case the one provided by the land surveyor) will offset that balance or fit, assuming that all of the measurements and values used in the aerial triangulation were of high quality. To illustrate, assume there is one control point that the mathematical model found to be erroneous by about 40 cm. The new adjusted value, which is off by 40 cm from the surveyed value, fits the entire network of the block well. Introducing the original value (erroneous according to the math model) into any subsequent computations of the network, or part of it, will cause a misfit between that control point and the adjacent points.
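The weight dilution argument above can be illustrated with a toy one-parameter weighted least-squares estimate. This is a deliberately simplified 1-D sketch, not a real space resection; the numbers, the 10:1 weighting, and the weighted_estimate function are all illustrative:

```python
# Toy 1-D illustration of weight dilution: one surveyed control
# observation versus a dense set of densified pass/tie "control" points.

def weighted_estimate(observations):
    """Weighted least-squares estimate of a single offset parameter.

    observations: list of (value, weight) pairs.
    """
    numerator = sum(weight * value for value, weight in observations)
    denominator = sum(weight for _, weight in observations)
    return numerator / denominator

# One surveyed control point says the frame offset is 0.00 m; 200
# densified pass/tie points (the adjusted network) say it is 0.40 m.
# Even with 10x the weight of a single pass/tie point, the surveyed
# point barely moves the solution.
obs = [(0.00, 10.0)] + [(0.40, 1.0)] * 200
print(round(weighted_estimate(obs), 3))  # 0.381, dominated by the dense network
```

The estimate lands at about 0.38 m, within 2 cm of the dense network's consensus, which is the point made above: a lone surveyed coordinate among hundreds of densified points has little leverage on the final orientation.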

September 2008 (download a PDF 594Kb)

Q: When shopping for lidar data, how do I know what point density I need for my project and whether I need breaklines to support the terrain modeling?

Dr. Abdullah: In my last article, I answered this question in terms of lidar data acquisition requirements for different terrain modeling applications.

In this issue, I will address the question as it pertains to requirements for 3D modeling applications.

3D Urban Modeling Applications: The high density of lidar point clouds has met wide acceptance among user communities that need high definition elevation data for applications other than terrain modeling. These applications include, but are not limited to, line-of-sight analysis, 3D city modeling, 3D urban fly-throughs and simulation, and security and emergency planning. Homeland security agencies, for instance, have shown a strong interest in the use of dense lidar datasets for intercity combat planning and high profile security planning. In addition, the emerging capabilities of oblique imaging and modeling have added a greater emphasis on high quality and high definition elevation data; requirements that would be cost prohibitive without lidar technology. In most urban modeling applications, users are more concerned with the definition and detail of the lidar dataset than with centimeter-level accuracy. Most 3D city modeling can be achieved with a lidar point density of 5 to 10 points per square meter. We are, however, witnessing an emerging market for dense to ultra-dense lidar data, and many lidar providers are equipping their operations with sensors designed to meet such demand. Figures 1 and 2 illustrate the quality of the scene as represented by lidar intensity at a density of about 20 points per square meter. It is amazing how fine the details are that such data provides.

Bio-mass and Forest Modeling: Lidar point clouds have also proven very effective in studying and modeling the forest floor and canopy. Lidar-derived spatial data ultimately can be used to achieve the following resource management goals:

Furthermore, the “Mapping Matters” article published in the November 2007 issue of PE&RS provides more details on this very same subject. In that article, I suggested a lidar point density of 0.1 to 10 points per square meter, depending on the nature of the study.

August 2008 (download a PDF 522Kb)

Q: When shopping for lidar data, how do I know what point density I need for my project and whether I need breaklines to support the terrain modeling?

Dr. Abdullah: The subject of point density in lidar datasets and the resulting accuracy of derived products is of great importance, both to users and providers of lidar data. Unfortunately, there are no set rules to govern this topic, leaving many users to establish their own guidelines when requesting lidar data acquisitions. This fact becomes very obvious when studying the point density requirements specified in different requests for proposals (RFPs). At a loss in this ever confusing topic, many users request lidar data with sub-meter post spacing to achieve an accuracy that is easily obtainable with less dense lidar datasets. Unless the task calls for 3D modeling of above-ground manmade or natural features, asking for highly dense lidar data may harm the budget with very little accuracy benefit, especially when the collection of breaklines is also requested.

During the Second National Lidar Meeting held recently at the USGS headquarters in Reston, Virginia, speakers presented a variety of views and levels of understanding as to what constitutes a reasonable and practical lidar dataset. The most misleading approach is the one calling for a lidar database to fit the broad needs of all users, and here I mean all users, including those whose applications require 10 points or more per square meter! An advocacy call like this not only wastes valuable taxpayer money, but also makes for an impossible task as there is very little capital available for such an expensive undertaking…unless you live in the UAE, that is.

With the above phrases, I have made my political statement clear, so now let us get to the technical heart of the matter. Lidar data specifications should be tagged with user-specific needs and specifications. In order to address the issues adequately, my response will span the next few issues of the column due to the limited space allocated for each article.

The following sections represent different user communities' requirements and the recommended data specifications:

1. Terrain Modeling Applications: Terrain modeling is a requirement of nearly all lidar projects, spanning a wide range of uses and specifications. The most common terrain modeling applications requested by lidar users follow.

a. Contour generation: The dwindling use of paper (hardcopy) maps, combined with advancements in 3D terrain modeling software, has driven down the need for traditional contour generation. The demand for contours and contour specifications in RFPs involving lidar data collection continues, however, despite the availability of newer terrain modeling and viewing methods, such as 3D rendering and shaded relief maps. To create lidar-based contours that meet cartographic and geometric quality requirements, lidar data with a modest post spacing of around 2 to 4 meters can be augmented with breaklines derived from image-based photogrammetry. If imagery is not available for breakline production, then a "lidargrammetry" approach is possible. In this method, very dense lidar datasets with post spacing of around 1 meter are used to create detailed stereomates by draping the lidar intensity image over the lidar DEM; these stereomates are then used to generate breaklines on any stereo-softcopy system. Once the breaklines are collected, whether through photogrammetry or lidargrammetry, the lidar points can be thinned to a great degree, depending on the complexity of the terrain. The thinned dataset is then merged with the breaklines to create the digital terrain model (DTM) required for modeling the contours. In addition, all lidar points within a buffered distance around the breaklines should be removed to achieve acceptable contours without sacrificing accuracy. This process makes sense, as that is how we have always modeled contours from a DTM. The issue of utilizing breaklines in modeling contours from lidar data often gets confused, however, as service providers attempt to mix very dense and accurate lidar data with manually collected and possibly less accurate breaklines. Without buffering or thinning the lidar points close to the breaklines, the contours will appear problematic wherever a lidar point falls right next to a breakline.
The last statement is true even for a photogrammetrically collected and modeled DTM. In constructing lidar-derived DTMs, we should apply all the best practices developed for modeling photogrammetric DTMs over the past decades. A good quality DTM is achieved with accurately modeled breaklines and a minimum of mass points outside the breaklines, added only where necessary. Lidar indiscriminately collects dense mass points throughout the project area, including on and around the later-collected breaklines. Unless the lidar data is thinned and the breaklines are buffered and cleared of the lidar points around them, it will be very difficult, if not impossible, to achieve cartographically acceptable contours.
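The buffering step described above, removing lidar points that fall within a set distance of the breaklines, can be sketched as a simple point-to-segment distance filter. This is a toy 2-D sketch under stated assumptions (a breakline as a polyline of 2-D vertices, a 1 m buffer); the function names are mine, and a production workflow would use a spatial index rather than a brute-force scan:

```python
import math

def point_segment_distance(p, a, b):
    """Shortest 2-D distance from point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                      # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def buffer_points(points, breakline, buffer_m):
    """Drop lidar points within buffer_m of any breakline segment."""
    kept = []
    for p in points:
        near = any(
            point_segment_distance(p, breakline[i], breakline[i + 1]) < buffer_m
            for i in range(len(breakline) - 1)
        )
        if not near:
            kept.append(p)
    return kept

# One breakline along y = 0; keep only points farther than 1 m from it.
breakline = [(0.0, 0.0), (10.0, 0.0)]
pts = [(5.0, 0.5), (5.0, 2.0), (3.0, -3.0)]
print(buffer_points(pts, breakline, 1.0))  # [(5.0, 2.0), (3.0, -3.0)]
```

The point at (5.0, 0.5) is dropped because it sits inside the buffer; the thinned set is then safe to merge with the breaklines for contour modeling.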

b. 3D Terrain Modeling: Most modern lidar data users and providers are equipped with 3D modeling software that allows them to model lidar data for different applications, such as flood and environmental hazard analysis, watershed management, etc. Depending on the required vertical accuracy of the model, many applications can utilize lidar data without the need for breaklines. However, for hydro-enforced terrain modeling, where the user expects a uniform downhill flow of water, breaklines or manual 3D modeling around the water features is required to assure that effect. For most applications, lidar data with a post spacing of 1 to 2 meters is adequate. Hydro enforcement of lidar-derived terrain models is still cumbersome and costly, and logical automation is strongly needed in this field.
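The post spacing figures quoted throughout this discussion map to point densities by a simple relation, density = 1 / spacing squared, assuming a regular grid of returns. A quick sketch (the function name is mine):

```python
def density_from_spacing(spacing_m):
    """Nominal point density (points per square meter) for a given
    post spacing in meters, assuming a regular grid of lidar returns."""
    return 1.0 / spacing_m ** 2

for spacing in (1.0, 2.0, 3.0):
    print(f"{spacing:.0f} m post spacing -> {density_from_spacing(spacing):.2f} ppsm")
```

This reproduces the figures used in this column: 1 m spacing gives 1 ppsm, 2 m gives 0.25 ppsm, and 3 m gives about 0.11 ppsm.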

(To be continued in the next issue of PE&RS)

July 2008 (download a PDF 160Kb)

Q: What is meant by color or colorized lidar and what is it used for?

Answer: The literal meaning of the terms “color lidar” or “colorized lidar” could imply two different things:

Colorized lidar
The latest topographical lidar data processing techniques utilize imagery to aid in the interpretation and filtering of lidar data. Many vendors are now acquiring digital imagery concurrent with the lidar data mission. Having an integrated lidar/digital camera solution provides many advantages for data providers and users alike. On the data providers' level, the digital imagery, whether natural-color (RGB) or color-infrared (CIR), can be used for:

Color lidar
The term “green laser” is widely used to describe the bathymetric lidar used for three-dimensional, high precision surveys of seabeds and objects in the water column. Using light energy to penetrate seawater in much the same way as a multi-beam echo sounder uses sound, bathymetric lidar systems usually comprise a twin laser generator (red-infrared and blue-green portions of the electromagnetic spectrum) providing an effective depth sounding frequency. The basic laser sounding principle is similar to acoustic methods. A pulse of laser light is transmitted from the system toward the water surface in a predefined pattern. The red-infrared laser light is reflected at the water surface, whereas the blue-green laser light penetrates into the water column and reflects from objects or particles along the laser path, or from the seabed if it makes it all the way there. The water depth is equal to half the time elapsed between the two echo pulses multiplied by the speed of light in water (the factor of one-half accounts for the round trip down and back). Typical water depth penetration is in the range of 20-40 m but, in good conditions, depths as great as 70 m are possible.

June 2008 (download a PDF 604Kb)

Q: I am looking for a brief but encompassing overview of the map accuracy standard(s) used in the United States of America to evaluate geospatial data accuracies and whether it applies internationally.

This answer is available in PDF form only. Click the link above. Thank you.

April 2008 (download a PDF 66Kb)

Q: How effective are lidar datasets in mapping land features such as roads and buildings?

Answer: Constructing two-dimensional and three-dimensional building models and other land features requires accurate delineation of sharp building edges, and this traditionally has been accomplished using photogrammetric stereo compilation. All past attempts to automate the extraction of ground features from dense lidar datasets with a post spacing of one to three meters for the purpose of planimetric mapping have failed for one reason: the rounded and jagged edges of the delineated buildings and other manmade features. Despite some software efforts to employ smart algorithms that correct the geometrical shape of objects, this type of modeling remains less appealing to service providers and customers alike, as it does not meet horizontal map accuracy requirements for large-scale mapping. The ASPRS standard requires buildings to be placed within one foot of their true ground position when compiled from a map with a scale of 1”=100’. Recent advancements in lidar systems enable the collection of ultra-dense lidar datasets with point densities of five to eight points per square meter (ppsm), which makes the data more suitable for use in the aforementioned modeling algorithms. The downside of such demanding software requirements is the high cost associated with the aerial acquisition, due to the additional flight lines required to collect the ultra-dense lidar data. Traditionally, a lidar dataset used for terrain modeling is collected with a nominal post spacing between one meter and three meters, or a point density between one ppsm and 0.11 ppsm. The ratio between the data densities of the normally collected dataset and the ultra-dense dataset ranges from 20:1 to 32:1, assuming the normal density dataset is collected with two meter post spacing, or 0.25 ppsm. This does not necessarily translate to the same ratio in cost increase, but it could come very close.
In many cases, the high cost of acquisition coupled with the massive amount of resulting lidar data prohibits the use of ultra-dense lidar data for accurate building modeling and may encourage providers and customers to consider other means, such as traditional photogrammetric modeling, for this purpose. Finally, while ultra-dense lidar datasets may not currently be cost-effective for large-scale modeling, the technology is impressive in its ability to delineate detail. A recent acquisition of an ultra-dense lidar dataset of about six ppsm over the city of Baltimore, Maryland, reveals great detail of the baseball game underway in the Camden Yards baseball stadium, as shown in Figures 1 through 3. Baseball fans can easily observe which base was occupied at that moment!
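The 20:1 to 32:1 ratios quoted in this answer follow directly from dividing the point densities; a quick check (the function name is mine):

```python
def density_ratio(dense_ppsm, normal_ppsm):
    """Ratio of point counts between an ultra-dense and a normal collect
    over the same area, given their densities in points per square meter."""
    return dense_ppsm / normal_ppsm

normal = 0.25  # 2 m post spacing, as assumed in the text
for dense in (5.0, 8.0):
    print(f"{dense} ppsm vs {normal} ppsm -> {density_ratio(dense, normal):.0f}:1")
# 5 ppsm -> 20:1, 8 ppsm -> 32:1
```

As the answer notes, the acquisition cost does not scale with this ratio exactly, since flight lines, not individual points, drive the cost, but it can come close.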

March 2008 (download a PDF 66Kb)

Q: What does oblique imagery mean and how effective is it in GIS and mapping activities?

Answer: Rather than collecting imagery directly beneath the aircraft with the camera pointing at nadir, oblique imagery is acquired from an angled position. Oblique imagery has been used for decades by military intelligence during aerial reconnaissance missions. In recent years, however, its use has spread to the commercial market with an expanded range of applications. The modern approach combines oblique and vertical imagery to produce accurate 3D digital maps that can be interfaced with any modern GIS database or used for fly-through analyses and measurements. This solution has proven valuable to users in emergency management, law enforcement, homeland security, tax assessment, engineering, insurance, and real estate, among others.

Most existing oblique mapping systems comprise the following components:

January 2008 (download a PDF 73Kb)

Q: How effective are GIS and mapping techniques for rapid response and rescue efforts?

Answer: Aerial survey is the most effective way to monitor and survey damage from catastrophic events. Reasons for this include:

The increased ability of both public and private agencies to distribute, visualize, and manipulate large sets of raster data over the World Wide Web is another major advantage of using aerial survey data for response and recovery efforts. The Web provides an easy and effective means for sharing geospatial data at record speed. The USGS Earth Resources Observation Systems (EROS), for example, established a public-access website in support of disaster response activities following Hurricane Katrina. Within hours of Katrina making landfall on the Gulf Coast, EROS began uploading imagery to state and local governments, FEMA, and other federal agencies, providing imagery and lidar-derived contours of the devastated area, which helped prioritize the recovery efforts.

Many geospatial vendors have provided valuable services in recent years to different public and private agencies during emergency situations. Enabling more informed decision-making for timely allocation of limited resources, these services can help reduce human suffering and save lives. Amidst nearly all recent major natural or manmade disasters, such as the 9/11 tragedy at the World Trade Center, Hurricanes Katrina and Rita, and the recent California wildfires, concerned agencies managed within hours of the event to contract with willing vendors to provide “rapid response mapping” services. The main elements of an integrated and effective rapid response mapping system are:

Increased attention to homeland security and emergency preparedness, combined with the USGS/FEMA response to recent natural disasters, has contributed to the commoditization of rapid response mapping products. In addition, recent success on numerous rapid response projects spread over a wide array of events has encouraged many manufacturers to offer complete aerial systems providing automated and near real-time production of GIS data for emergency events. In order to serve the ever-evolving rapid response mapping industry, national guidelines and specifications for rapid response need to be developed through cooperative efforts among FEMA, the USGS, the Department of Homeland Security, and most definitely ASPRS. This will serve both data providers and contracting agencies alike by providing:
