November 2008 (download a PDF 980Kb)
Q1: In aerial triangulation, once a least-squares adjustment has been run and the results have been found acceptable, with no blunders or residuals out of tolerance, there is a decision to be made: do you overwrite the given ground control values with the adjusted coordinates, or do you keep the original coordinates provided by the land surveyor?
Jon Martin, CP, Texas DOT
Dr. Abdullah: I would like to quote part of Jon Martin’s message that accompanied his question as he brings up a very interesting discussion on the topic that the reader needs to know about. In his message, Martin elaborated as follows:
“I’ve run this question by a number of colleagues. Among State DOTs, it appears that about half overwrite and half don’t. Dr. Hintz has suggested that the proper procedure is to overwrite the given ground control with the adjusted values. I tend to agree with Dr. Hintz because mathematically, it doesn’t seem to make much sense to not overwrite. Not overwriting means that you end up with a mix of best-fit tie points and non-adjusted survey control. In the big scheme of things, it shouldn’t make a lot of difference. However, some software, like the software that displays imagery in stereo, runs a second least-squares adjustment on the data set coming out of the analytical triangulation process to form the stereo model. It seems that this second adjustment would be more accurate if all of the points used were part of a best-fit solution rather than a mix. My Land Surveyor colleagues feel that the ground control has to be held as a fixed value. I don’t agree with this opinion. Unlike the survey world, we aren’t going to “re-occupy” an aerial photo derived map. Our map product is a final product and no subsequent mapping (or surveying) is going to be done using our map as a coordinate basis. I believe that the most accurate mapping is done using a least-squares, best-fit solution. Could you please weigh in on this issue?”
The question and the comments given above represent very common arguments within the aerial triangulation community. I myself surveyed colleagues in the field with the same question. Here are the different responses I received:
Colleague #1, Land Surveyor: “Absolutely, overwrite and hold fixed unless there is evidence of blunder. The way that I believe it works is: The bundle adjustment is run minimally constrained with only one control point fixed and all others free or set to very low weights. As the control is evaluated, those with low (acceptable) residuals should then be held fixed, infinite weight, per the surveyor, not allowing for any adjustment to those data points. This has been how I have run least-squares adjustments of geodetic control networks. Blunderous points need to be identified and removed by the survey adjustment process, and good control from a professional survey firm needs to have been redundantly measured, adjusted, and certified as to its accuracy at a given precision. The control survey should be magnitudes stronger than the air-trig, so the air-trig cannot supersede the values on any control point. That’s not to say that erroneous control doesn’t show up, and if held fixed it would cause problems. So, the air-trig is not intended to “prove” the surveyor correct, but errors are errors. And if the control doesn’t fit, that may indicate some other problem with the bundle as well.”
Colleague #2, Land Surveyor: “This is an age-old question without a known solution. In my view, the points must be adjusted with the rest in order to preserve the integrity of the adjustment. The surveyor’s control is not gospel; the points are prone to many types of errors, but would not be adjusted if held in place [Sic]. Thus, my solution is to preserve a copy of the original surveyor’s points to document what was provided and used, and then adjust the points with the solution, provided that the solution does in fact meet the tolerance requirements. The bottom line is that the probability that the surveyor would measure his points one day and then measure the same points the next day with two different answers is great. Therefore, beyond good field techniques, redundancy in a least-squares adjustment is the key to a good solution. One man’s view.”
He then added the next day, “I awoke thinking on this issue this morning and I have one additional point to add. When we are speaking of surveyor’s points, what order of control are we speaking of? First order or CORS points or ground control as provided by the surveyor? I believe there may be a difference in how the two should be treated.”
Colleague #3, Aerial Triangulation Specialist: “I say that you would overwrite with the adjusted control values for the main reason that individual measurements most likely would have inherent error even with their residuals being within tolerance. Using the adjusted coordinates would account for your network’s normal distribution of error. Just a thought. I’m not a surveyor or a CP.”
Colleague #4, Aerial Triangulation Specialist: “I would say not to overwrite, because the adjusted values were adjusted according to the given actual control values, and they show you how the actual control network should be. As per Colleague #3, it is also correct that adjusted coordinates would account for your network’s normal distribution of error; with the residuals being within tolerance, it will not make much difference if you overwrite.”
I hope you agree with me that this issue has been a point of contention between professionals in the fields of mapping and surveying since the beginning of analytical aerial triangulation. My view on this goes along with many of the opinions given above on the theoretical aspects of network controls and constraints. However, experience has taught me that what may sound theoretically correct may not necessarily be the only acceptable solution. We currently collect an average of 100 to 200 auto-correlated pass/tie points per frame, most of which are of excellent quality. In addition, most if not all triangulation today is performed with the help of the airborne GPS-measured camera position. The introduction of airborne GPS has changed the requirements for ground control, and only a sparse control network is needed when an aerial triangulation project is planned. The combination of the added constraints due to the GPS-controlled principal point, the minimal ground control (perhaps one control point for every 20 photos), and the high density of pass/tie points has definitely weakened the effect of ground control points on the final computation or re-computation of the exterior orientation parameters. In my opinion, the question of whether or not to overwrite the original control points used in the bundle block solution can be answered in two ways, as follows:
1. If the aerial triangulation software restricts you to the production of the exterior orientation parameters derived from the airborne GPS-controlled bundle block adjustment only, then you have no choice and the adjusted coordinates of the ground control will be used in the solution. This is the case when you adjust the block using airborne GPS, the ground control points, and possibly the IMU-derived orientations, and you then use the exterior orientation derived from this solution for stereo compilation or ortho rectification.
2. If the software routinely re-computes the exterior orientation parameters of each frame after the final bundle block adjustment has been performed and accepted and all the tie/pass points’ coordinates are replaced with the final adjusted ground values, then the issue of overwriting will depend on the number of pass/tie points used in each frame. Examples of different methods of re-computing the exterior orientation parameters vary with the software and user preferences. For example, Albany performs a space resection solution, while ISAT of Intergraph performs a so-called bulk orientation. Some users prefer to perform an additional conventional adjustment using the adjusted pass/tie points following the original airborne GPS adjustment. With the introduction of softcopy aerial triangulation, the subject of using the original surveyed coordinates or the adjusted coordinates for the ground control points has become irrelevant to a certain degree. To simplify the matter further, previously, when we used only three principal pass points per photo, the entire frame during orientation (space resection) was controlled by an average of nine pass, tie, and perhaps a few control points. In this case the control had a higher weight in the least-squares adjustment, and using adjusted coordinates versus original surveyed coordinates for ground control points could have a drastic impact on the photo orientation during mapping. This is not the case with the auto-correlated collection of tie/pass points. Most softcopy aerial triangulation packages perform either space resection or bulk orientation after all the pass/tie points are adjusted and densified into control points. Therefore, having one surveyed control point, if any, among hundreds of pass/tie points turned into control points has minimized the effect of the original ground control on the final exterior orientation computation for that individual frame.
The individual control point or two present among hundreds of photo control points will carry minimal weight and will be outweighed by the dense network of densified pass/tie points in the final exterior orientation computation.
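This weight dilution is easy to quantify. The sketch below (hypothetical weights and point counts, treating the resection loosely as a weighted combination of observations) contrasts the old three-pass-point regime with a modern frame carrying hundreds of densified pass/tie points:

```python
# Sketch with hypothetical numbers: the relative influence of one surveyed
# control point among N densified pass/tie points in a single-frame
# resection, modeled loosely as a weighted combination of observations.
def control_influence(n_tie_points, w_control=1.0, w_tie=1.0):
    """Fraction of the solution's total weight carried by the one control point."""
    total = w_control + n_tie_points * w_tie
    return w_control / total

# Old regime: roughly nine pass/tie points per photo.
old = control_influence(n_tie_points=9)
# Modern regime: hundreds of auto-correlated pass/tie points per frame.
new = control_influence(n_tie_points=300)
print(f"old regime: {old:.1%}, modern regime: {new:.1%}")
```

With equal weights, the single control point carries about 10% of the weight in the old regime but well under 1% in the modern one, which is the quantitative content of the claim above.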
Based on the above, my recommendation is that if you are performing aerial triangulation today with hundreds of adjusted pass/tie points and you are re-computing the exterior orientation parameters after the final bundle block adjustment has been finalized and accepted, it does not really matter whether you overwrite or not. However, if the aerial triangulation was performed 20 years ago, then it is a different story.
Finally, as for the question of whether one should, on a routine basis, overwrite the given ground control values with the adjusted coordinates or keep the original surveyed coordinates as provided by the land surveyor, I believe that the adjusted coordinates should be used for all subsequent computations or orientation. This is due to the fact that the mathematical and statistical models have found the best fit for that ground control within the different elements of the block. Introducing a different set of coordinates (in this case the one provided by the land surveyor) will offset that balance or fit, assuming that all of the measurements and values used in the aerial triangulation were of high quality. To provide an example for this argument, assume that there is one control point that the mathematical model found to be erroneous by about 40 cm. The new adjusted value, which is off by 40 cm from the surveyed value, fits the entire network of the block well. Introducing the original value (erroneous according to the math model) in any subsequent computations of the network, or part of it, will cause a misfit between that control point and the adjacent points.
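The 40 cm example can be made concrete with a one-dimensional sketch. All coordinates below are hypothetical; the point is only to show that reverting to the surveyed value re-introduces the misfit against the adjusted network:

```python
# One-dimensional sketch with hypothetical coordinates: the block
# adjustment has moved a control point 40 cm from its surveyed value so
# that it fits the surrounding adjusted pass/tie points.
surveyed = 100.00                         # surveyor's original coordinate (m)
adjusted = 100.40                         # best-fit value from the adjustment (m)
neighbors = [100.38, 100.41, 100.42]      # adjusted points tied to this control

def rms_misfit(value, points):
    """RMS disagreement between a coordinate and its adjusted neighbors."""
    return (sum((value - p) ** 2 for p in points) / len(points)) ** 0.5

print(f"misfit using adjusted value: {rms_misfit(adjusted, neighbors):.3f} m")
print(f"misfit using surveyed value: {rms_misfit(surveyed, neighbors):.3f} m")
```

The adjusted value disagrees with its neighbors by a couple of centimeters, while re-injecting the surveyed value produces a misfit of roughly the full 40 cm, exactly the imbalance the paragraph above describes.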
September 2008 (download a PDF 594Kb)
Q: When shopping for lidar data, how do I know what point density I need for my project and whether I need breaklines to support the terrain modeling?
Dr. Abdullah: In my last article, I answered this question in terms of lidar data acquisition requirements for different terrain modeling applications.
In this issue, I will address the question as it pertains to requirements for 3D modeling applications.
3D Urban Modeling Applications: The high density of lidar point clouds meets wide acceptance in different user communities who need high-definition elevation data for applications other than terrain modeling. These applications include, but are not limited to, line-of-sight analysis, 3D city modeling, 3D urban fly-throughs and simulation, and security and emergency planning. Homeland security agencies, for instance, have shown a strong interest in the use of dense lidar datasets for intercity combat planning and high-profile security planning. In addition, the emerging capabilities of oblique imaging and modeling have added a greater emphasis on high-quality and high-definition elevation data; requirements that would be cost prohibitive without lidar technology. In most urban modeling applications, users are more concerned about the definition and details of the lidar dataset than with centimeter-level accuracy. Most 3D city modeling can be achieved with a lidar point density of 5 to 10 points per square meter. We are, however, witnessing an emerging new market for dense to ultra-dense lidar data, and many lidar providers are equipping their operations with sensors designed to meet such demand. Figures 1 and 2 illustrate the quality of the scene as represented by lidar intensity with a density of about 20 points per square meter. It is amazing how fine the details are that such data provides.
Bio-mass and Forest Modeling: Lidar point clouds have also proven to be very effective in studying and modeling the forest floor and canopy. Lidar-derived spatial data ultimately can be used to achieve the following resource management goals:
- accurate inventory and composition of forested land,
- harvest planning,
- habitat monitoring,
- watershed protection, and
- fuel management (for fire management).
Furthermore, the “Mapping Matters” article published in the November 2007 issue of PE&RS provides more details on this very same subject. In that article, I suggested a lidar point density of 0.1 to 10 points per square meter, depending on the nature of the study.
August 2008 (download a PDF 522Kb)
Q: When shopping for lidar data, how do I know what point density I need for my project and whether I need breaklines to support the terrain modeling?
Dr. Abdullah: The subject of point density in lidar datasets and the resulting accuracy of derived products is of great importance, both to users and providers of lidar data. Unfortunately, there are no set rules to govern this topic, leaving many users to establish their own guidelines when requesting lidar data acquisitions. This fact becomes very obvious when studying the point density requirements specified by different requests for proposals (RFPs). At a loss in this ever-confusing topic, many users request lidar data with sub-meter post spacing to achieve an accuracy that is easily obtainable with less dense lidar datasets. Unless the task calls for 3D modeling of above-ground manmade or natural features, asking for highly dense lidar data may harm the budget with very little accuracy benefit, especially when the collection of breaklines is requested.
During the Second National Lidar Meeting held recently at the USGS headquarters in Reston, Virginia, speakers presented a variety of views and levels of understanding as to what constitutes a reasonable and practical lidar dataset. The most misleading approach is the one calling for a lidar database to fit the broad needs of all users, and here I mean all users, including those whose applications require 10 points or more per square meter! An advocacy call like this not only wastes valuable taxpayer money, but also makes for an impossible task as there is very little capital available for such an expensive undertaking…unless you live in the UAE, that is.
With the above phrases, I have made my political statement clear, so now let us get to the technical heart of the matter. Lidar data specifications should be tagged with user-specific needs and specifications. In order to address the issues adequately, my response will span the next few issues of the column due to the limited space allocated for each article.
The following sections represent different user communities’ requirements and the recommended data specifications:
1. Terrain Modeling Applications: Terrain modeling is a requirement of nearly all lidar projects, spanning a wide range of uses and specifications. The most common terrain modeling applications requested by lidar users follow.
a. Contours generation: The dwindling use of paper (hardcopy) maps combined with advancements in 3D terrain modeling software capabilities have driven down the need for traditional contour generation. The demand for contours and contour specifications in RFPs involving lidar data collection continues, however, despite availability of new terrain modeling and viewing methods, such as 3D rendering and shaded relief maps. To create lidar-based contours that meet cartographic and geometric qualities, lidar data with modest post spacing of around 2 to 4 meters can be augmented with breaklines derived from image-based photogrammetry. If imagery is not available for breakline production, then a “lidargrammetry” approach is possible. In this method, very dense lidar datasets with post spacing of around 1 meter are used to create detailed stereomates by draping the lidar intensity image over the lidar DEM; these stereomates are then used to generate breaklines using any stereo-softcopy system. Once the breaklines are collected, either through photogrammetry or lidargrammetry, the lidar points can be thinned to a great degree, depending on the complexity of the terrain. The thinned dataset is then merged with the breaklines to create a digital terrain model (DTM), required for modeling the contours. In addition, all lidar points within a buffered distance around the breaklines should be removed to achieve acceptable contours without sacrificing accuracy. This process makes sense, as that is how we have always modeled contours from a DTM. The issue of utilizing breaklines in modeling contours from lidar data often gets confused, however, as service providers attempt to mix very dense and accurate lidar data with manually collected and possibly less accurate breaklines. Without buffering or thinning the lidar points close to the breaklines, the contours will appear problematic whenever a lidar point appears right next to a breakline. 
The last statement is true even for a photogrammetrically collected and modeled DTM. In constructing lidar-derived DTMs, we should consider all the best practices developed and utilized for modeling photogrammetric DTMs during the past decades. A good quality DTM is achieved by having accurately modeled breaklines and a minimum of mass points outside the breaklines, added only when necessary. Lidar indiscriminately collects dense mass points throughout the project area, including on and around the later-collected breaklines. Unless the lidar data is thinned and the breaklines are buffered to clear the lidar points around them, it will be very difficult, if not impossible, to achieve cartographically acceptable contours.
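The buffering step described above can be sketched in a few lines. This is a minimal illustration with made-up coordinates and a single breakline, not a production filter: mass points falling within the buffer distance of any breakline segment are dropped so the breakline alone controls the surface there.

```python
# Sketch with hypothetical coordinates: drop lidar mass points that fall
# within a buffer distance of a breakline, as described above.
def dist_point_to_segment(p, a, b):
    """Shortest 2D distance from point p to the segment a-b (all (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                       # degenerate segment: a point
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Parameter of the closest point on the infinite line, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def thin_points(points, breakline, buffer_m):
    """Keep only mass points farther than buffer_m from every breakline segment."""
    segments = list(zip(breakline, breakline[1:]))
    return [p for p in points
            if all(dist_point_to_segment(p, a, b) > buffer_m for a, b in segments)]

breakline = [(0.0, 0.0), (10.0, 0.0)]            # a stream edge, say
points = [(5.0, 0.5), (5.0, 3.0), (12.0, 0.2)]   # lidar mass points
print(thin_points(points, breakline, buffer_m=1.0))
```

The point 0.5 m from the breakline is removed; the points outside the 1 m buffer survive and would then be merged with the breakline into the DTM.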
b. 3D Terrain Modeling: Most modern lidar data users and providers are equipped with 3D modeling software that allows them to model lidar data for different applications, such as flood and environmental hazard analysis, watershed management, etc. Depending on the required vertical accuracy of the model, many applications can utilize lidar data without the need for breaklines. However, for hydro-enforced terrain modeling, where the user expects a uniform downhill flow of water, breaklines or manual 3D modeling is required around such water features to assure the effect. For most applications, lidar data with post spacing of 1 to 2 meters is adequate. Hydro enforcement of lidar-derived terrain models is still cumbersome and costly, and logical automation is strongly needed in this field.
(To be continued in the next issue of PE&RS)
July 2008 (download a PDF 160Kb)
Q: What is meant by color or colorized lidar and what is it used for?
Answer: The literal meaning of the terms “color lidar” or “colorized lidar” could imply two different things:
The latest topographical lidar data processing techniques utilize imagery to aid in the interpretation and filtering of lidar data. Many vendors are now acquiring digital imagery concurrent with the lidar data mission. Having an integrated lidar/digital camera solution provides many advantages for data providers and users alike. On the data providers’ level, the digital imagery, whether natural-color (RGB) or color-infrared (CIR), can be used for:
- Generating simply georeferenced or accurately orthorectified imagery to aid in terrain analysis and interpretation when attempting to convert the lidar data to a bare-earth elevation model. The orthorectified imagery can also be provided to the end user as a useful by-product with minimum cost;
- Assigning the spectral color of the digital imagery to the corresponding lidar returns (points) that fall within the same geographic location as the digital pixel of the imagery. This more sophisticated technique results in pseudo lidar intensity or elevation data that resembles the color digital imagery. Such products can greatly benefit the interpretation and examination of the lidar surface, since the human brain functions more efficiently in interpreting colorized terrain data as opposed to black-and-white datasets.
- Applying supervised or unsupervised digital image classification, an advanced concept widely used in remote sensing applications, to spectrally classify imagery and then assign these spectral classes to the lidar data in a fashion similar to the technique described above. Accomplished by using specialized processing software, the spectral classification of the digital imagery delineates with great success the different terrain cover categories, such as water bodies, vegetation types, and impervious surfaces, that are difficult to achieve from lidar data alone. Once the results of the spectral classification are attributed to the lidar points, the filtering software utilizes this new attribute information, combined with the spatial properties of the lidar surface (elevation and slope), to come up with the most accurate and automated way of classifying and filtering the lidar surface. A technique like this not only enhances the quality of the bare-earth elevation model but also reduces costs by minimizing or eliminating many of the manual editing and filtering efforts.
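The pixel-lookup step behind the colorization techniques above can be sketched simply. Everything here is hypothetical toy data: the "orthoimage" is a nested list standing in for a georeferenced raster, and the points are (x, y, z) returns in the same coordinate system.

```python
# Sketch with hypothetical data: attribute each lidar return with the RGB
# of the orthoimage pixel it falls in -- the "colorized lidar" idea above.
def colorize(points, image, origin, pixel_size):
    """Attach the RGB of the containing pixel to each (x, y, z) return."""
    x0, y0 = origin                       # upper-left corner of the raster
    colored = []
    for x, y, z in points:
        col = int((x - x0) / pixel_size)
        row = int((y0 - y) / pixel_size)  # image rows run top-down
        rgb = image[row][col]
        colored.append((x, y, z, rgb))
    return colored

image = [[(200, 200, 200), (30, 120, 40)],   # a 2 x 2 pixel "orthoimage"
         [(10, 10, 10),    (90, 60, 30)]]
points = [(0.5, 1.5, 12.0), (1.5, 0.5, 9.0)]  # x, y, z in raster units
print(colorize(points, image, origin=(0.0, 2.0), pixel_size=1.0))
```

A real workflow would read the raster and point cloud with geospatial libraries and handle points outside the image footprint, but the row/column arithmetic is the heart of the technique.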
The term “green laser” is widely used to describe the bathymetric lidar used for three-dimensional, high-precision surveys of seabeds and objects in the water column. Using light energy to penetrate seawater in much the same way a multi-beam echo sounder uses sound, bathymetric lidar systems usually comprise a twin laser generator (in the red-infrared and blue-green portions of the electromagnetic spectrum) providing an effective depth-sounding frequency. The basic laser sounding principle is similar to acoustic methods. A pulse of laser light is transmitted from the system toward the water surface in a predefined pattern. The red-infrared laser light is reflected at the water surface, whereas the blue-green laser light penetrates into the water column and reflects from the objects or particles along the laser path, or from the seabed if it makes it all the way there. The water depth is equal to half the time elapsed between the two echo pulses, multiplied by the speed of light in water (the one-half accounts for the pulse’s two-way travel). Typical water depth penetration is in the range of 20–40 m but, in good conditions, depths as great as 70 m are possible.
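The sounding geometry reduces to one line of arithmetic. The sketch below uses the approximation that light in seawater travels at about c/1.33 (refractive index of water); the echo separation in the example is a hypothetical value chosen to give a round depth.

```python
# Depth from the surface (red-infrared) and bottom (blue-green) echo times.
# The one-half accounts for the two-way travel of the pulse; the speed of
# light in seawater is approximately c divided by the refractive index 1.33.
C_WATER = 3.0e8 / 1.33          # ~2.26e8 m/s, approximate

def water_depth(t_surface_s, t_bottom_s):
    """Water depth in meters from the two echo arrival times (seconds)."""
    return 0.5 * (t_bottom_s - t_surface_s) * C_WATER

# A 266 ns separation between the two echoes corresponds to about 30 m:
print(f"{water_depth(0.0, 266e-9):.1f} m")
```

At the 20–40 m depths quoted above, the two echoes therefore arrive only a few hundred nanoseconds apart, which is why bathymetric lidar receivers need very fine timing resolution.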
June 2008 (download a PDF 604Kb)
Q: I am looking for a brief but encompassing overview of the map accuracy standard(s) used in the United States of America to evaluate geospatial data accuracies and whether it applies internationally.
This answer is available in PDF form only. Click the link above. Thank you.
April 2008 (download a PDF 66Kb)
Q: How effective are lidar datasets in mapping land features such as roads and buildings?
Answer: Constructing two-dimensional and three-dimensional building models and other land features requires accurate delineation of sharp building edges, and this traditionally has been accomplished using photogrammetric stereo compilation. All past attempts to automate the extraction of ground features from dense lidar datasets with a post spacing of one to three meters for the purpose of planimetric mapping have failed for one reason: the rounded and jagged edges of delineated buildings and other manmade features. Despite some software efforts to employ smart algorithms that correct the geometrical shape of objects, this type of modeling remains less appealing to service providers and customers alike, as it does not meet horizontal map accuracy requirements for large-scale mapping. The ASPRS standard requires buildings to be placed within one foot of their true ground position when compiled from a map with a scale of 1”=100’. Recent advancements in lidar systems enable the collection of ultra-dense lidar datasets with a point density of five to eight points per square meter (ppsm), which makes the data more suitable for use in the aforementioned modeling algorithms. The downside of such demanding software requirements is the high cost associated with the aerial acquisition due to the additional flight lines required to collect the ultra-dense lidar data. Traditionally, a lidar dataset used for terrain modeling is collected with a nominal post spacing ranging between one meter and three meters, or a point density ranging between one ppsm and 0.11 ppsm. The ratio between the data densities of the normally collected dataset and the ultra-dense dataset ranges from 20:1 to 32:1. This is true if we assume the normal-density dataset is collected with two-meter post spacing, or 0.25 ppsm. This does not necessarily translate to the same ratio in cost increase, but it could come very close.
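The density arithmetic in the paragraph above follows from one relation: points per square meter is the reciprocal of the squared nominal post spacing. A quick check of the 20:1 and 32:1 ratios:

```python
# Point density from nominal post spacing: ppsm = 1 / spacing^2.
def spacing_to_ppsm(post_spacing_m):
    return 1.0 / post_spacing_m ** 2

normal = spacing_to_ppsm(2.0)        # 0.25 ppsm at 2 m post spacing
for ultra in (5.0, 8.0):             # ultra-dense collections, in ppsm
    print(f"{ultra:.0f} ppsm is {ultra / normal:.0f}x the normal density")
```

The same relation gives the 0.11 ppsm figure quoted for three-meter post spacing (1/9 of a point per square meter).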
In many cases the high cost of acquisition coupled with the massive amount of resulting lidar data prohibits the use of ultra-dense lidar data for accurate building modeling and may encourage providers and customers to consider other means, such as traditional photogrammetric modeling, for this purpose. Finally, while ultra-dense lidar datasets may not currently be cost-effective for large-scale modeling, the technology is impressive in the sense of delineating details. A recent acquisition of an ultra-dense lidar dataset with about six ppsm over the city of Baltimore, Maryland, reveals great details of the baseball game underway in the Camden Yards baseball stadium, as shown in Figures 1 through 3. Baseball fans can easily observe which base was occupied at that moment!
March 2008 (download a PDF 66Kb)
Q: What does oblique imagery mean and how effective is it in GIS and mapping activities?
Answer: Rather than collecting imagery directly beneath the aircraft with the camera pointing at nadir, oblique imagery is acquired from an angled position. Oblique imagery has been used for decades by military intelligence during aerial reconnaissance missions. In recent years, however, its use has spread to the commercial market with an expanded range of applications. The modern approach combines oblique and vertical imagery to produce accurate 3D digital maps that can be interfaced with any modern GIS database or used for fly-through analyses and measurements. This solution has proven valuable to users in emergency management, law enforcement, homeland security, tax assessment, engineering, insurance, and real estate, among others.
Most existing oblique mapping systems comprise the following components:
- Acquisition subsystem: While oblique imagery traditionally was achieved by banking the aircraft so the camera could view the enemy lines from a safe distance, today’s commercial operations use a number of different techniques:
- Multi-camera subsystem. Multiple cameras are positioned on a base frame to view the ground from multiple look angles. The most popular oblique look angle is 45 degrees to the side of and ahead of the aircraft, while nadir cameras provide the orthogonal view.
- Scanning camera subsystem. Rotating cameras and/or mirrors are positioned to obtain panoramic views of the ground. This approach is less popular than the fixed-frame multi-camera subsystem.
- Processing subsystem: The processing subsystem manages data download and georeferencing, making the imagery suitable for oblique applications.
- Applications subsystem: In my opinion, this is the most important part of the entire system, as it determines the value of the oblique mapping capabilities and therefore the value of the oblique mapping system. Often people confuse the terms “oblique imagery” and “oblique mapping.” Oblique imagery is easy to achieve in many different ways, but utilizing oblique imagery for GIS applications is more challenging and less understood. Oblique imagery can be used to provide the following:
- Wide area coverage for generation of orthoimagery and reconnaissance-type investigations. Most commercial orthoimagery generation software can incorporate oblique imagery to produce orthorectified image maps for this purpose.
- 3D textured object viewing and data extraction of buildings and other ground objects. This type of application provides fly-through-style analysis and measurements. Buildings are textured by adding facades for a realistic appearance. The details of these facades are obtained either from the 45-degree oblique imagery, from on-site close-range photography, or from a combination of the two using sophisticated mathematical models and ray tracing techniques, with or without an accurate and detailed digital elevation model of the buildings. Most “oblique mapping” data requires special, and in many cases proprietary, software for viewing and extracting data. In addition, much oblique application software requires accurate and expensive building elevation models in the form of a wire-frame building model. The most attractive solution is one that does not require the prior existence of a wire-frame building model, as producing one is cumbersome and very costly, despite the side benefit of providing an accurate 3D model that can be used for other applications.
January 2008 (download a PDF 73Kb)
Q: How effective are GIS and mapping techniques for rapid response and rescue efforts?
Answer: Aerial survey is the most effective way to monitor and survey damage from catastrophic events. Reasons for this include:
- the vantage point aerial imagery provides at a safe distance from the event site,
- the effectiveness of modern imaging and sensing technologies in collecting vast varieties of digital data crucial to assessing affected areas (lidar, multispectral imagery, hyperspectral imagery, and thermal imagery in addition to traditional aerial imagery), and
- the rapid turnaround time for disseminating data and derived information to decision makers and first responders, thanks to advancements in data processing and storage capabilities.
The increased ability of both public and private agencies to distribute, visualize, and manipulate large sets of raster data over the World Wide Web is another major advantage to using aerial survey data for response and recovery efforts. The Web provides an easy and effective means for sharing geospatial data at record speed. The USGS Earth Resources Observation Systems (EROS), for example, established a public-access website (http://gisdata.usgs.net/hazards/katrina/) in support of disaster response activities following Hurricane Katrina. Within hours of Katrina making landfall on the Gulf Coast, EROS began uploading imagery to state and local governments, FEMA, and other federal agencies—providing imagery and lidar-derived contours of the devastated area, which helped prioritize the recovery efforts.
Many geospatial vendors have provided valuable services in recent years to different public and private agencies during emergency situations. Enabling more informed decision-making for the timely allocation of limited resources, these services can help reduce human suffering and save lives. Amidst nearly all recent major natural or manmade disasters, such as the 9/11 tragedy at the World Trade Center, Hurricanes Katrina and Rita, and the recent California wildfires, concerned agencies managed within hours of the event to contract with willing vendors to provide “rapid response mapping” services. The main elements of an integrated and effective rapid response mapping system are:
- Mission Planning Subsystem: Effective planning determines the logistics required to execute the mission at reduced time and cost while maintaining the quality of the final products. In addition, mission planning systems can include models that enable stakeholders to automatically determine all support elements needed for field- or office-based activities, including historical and publicly available geospatial data. Assessing airport availability, power sources, and other services in or around the disaster area is mandatory before deployment.
- Data Acquisition Subsystem: An effective aerial system includes one or more sensors such as a digital camera and/or thermal, lidar, multispectral, or hyperspectral sensors. In addition, auxiliary data collection systems such as GPS and IMU are crucial as they shorten the turnaround time of data processing by providing the proper sensor georeferencing.
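The GPS/IMU georeferencing mentioned above is what makes the short turnaround possible: once the camera position and attitude are known for each exposure, an image point can be projected directly to the ground without waiting for ground control or aerial triangulation. The following mono-plotting sketch illustrates the idea; the function names, the flat-terrain assumption, and all numeric values are illustrative and not drawn from the column.

```python
# Illustrative sketch of direct georeferencing (mono-plotting): intersect
# an image ray, oriented by GPS/IMU exterior orientation, with an assumed
# flat terrain surface. Values and names are hypothetical.
import math

def rotation_matrix(omega, phi, kappa):
    """Object-to-image rotation matrix for the standard photogrammetric
    omega-phi-kappa sequence (angles in radians)."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    return [
        [cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,      -so * cp,                 co * cp],
    ]

def image_to_ground(x, y, f, exterior, ground_z):
    """Project image point (x, y) (focal length f, same units) to the
    horizontal plane Z = ground_z using exterior orientation
    (X0, Y0, Z0, omega, phi, kappa) from GPS/IMU."""
    X0, Y0, Z0, omega, phi, kappa = exterior
    R = rotation_matrix(omega, phi, kappa)
    # Ray direction in object space: R^T * (x, y, -f)
    u = R[0][0] * x + R[1][0] * y - R[2][0] * f
    v = R[0][1] * x + R[1][1] * y - R[2][1] * f
    w = R[0][2] * x + R[1][2] * y - R[2][2] * f
    scale = (ground_z - Z0) / w  # stretch the ray down to the terrain
    return X0 + scale * u, Y0 + scale * v

# Example: near-nadir exposure 1500 m above flat terrain at 300 m elevation
eo = (500000.0, 3300000.0, 1800.0, 0.0, 0.0, 0.0)   # metres, radians
X, Y = image_to_ground(0.01, 0.02, 0.153, eo, 300.0)  # 153 mm focal length
```

With a lidar- or DEM-derived terrain surface in place of the flat plane, the same intersection yields orthorectified ground coordinates per pixel, which is why the auxiliary GPS/IMU data shortens processing so dramatically.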
- Data Processing Subsystem: Data processing can follow one of two scenarios, depending on mission requirements:
- Deployable Data Processing System: Such systems, which can be carried aboard the acquisition aircraft or trucked to the affected region, vary in capability and sophistication depending on the size of the project and the contractual terms governing product turnaround time. In general, a deployable system contains all the software and hardware necessary for processing airborne GPS/IMU data, performing aerial triangulation if necessary, and orthorectifying the aerial imagery.
- Office-Supported System: Ordinary mapping production-line environments not normally dedicated to rapid response mapping can become totally or partially dedicated to processing rapid response data as it arrives from the field by express mail. The main advantage of this approach is its far greater processing capability compared with the hardware that can be transported to and installed in the field. However, product turnaround time is longer with this approach: the schedule must budget for overnight shipping of the collected data and for disseminating the final products back to rescue and response staff in the field.
- Information Dissemination and Dispatching Subsystem: This is shaped by the combined efforts of the contracting agencies, whether governmental or private, and the firm executing the production mission. Final products and information can be disseminated in digital or paper form, as required by end-users. First responders and rescue staff most likely prefer paper maps for visual interpretation, while decision makers and planners may prefer the digital and statistical results needed for further analysis. As I mentioned earlier, the World Wide Web and ftp servers are becoming very popular dissemination tools. However, caution must be exercised in planning dissemination efforts around the conditions in the struck area; in many cases, the lack of basic services such as power or internet connectivity hinders electronic data transfer.
Increased attention to homeland security and emergency preparedness, combined with the USGS/FEMA response to recent natural disasters, has contributed to the commoditization of rapid response mapping products. In addition, recent success on numerous rapid response projects spread over a wide array of events has encouraged many manufacturers to offer complete aerial systems providing automated and near real-time production of GIS data for emergency events. In order to serve the ever-evolving rapid response mapping industry, national guidelines and specifications for rapid response need to be developed through cooperative efforts among FEMA, the USGS, the Department of Homeland Security, and most definitely ASPRS. This will serve data providers and contracting agencies alike by providing:
- accelerated negotiations and contracting during time-critical situations through well-defined specifications suitable for rescue and disaster management projects; and
- a mutual understanding of what deliverables are required in rapid response situations, thereby eliminating confusion between the standard mapping products required for engineering-grade study and design and the products used for rescue and disaster management. Endorsing less stringent accuracy requirements for rescue and disaster management tasks results in faster turnaround and less expensive contracts.