PE&RS November 2015 - page 826

Speaking concretely about the AR Sandbox, the fact that it is self-contained and mostly self-explanatory means it can be installed in public venues such as science museums without requiring dedicated staff to be on hand at all times. While many AR Sandbox installations offer facilitation, most offer unattended free exploration at least part of the time. This means that large numbers of people can be exposed to earth science concepts with relatively little effort.
How has VR gone from being a gadget to being a valuable tool in scientific research?
Like any computer hardware, VR is useless without software. VR became a general-purpose research instrument the moment we started developing software that allowed scientists to visualize and analyze their data in ways that were not possible before. At a fundamental level, the strength of VR lies in its nature as a pseudo-holographic technology: VR can present virtual three-dimensional objects in a way that makes users' brains believe they are real objects. On a technical level, this means VR can present 3D data without the distortions inherent in traditional 3D graphics on 2D displays; on a perceptual level, it means VR taps into parts of the brain that are much older and have evolved over a much longer time period than those that deal with 2D imagery.
Being able to make sense of the three-dimensional world via 3D vision is an important survival trait, and VR allows us to apply those same skills to the kinds of 3D data that are very common in the physical sciences, especially the Earth sciences. Practically, this means that scientists can understand spatial relationships inside complex 3D data more quickly and accurately using VR than using traditional visualization methods. The input side of VR, which lets users apply their natural dexterity to 3D data via hand-held input devices, allows them to measure, isolate, extract, and even manipulate or create data in ways previously impractical.
What has been your biggest challenge in working in this
field over the last 15 years?
The biggest challenge in VR is that it cannot be described adequately in words, or even in images or videos. Unless someone has personally experienced VR, it is almost impossible to explain to that person what it actually does on a perceptual level, and how it is useful specifically for scientific applications. Combined with the bursting of the first VR bubble in the '90s and the resulting general impression that VR does not work, this has made it very difficult to secure funding for VR-related research via grant applications, or to publish results from such research. A grant reviewer who believes VR is a dead technology might be reluctant to fund further research developing it, and a manuscript reviewer who believes VR does not work might be overly skeptical of any results obtained via VR methods.
We have made a point of directly exposing as many people as possible to our VR systems and software, and have had large numbers of cases where people were anywhere from highly skeptical to outright hostile before we dragged them into the CAVE, and then changed their minds once they saw how it works and how it allows us to analyze data.
We are hoping that the current VR renaissance will make this easier, as much larger numbers of people will have a chance to experience VR first-hand. But even today, due to the still small number of commodity VR systems in the wild, we have not yet seen much change in the attitude towards VR at large.
Where do you see the biggest advances having been made in the VR field?
Because VR taps more directly into the user's brain than traditional 2D or 3D graphics methods, it has much more stringent hardware and software performance requirements. In a "regular" graphics application, slow rendering times or intermittent short lock-ups are slightly annoying, but a VR system that drops below a 60 Hz frame update rate, or locks up for a fraction of a second, can literally make its user sick.
VR has relatively well-defined performance thresholds, above which it works and below which it does not. The biggest change over the last twenty years or so has been that the continual improvement of computer hardware performance at some point crossed the threshold that allows useful applications to work reliably. This is the main reason why VR (mostly) did not work in the '90s, but works today.
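As an illustration of why the threshold is so sharp, a fixed refresh rate translates directly into a hard per-frame time budget. A minimal sketch (the function name is illustrative, not from any VR toolkit mentioned here):

```python
# Sketch: a VR display's refresh rate fixes the time available to render
# each frame; everything (input, simulation, rendering for both eyes)
# must finish within this budget, or the frame is dropped and the user
# perceives judder, which in VR can induce motion sickness.

def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / refresh_hz

# At the 60 Hz rate mentioned above, the budget is about 16.7 ms.
print(f"{frame_budget_ms(60.0):.1f} ms per frame")
```

The point is that a system which takes 17 ms per frame instead of 16 ms is not "a little slower" for the user; it misses every refresh deadline, which is why VR works or does not work rather than degrading gracefully.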
On the software side, the biggest advances have been in software infrastructures that make VR applications, which were previously limited to very specific system types, portable to a very wide range of systems. These infrastructures are what allowed us to experiment with low-cost VR systems, because our existing software automatically worked on all of them without any additional effort.
How do you see it affecting the geospatial industry in the
future? Perhaps as a tool to help us map other planets?
I have noticed that the geospatial field is moving more and more from 2D data, such as maps, to 3D data, such as highly detailed 3D environment scans. While 2D maps are a natural fit for 2D display devices, 3D scans are not. In fact, one of our most-used VR applications, LiDAR Viewer, was created specifically to allow users to naturally interact with large and complex 3D LiDAR scans. Seeing a LiDAR scan as a three-dimensional object, potentially at 1:1 scale, inside a VR display is fundamentally different from seeing a picture of a LiDAR scan on a 2D display. Even in VR, a LiDAR scan is obviously not a real environment, but our scientist users have found that it is realistic enough that
continued on page 830