Tuesday’s panel, “Geomatics to Geospatially Intelligent Services,” focused on integrating geomatics and other foundational data and delivering it in actionable form. That is a particularly tall order, touching on analysis, imagery and visualization, and delivery.
Kyle McCullough, Director of Modeling and Simulation at the University of Southern California’s Institute for Creative Technologies (ICT), pointed to the ICT’s work with the U.S. Army on the One World Terrain Program as well as work they do for other military research groups. Their goal, he said, is to take “high-res data, to semantically understand it and break it apart into the component pieces; being able to understand the material classification of a building and where the windows and doors are…[which] you can’t necessarily get from some of that foundation data and be able to have some more information that can be fused into this larger data set that can ultimately be authoritative.”
“Geospatial data is an incredibly powerful tool and resource, and spatial referencing is an incredibly powerful tool and resource for analysts,” said Jackie Barbieri, founder and CEO of Whitespace. That robust data and imagery are made more valuable by overlaying critical relationships. “That’s the job of the sense maker—whether it’s human or machine or a combination thereof—to find linkages across information that’s been properly referenced and pass it on to decision-makers, supporting human security missions.”
Barbieri went on to talk about how cognition is an important feature of making geospatial information usable. “Humans are incredible pattern recognition machines. Our cognitive wetware is really well adapted to performing those types of tasks, almost regardless of the input. So, when you exploit the multiple sensory inputs that we have as individuals, auditory, tactile, visual and others, you create an opportunity to almost simulate synesthesia. When I encounter that sound and that visual—when I see those two things together, I know I need to act.”
Richard Cooke, Director of Business Development, Imagery and Cross Sectors at Esri, discussed how his group is working to integrate multiple technologies. “We’re investing quite a bit of R&D to bring AR/VR natively into the ArcGIS environment on a platform that is supportive of these technologies so that when this data is available and it’s available at scale, we can handle the volume and the veracity of the data.”
Demetrius Dozier, Special Program Manager in Microsoft’s Mixed Reality Government division, faces the task of delivering this information, including force locations, hazards and mission planning, in a mixed reality heads-up display. “We take all the different combat inputs that come in and we put that in their face. It’s a limited environment that we get to work in, and we have challenges,” such as connectivity, real-time delivery and functionality. “How do you control things when you’re not using your hands and [are just using] eyeballs to control [the interface]?”
Moderator Barry Tilton, Technology Evangelist at Maxar, also addressed the end user, and the risk that so much data could overwhelm a person, particularly a warfighter in a critical situation.
“It’s the challenge of filtering what’s necessary, what’s appropriate and what’s relevant,” said Dozier. “Getting to the point where it’s not a cognitive burden on the user or a burden on the network. We’re still operating within systems that weren’t designed to have this much data presented to that low of a level—the warfighter and post-combat force level. Before, some of these guys may not even have had radios, and now we’re trying to take all the data that we bring in, push it over legacy pipes, make sure it’s available to those that need to use it. And that’s a huge challenge.”