The United States Geospatial Intelligence Foundation’s 2021 GEOINT Community Forum, The Geospatial Metaverse – Infrastructure, Tradecraft, and Applications, included a stellar discussion amongst esteemed panelists regarding the benefits and opportunities of increased spatial resolution and accuracy in 3D terrain models in mission planning, training, and modeling.
The Evolution of 3D Terrain Models panel began with moderator Todd Johanesen, the Director of the Office of Special Programs at the National Geospatial-Intelligence Agency, noting that 3D terrain is experiencing a revolution of innovation and adoption among a variety of geospatial stakeholders. “3D GEOINT is critical to providing analytic content and visualizations that make sense of today’s threats and anticipate tomorrow’s,” Johanesen opened. “This will lead us to change how we understand intelligence topics, both tactical and strategic, covering territory that extends from space to the surface of the Earth, to underground infrastructure, and to the seafloor.” The panelists then began to talk through the value chain for 3D terrain models.
Panelist Glenn Quesenberry, an Architecture, Standards, Test, and Certification Team Lead at the U.S. Army Geospatial Center, began by highlighting what he saw as the three primary issues of 3D modeling: timeliness, fidelity, and the authority to create a 3D environment. Quesenberry posed, “How can we decrease the time from data collection to 3D fusion? How can we use LiDAR sensors and satellite imagery from all over the world and get a faster, more accurate picture of what is happening right now, anywhere on the planet?”
Following up on that, panelist Richard Cooke, the Director of Imagery at Esri, noted that the push for high spatial accuracy was being driven by a growing demand for 3D to be both operational and available across a variety of workflows.
“From a commercial standpoint, these goals are largely economic. As you can imagine, people want to create new revenue streams, or they want to do digital transformation in a way that optimizes operations, reduces costs, and increases safety,” Cooke said. The transition from planning to operations is difficult without high spatial accuracy and subsequent compatibility with 5G.
Additionally, in response to the COVID-19 pandemic, Cooke predicts the United States will invest stimulus money in infrastructure. “One of the ways to get from legislation to shovel in the ground faster is to use digital twins/3D immersive environments to do the upfront survey and planning activities,” he noted. Further, ongoing monitoring, maintenance, and combination with IoT sensors are all tasks related to such projects that can be enhanced through 3D modeling but which require spatial accuracy and precision to succeed.
Panelist Ryan McAlinden, Chief of One World Terrain at Army Futures Command, spoke about producer-consumer relationships as an additional challenge to 3D terrain and modeling. “Demand has got to be there,” he said. “The question is whether that continues or not, which we think it will…but then ultimately, how does the data get used in the systems? How do you make the systems compatible with it? How do you get it to the edge?”
McAlinden continued, “You can have one of the best centimeter models in the world, but if your systems can’t take them in—for whatever reason: hardware challenges, software limitations, etc.—then they are of no value.” Those interested in the utility of 3D modeling must be cognizant of both production and consumption. He cited experiences with RAM limitations and network problems in his professional pursuits that indicated a need to focus on distribution of content to the edge, as well as compression, detail, and tiling structures in order to better mitigate those concerns.
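McAlinden’s point about compression, detail, and tiling structures can be made concrete with a small sketch. The snippet below models the trade-off he describes: a quadtree tile pyramid where each level of detail (LOD) quadruples the tile count, and a bandwidth- or RAM-constrained edge device picks the finest LOD it can actually afford. The function names, tile sizes, and budgets are illustrative assumptions, not details from any system discussed on the panel.

```python
# Hypothetical sketch: choose the finest level of detail (LOD) in a
# quadtree terrain-tile pyramid that still fits an edge device's
# memory budget. All names and figures are illustrative assumptions.

TILE_BYTES = 4 * 1024 * 1024  # assume each streamed tile is ~4 MiB


def tiles_in_view(lod: int) -> int:
    """Quadtree: each LOD step quadruples the tiles over a viewport."""
    return 4 ** lod


def bytes_for_lod(lod: int, bytes_per_tile: int = TILE_BYTES) -> int:
    """Total payload if every visible tile is streamed at this LOD."""
    return tiles_in_view(lod) * bytes_per_tile


def finest_affordable_lod(ram_budget_bytes: int, max_lod: int = 10) -> int:
    """Walk down the pyramid until the next level would exceed budget."""
    lod = 0
    while lod < max_lod and bytes_for_lod(lod + 1) <= ram_budget_bytes:
        lod += 1
    return lod


# A device with 256 MiB free tops out at LOD 3 (4**3 * 4 MiB = 256 MiB),
# no matter how fine the source model is -- McAlinden's consumption limit.
print(finest_affordable_lod(256 * 1024 * 1024))
```

The sketch captures his core argument: a centimeter-resolution model is only as useful as the coarsest link in the delivery chain, so production decisions have to account for consumption limits.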
Panelist Jackie Barbieri, CEO of Whitespace, transitioned the discussion to end-stage dissemination of information. “It’s one thing for an analyst to sit in a role where they’re trying to make sense of all of this data, but then they also have to communicate to somebody who is going to do something about it,” Barbieri explained. “The more spatial accuracy we have in X, Y, and Z—and in time—the more feasible and viable it becomes to push machine assistance down the value chain and closer to sense-making.”
“We can more rapidly go from sensing to sense-making than we ever have been able to before,” she claimed. However, she underscored that it is still important to be respectful of human cognitive processes when designing technologies to enable them. “Our brains aren’t taking a 2D image and extracting the three-dimensional properties from it,” Barbieri noted.
After describing a use case regarding geospatial cell phone data and COVID-19 transmission, Barbieri reiterated that the 3D modeling value chain’s spotlight generally should be pointed upstream. Errors that begin at the start of the value chain can emerge in unexpected ways at its end.
Such errors, Barbieri explained, can extend beyond bandwidth or distribution; computational problems, for instance, may not surface until late in the 3D modeling distribution cycle.
Johanesen hopped back into the discussion to address questions from audience members. He asked, “What would be the ultimate goal: a global model? At what level of accuracy and what currency, based on new sensors and massive cloud computing? When will this be achievable?” McAlinden replied, “If someone asked me that a year or two ago, I would have said, ‘Yeah, I’d love a one-centimeter surface model for the entire planet wrapped in a WGS84 projection system with all the metadata associated with materials and feature types and composition.’ I think [that] is feasible from a technical standpoint, but it goes back to what I was talking about earlier, on the consumer side—it can’t just be a bunch of different high-resolution sources that are coming together and then being sent out to users who are trying to consume that.”
Quesenberry agreed that consumers don’t need every piece of raw data—they often are only looking for answers and direction. Regardless of how technology will advance in the coming years, there will always be better inventions, and consumer needs should be paramount. “We will never be at the highest resolution we want to be,” Quesenberry surmised, “but the faster we can address the need [of the consumer], we can get to that problem.”
Cooke paralleled that response from a commercial standpoint. “There’s an equilibrium state where spatial resolution, temporal resolution, and cost achieve harmony,” Cooke said. “Maybe that’s a meter or half-meter global coverage for a base map. As you dive down into areas of interest, maybe you get into that 10-centimeter to sub-10-centimeter in large urban areas. When you get to the site or synthetic training environment, maybe then you want to be in that 1- to 2-centimeter accuracy range.”
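One way to see why Cooke’s tiers settle into an “equilibrium” is that, for a surface model, data volume (and therefore collection and storage cost) grows roughly with the inverse square of ground-sample distance (GSD). The back-of-envelope sketch below is a hypothetical illustration of that scaling; the tier labels and numbers are assumptions drawn loosely from his examples, not figures stated on the panel.

```python
# Hypothetical back-of-envelope sketch of the cost side of the
# resolution/coverage equilibrium: halving the ground-sample
# distance roughly quadruples the data per unit area.


def relative_cost(gsd_m: float, base_gsd_m: float = 1.0) -> float:
    """Data volume per unit area relative to a 1 m global base map."""
    return (base_gsd_m / gsd_m) ** 2


TIERS = [
    ("1 m global base map", 1.0),
    ("10 cm urban area of interest", 0.10),
    ("1-2 cm site / training environment", 0.015),
]

for label, gsd in TIERS:
    print(f"{label}: ~{relative_cost(gsd):,.0f}x the data per unit area")
```

The quadratic blow-up is why, as the panelists note below, sub-centimeter accuracy everywhere is less a technical problem than an economic one: each finer tier is affordable only over a progressively smaller footprint.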
Barbieri and Cooke further agreed that accuracy would not remain a technical problem; it would, however, remain an economic one. Barbieri posed a follow-up question to her fellow panelists regarding the role that non-pixel or non-point-cloud data can play in informing 3D terrain.
“As the spatial accuracy of our observations from those devices [e.g., iPhones] improves,” Cooke said, “you’ll see more crowdsourcing and more accurate results from that crowdsourced data.” Until then, however, we must wait for common technologies to be equipped with more sophisticated sensors (like iPhones with LiDAR, for instance).
The panel ended on Johanesen’s final question: “How can we accelerate all of these interrelated technologies and processes?”
Barbieri provided insight from her perspective of the value chain, highlighting the importance of beginning with the consumer and optimizing with their needs in mind.
“Not that you know the answer,” she said. “But you know the question.”
USGIF thanks Todd Johanesen, Richard Cooke, Glenn Quesenberry, Jackie Barbieri, and Ryan McAlinden for their remarkable discussion and participation in the 2021 Geospatial Metaverse forum.