To build on successes and break new ground in machine learning and artificial intelligence (AI), innovators need to start small. Experts from USGIF’s Machine Learning and Artificial Intelligence Working Group gathered Tuesday at GEOINT 2017 and agreed on the importance of identifying goals for automation and understanding the landscape of captured data.
Juliane Gallina, partner and solutions director for IBM U.S. Federal Solutions, kicked the session off by highlighting the divide between narrow and general, multi-function AI. Since effective, general AI technology—like Iron Man’s fictional JARVIS—hasn’t yet been developed, she said companies should focus their efforts on AI applications with specific, discrete functions.
“I look to narrow artificial intelligence, microservices, and APIs, integrating many simple functions together,” Gallina said.
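The "many simple functions" approach Gallina describes can be sketched as a thin pipeline that composes narrow, single-purpose services. This is an illustrative sketch only; the function names and data below are invented, not part of any IBM product.

```python
# Hedged illustration of composing narrow AI functions rather than
# building one general system. All names and values here are invented.

def detect_objects(image_id):
    # Stand-in for a narrow computer-vision microservice.
    return ["vehicle", "crane"]

def geotag(image_id):
    # Stand-in for a narrow geolocation microservice.
    return (38.9, -77.0)

def summarize(objects, location):
    # Stand-in for a narrow report-generation function.
    return f"{', '.join(objects)} observed near {location}"

def pipeline(image_id):
    """Integrate simple, discrete functions instead of one general AI."""
    return summarize(detect_objects(image_id), geotag(image_id))

print(pipeline("img-001"))
# → vehicle, crane observed near (38.9, -77.0)
```

Each piece can be replaced or improved independently, which is the practical appeal of the microservices-and-APIs framing.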
Session moderator Tom Reed, director of solution architecture at NVIDIA, emphasized the importance of attacking “low-hanging fruit” before aiming for the moon, particularly by finding ways to alleviate laborious processes and automate them to free up human creativity and bandwidth.
Gallina referenced an IBM research initiative with TSA in which computer vision was applied to object detection for baggage screening. Simple object recognition as it exists today is effective for a variety of applications, but it isn’t enough, she said. More sophisticated visual recognition is the next logical step, and once operational it will begin answering harder questions about adversary behavior, for example, “What does missile launch preparation look like?”
Among the low-hanging challenges Reed spoke of is the task of curating and labeling data. Atrophy in the cognitive process undermines the ability to automate. Gallina offered the example of boolean queries that are often too fragile for analysts to touch, leaving them no way to know what an algorithm is missing during the discovery process.
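The brittleness Gallina describes is easy to demonstrate. In this hedged sketch (the documents and query terms are made up), a rigid boolean AND query matches only exact vocabulary and silently drops relevant material, with nothing in the result set indicating what was missed.

```python
# Illustrative sketch of a fragile boolean query; documents and
# terms are invented for the example.
docs = [
    "missile transporter observed at launch site",
    "TEL vehicle spotted near launch pad",      # relevant, different vocabulary
    "fuel trucks staged beside the gantry",     # relevant, shares no query terms
]

def boolean_and(query_terms, text):
    """True only if every term appears verbatim: no synonyms, no ranking."""
    words = text.lower().split()
    return all(term in words for term in query_terms)

hits = [d for d in docs if boolean_and(["missile", "launch"], d)]
# Only the first document matches. The other two are silently dropped,
# and the result gives the analyst no signal about what was missed.
```

Tightening or loosening such a query requires editing its terms directly, which is exactly the kind of change analysts are reluctant to make when they cannot measure what the query fails to retrieve.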
“People don’t really understand where their data comes from, how it got there, what to call it, or where it is because of this magical process,” Gallina said.
Todd M. Bacastow, DigitalGlobe’s director of strategic alliances, added that once a process becomes automated, we often assume that performance has improved simply because the function operates more quickly than it did before. This is not always the case. According to Bacastow, performance metrics, which themselves could be automated, will play a major role in establishing human trust in the machine learning technology of the future.
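Bacastow’s point, that speed is not evidence of quality and that metrics themselves can be automated, can be made concrete with a minimal sketch. The detections and ground truth below are invented for illustration.

```python
# Hedged sketch: automate the performance check instead of assuming
# faster means better. All image IDs here are made up.
ground_truth = {"img1", "img2", "img3", "img4"}  # images truly containing the object
detections = {"img1", "img2", "img5"}            # what the automated detector flagged

true_pos = len(detections & ground_truth)
precision = true_pos / len(detections)   # share of flags that were correct
recall = true_pos / len(ground_truth)    # share of real objects that were found

print(f"precision={precision:.2f} recall={recall:.2f}")
# A detector can run arbitrarily fast and still miss half the real
# objects; only a measured recall reveals that.
```

Wiring a check like this into the pipeline itself is the kind of automated performance metric Bacastow suggests will underpin human trust in the technology.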