Although airlines stopped putting it in seatback pockets in 2015, longtime travelers might remember SkyMall, the offbeat in-flight catalog that peddled oddball wares such as chaise lounges for dogs, glow-in-the-dark toilet seats, and life-sized Stormtrooper suits. Thanks to SkyMall, the length of one’s flight could be measured just as easily in “oohs” and “ahs” as in miles or minutes.
During a keynote speech Tuesday at GEOINT 2019, The Honorable Dr. Lisa Porter suggested that artificial intelligence (AI) is a lot like the products in a SkyMall catalog: Because they sound so neat, you can’t help but want them. But do you actually need them? The truth is, not everyone does.
“AI is a bit of a shiny object … so sometimes we fall into that hammer-looking-for-a-nail problem,” said Porter, who is deputy under secretary of defense for research and engineering with the U.S. Department of Defense (DoD). “We have to be careful and recognize that not every problem is well suited to AI.”
Porter described how the research community both inside and outside DoD can advance AI in ways that maximize benefit over buzz. Step one, she said, is problem identification.
“Sometimes we’re not very good about describing problems,” explained Porter, who said succeeding with new technology requires “really digging into understanding the problem that you’re trying to solve [by] spending quality time with the end users who are currently addressing that problem.”
Along with a clear problem, AI demands good data.
“If you’ve decided that AI is really the appropriate approach to solving the problem, you also have to ask yourself … is it really possible to generate the type of AI data I’m going to need for the algorithms I want to deploy?”
Assuming it is, the next item on every AI shopping list should be explicit metrics, according to Porter. She offered as an example a search algorithm trained to identify cats versus one trained to identify missile launches. For the former, precision is paramount; you may not care if the algorithm returns every image of a cat, as long as every image it does return is, in fact, a cat. For the latter, recall is more important than precision; you may be willing to tolerate false positives in exchange for confidence that the algorithm will capture every image with a missile launch in it.
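Porter’s distinction maps directly onto the standard definitions of precision and recall. A minimal sketch in Python makes the trade-off concrete (the counts below are illustrative, not figures from the talk):

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from raw counts.

    precision = tp / (tp + fp): of everything the algorithm returned,
                                how much was actually correct.
    recall    = tp / (tp + fn): of everything that truly exists,
                                how much the algorithm found.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Cat search, tuned for precision: almost everything returned is a cat,
# even though many cat images are missed (high fn).
cat_p, cat_r = precision_recall(tp=90, fp=2, fn=40)

# Missile-launch detection, tuned for recall: every launch is captured
# (fn=0), at the cost of more false alarms (higher fp).
launch_p, launch_r = precision_recall(tp=50, fp=25, fn=0)
```

Choosing which of these numbers to optimize before training begins is exactly the kind of up-front metric definition Porter described.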
“These are the kinds of things you have to think about ahead of time when you’re defining metrics by which you’re going to improve and optimize your algorithms,” said Porter, adding that metrics are as important for making business cases as they are for improving algorithms. “At the end of the day, [users] have to be able to justify the cost of implementing your technology … and the only way they’re going to be able to do that is to show a quantitative measurement that exceeds the cost.”
Therein lies the shortcoming of most contemporary AI efforts, according to Porter: Their architects are so fixated on demonstrating new technology that they’ve yet to engineer systems to actually integrate it.
“A pilot is not adoption,” emphasized Porter. “The community must adhere to transparency so that we can get to reproducibility, so that we can get to trust. Because if we don’t understand these results are validated and reproducible, we at the DoD and in the national security community cannot use them.”
Along with explainable AI, Porter said the scientific community should be laser-focused on developing solutions to adversarial AI, reducing reliance on large training datasets, and diversifying GEOINT algorithms beyond image processing.
Porter grouped her priorities into two main categories: AI systems engineering and research.
“We’ve got to continue to focus on research—challenge ourselves to move beyond our comfort level,” she said. “Otherwise, we’re never going to get to a point where we’re going to truly rely on these algorithms in mission-critical applications.”