Building a Battlefield AI

By Tad Vezner

Nidhal Bouaynaya (M.S. EE, CE ’02) had reservations about artificial intelligence. It went against everything she was comfortable with as an engineer who preferred things that were clear and predictable.

“Coming from a background where the clarity and predictability of mathematical models were key, I found AI to be somewhat unsettling,” Bouaynaya explains. “In engineering and mathematics, the reasons for a system’s failure are usually identifiable and logical. AI is a different story; it operates on principles that can sometimes lead to unpredictable outcomes without clear explanations.”

Despite these initial reservations, Bouaynaya couldn’t ignore the undeniable success of AI.

“I witnessed AI’s performance surpassing that of state-of-the-art models,” she notes.

And so Bouaynaya started to make strides in AI research, trying to fix what she thought was broken. She wanted a way to peek under the hood of those algorithms, or at least get some idea of how confident an AI was in its answers.

Her research efforts in trustworthy AI were initially funded by the National Science Foundation and subsequently attracted significant military funding, enabling her to develop reliable AI applications for the Army.

The project, for which she has just been awarded $8.5 million in total funding, involves accelerating the development of a combat simulation system in an immersive virtual reality environment that uses AI to sense its surroundings and recommend responses to its users.

In her words, she is in the process of creating “secure, immersive, and dynamic mixed-reality environments, with futuristic threats and engagement scenarios, aimed at enhancing the operational assessment of forthcoming gunner turret systems, thereby expediting their advancement.

“And picture this,” Bouaynaya elaborates, imagining a soldier nestled within a vehicle’s gunner turret. “They’re faced with a crucial decision: What’s the significance of this [AI] output? Or should they trust their instincts instead?”

It all comes down to trust in AI.

And trust in AI—as well as an AI’s measurable trust in itself—is exactly what Bouaynaya has been trying to pin down for years.

Born and raised in Tunisia, Bouaynaya has spent a lifetime traveling. She completed her undergraduate studies in Paris at the Lycée Louis-le-Grand, followed by an engineering diploma from the École Nationale Supérieure de l’Électronique et de ses Applications (ENSEA)—earned concurrently with her electrical and computer engineering degrees from Illinois Institute of Technology through an exchange program. She then earned her doctorate in engineering from the University of Illinois Chicago.

After a six-year stint as a professor at the University of Arkansas, she accepted a faculty position at Rowan University’s Henry M. Rowan College of Engineering in 2013. She’s now both the college’s associate dean for research and graduate studies and a professor of electrical and computer engineering.

“Because of her background in mathematics as well as engineering, she has this unique ability to take theoretical, complex mathematical ideas and then use them in solving practical problems,” says Robi Polikar, head of Rowan’s electrical and computer engineering department.

Despite her background in mathematics, Bouaynaya gravitated toward AI, focusing first on resolving an internal conflict.

“How can I trust an AI when I cannot understand its failure mode or quantify its responses?” she asks.

“I cannot accept a system that simply states, ‘It’s going to snow tomorrow.’ It doesn’t mean anything without a certain level of confidence,” Bouaynaya adds. “It’s an algorithm; even if it’s 99 percent accurate, it’s bound to err eventually. The problem is we don’t know when it’s going to make a mistake. And we cannot do any of that analysis, because it’s a black box.”

With funding from NSF, Bouaynaya started to study how “confident” AIs were in their answers. At first glance, it appeared that the more wrong they were, the higher the probability they assigned to being right. Kind of like humans, in some regards.

“Imagine you have a really smart friend who learns a lot of things from a textbook,” Bouaynaya says. “Sometimes, they learn so much that they start to memorize every detail in the book, even things that are not very important. When you ask them a question about something they’ve never seen before, they might still act like they know the answer, just because they’re so used to remembering everything from the book. This is similar to what happens with complex computer models.”
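
The article stops short of the math, but the “confidence” a classifier reports is typically its raw output scores passed through a softmax function, and the overconfidence problem Bouaynaya describes is easy to sketch. Below is a minimal Python illustration with invented scores; it is not her code, only a demonstration of how a network can be confidently wrong:

```python
import numpy as np

def softmax(scores):
    """Convert a network's raw output scores (logits) into probabilities."""
    e = np.exp(scores - scores.max())  # subtract the max for numerical stability
    return e / e.sum()

# Invented raw scores for a three-class classifier.
# On familiar data, the top probability is a reasonable report of confidence:
familiar = softmax(np.array([4.0, 1.0, 0.5]))
print(familiar.max())  # ~0.93

# On an input unlike anything seen in training, an overfit network can still
# emit one very large score, and softmax dutifully reports near-certainty:
unfamiliar = softmax(np.array([9.0, 1.0, 0.5]))
print(unfamiliar.max())  # ~0.999 -- confidently wrong
```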

So she altered the training of AIs so that the systems became aware of their “confidence” when faced with incorrect data, fostering the development of self-aware AI models. With the altered training, they began to report lower certainty in wrong answers.

When data was maliciously altered to deceive the system (for instance, when a few pixels of a picture were changed to make it hard to identify), the AI started assigning a lower probability to answers it thought were right. Bouaynaya also created a second measurable metric, called uncertainty, computed separately rather than by simply converting raw scores into probabilities, and it went through the roof in such instances.

The additional metric allowed users to gauge more accurately how much trust they could put in the AI’s answers.
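
The article does not spell out how that second number is computed. One common way to obtain a separate uncertainty, in the spirit of the Bayesian neural networks Bouaynaya’s group works with, is to run a stochastic model several times on the same input and treat the disagreement between passes as the uncertainty. The sketch below is an illustration under that assumption, not her implementation; the function name and the logit samples are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence_and_uncertainty(logit_samples):
    """logit_samples: raw scores from several stochastic forward passes
    (e.g. a dropout-enabled or Bayesian network run 20 times on one input).
    Returns both metrics the article describes."""
    e = np.exp(logit_samples - logit_samples.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)  # softmax for each pass
    confidence = probs.mean(axis=0).max()     # averaged probability of the top class
    uncertainty = probs.std(axis=0).max()     # how much the passes disagree
    return confidence, uncertainty

# Clean input: every pass lands near the same answer, so uncertainty stays low.
clean = np.array([4.0, 1.0, 0.5]) + rng.normal(0, 0.2, size=(20, 3))

# Perturbed input (a few altered "pixels"): the passes scatter, and the
# uncertainty metric spikes even while the averaged probability still looks plausible.
attacked = np.array([2.0, 1.8, 1.7]) + rng.normal(0, 1.5, size=(20, 3))

print(confidence_and_uncertainty(clean))     # high confidence, low uncertainty
print(confidence_and_uncertainty(attacked))  # lower confidence, high uncertainty
```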

“Once they were trained, they became what I call ‘self aware,’” Bouaynaya says. “They truly comprehend the data. As the attack intensity escalates, so does their uncertainty. That confidence [or lack thereof] is learned during training.”

In high-stakes scenarios where soldiers must make split-second, life-or-death decisions, the reliability of AI becomes paramount, Bouaynaya says.

“In such moments, having AI that not only provides answers, but also quantifies its certainty can be the difference between success and failure, between safety and peril,” she says. “Knowing the level of confidence in AI’s responses empowers soldiers to make informed judgments, allowing them to trust the technology as a valuable ally in their mission-critical tasks.”