AI In Warfighting: New Conflicts, And New Philosophies
It’s a time of war, as global markets and the global community react to a conflict that has disrupted supply chains and commodities markets.
What does AI have to do with it? Right now, there’s a great deal of work on automating defense applications, in the U.S. and around the world. It’s an area where AI seems to hold enormous potential, but one that also generates real fear, around lethal autonomous weapons and other high-stakes operations that worry people everywhere.
Here’s an excerpt from a DoD memo dated Jan. 9 of this year:
“In the national security domain, AI-enabled warfare and AI-enabled capability development will re-define the character of military affairs over the next decade. This transformation is a race - fueled by the accelerating pace of commercial AI innovation coming out of America's private sector.”
What does it mean to dominate in AI?
In April, I sat in on a panel discussion at the Imagination in Action event that we hold each year at MIT.
Erik Bethel of Mare Liberum; Nathan Michael, CTO of Shield AI; Rylan Hamilton, CEO and co-founder of Blue Water Autonomy; and Tucker Hamilton, author and former co-director of the MIT AI Accelerator, talked about war and the role of artificial intelligence.
The panel started with a clear-eyed assessment of the reality around asymmetric systems.
“Oil prices are spiking,” Bethel said. “Helium is spiking. You need helium to make semiconductors, urea, fertilizers. Well, the list goes on. But why are they trapped? Well, we can't send a destroyer through Hormuz, because (a missile) that costs 50,000 bucks can be launched from a garage anywhere in Iran and cause damage.”
“A much lower cost of systems can deliver incredible effects and outcomes,” Michael said, in agreement. “And so what you start to see is a massive cost asymmetry between those types of intelligent systems having meaningful effect, and the ability to counter those systems.”
Rylan Hamilton proposed a response to what he characterized as a long-term problem. “A $2 billion destroyer, which is the workhorse of the U.S. Navy, can't open the strait because of these asymmetric threats,” he said. “So we need a hybrid fleet, one that is full of both manned and unmanned craft, and we need to move a lot faster, because we're going to continue to have these dynamic threats, and to think that Iran, two weeks from now, will just go away, is a fool's errand.”
Tucker Hamilton had the following comment on the use of drones in the Russia-Ukraine war:
“I'm really curious as to how we continue to adopt the technology to have meaningful battlefield effects that allow more of an advantage than we're seeing right now on either side,” he said.
In response to questions about Shield’s work, Michael explained some of the complexities in this line of work.
“You have to not only be able to deploy edge-level intelligence models that are running, that are grounded by physics on constrained compute, on a variety of different platforms, but you have to be able to do that with traceability, auditability and an understanding of how changes in the underlying architectures of the AI running on the platforms translate to outcomes in the field,” he said.
“The speed of relevance and latency, I think, is very important,” he said, mentioning the contributions of firms like Lockheed and Raytheon. “The other thing I think that's very important is that the future of warfare may not be determined on the battlefield, but it may be determined on the factory floor, so whoever can scale and produce mass at an inexpensive price point is going to win.”
He also spoke to some of the unique challenges of building for the maritime domain:
“One of our customers’ key anxieties, when you build something that's hundreds and hundreds and hundreds of tons and is carrying important payloads across long distances, is that you just can't break down in the middle of the ocean,” he said, likening the principle of redundant design to what’s used in aerospace. “So one of the things we've done, which isn't the sexiest thing, is we focus a lot on reliability and on designing something that, from the keel up, will just work all the time.”
“AI is going to be able to recommend, it's going to be able to, of course, predict, it's going to be able to help prioritize. It's going to be able to go at machine speed in some situations, but when it comes to lethal force, escalation, strategic effect, it has to be a human decision - that has to be about accountability and judgment and trust.”
Reliability, he added, is also crucial in a “contested environment.”
“We cannot deploy fragile systems,” he said.
At the end of the talk, each of the four gave their visions of what war will look like in 10 years.
“War is terrible, and we want to avoid it at all costs,” Bethel said in opening his thoughts, adding that there are many different kinds of war, citing phenomena like fentanyl and TikTok.
He also came back to competitiveness:
“I mentioned earlier that I think the future of warfare is going to be determined at the factory floor, AI-enabled factories producing at scale,” he said. “It's been very sad to me to see chunks of the American manufacturing base get completely dismantled and get sent overseas.”
Michael agreed to look five years out, not ten, given the pace of change.
“I think we'll see fewer and fewer people in harm's way, in the battle space,” he said. “I think there will be an increasing level of separation, as we start to see, as was just called out, larger numbers of lower cost platforms that are achieving greater effect and outcomes. I think we'll see increasing levels, substantially increasing levels of intelligence deployed on the systems.”
He had this to say about keeping humans on the loop:
“We'll be able to deploy, to the point made earlier, our ability to guarantee the performance of the system and that it's making the right decisions, and elevating those decisions to humans on the loop at the right moments. We'll see humans engaging at higher levels.”
“I think 10 years out, 50% of our budget will be spent on software and drones,” Rylan Hamilton predicted.
Tucker Hamilton, for his part, talked about struggling with the changing nature of war, while reflecting on watching war films.
“I want us to get there where we can take people out of harm's way, but there's a part of me that thinks we need to be in harm's way, to understand the weight of warfare and understand our responsibility as societies to try to protect what is sacred, which is humanity.”
That’s a glimpse, from several experts, of where we stand with AI in defense. Write me a comment and let me know what you think.