NATIONAL HARBOR, Maryland — Artificial intelligence used by the US military has piloted small surveillance drones in special operations forces missions and aided Ukraine in its war against Russia. It tracks soldiers’ fitness, predicts when Air Force planes need maintenance and helps keep an eye on rivals in space.
Now the Pentagon plans to deploy thousands of relatively cheap, expendable, AI-based autonomous vehicles by 2026 to keep pace with China. The ambitious initiative — called Replicator — aims to “make progress on the too-slow shift of U.S. military innovation toward platforms that are small, smart, cheap and numerous,” Deputy Defense Secretary Kathleen Hicks said in August.
While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions about which AI technology is mature and reliable enough to deploy – including on weaponized systems.
There is little disagreement among scientists, industry experts and Pentagon officials that the US will have fully autonomous lethal weapons within the next few years. And while officials insist humans will always be in control, experts say advances in the speed of data processing and communication between machines will inevitably relegate humans to supervisory roles.
This is especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them – and neither China, Russia, Iran, India nor Pakistan has signed a US-initiated pledge to use military AI responsibly.
It is unclear whether the Pentagon is currently formally assessing each fully autonomous lethal weapon system for deployment, as required by a 2012 directive. A Pentagon spokeswoman would not say.
Replicator highlights enormous technological and personnel challenges facing the Pentagon’s procurement and development as the AI revolution promises to transform the way wars are fought.
“The Defense Department is struggling to adopt AI developments from the latest breakthrough in machine learning,” said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.
The Pentagon’s portfolio has more than 800 AI-related, unclassified projects, most of which are still in the testing phase. Typically, machine learning and neural networks help people gain insights and create efficiencies.
“The AI we have now at the Department of Defense is being heavily deployed and empowering people,” said Missy Cummings, director of George Mason University’s robotics center and a former Navy fighter pilot. “There is no AI that walks around on its own. People use it to better understand the fog of war.”
One domain where AI-enabled tools detect potential threats is space, the newest frontier in military competition.
China is considering using AI, including on satellites, to “make decisions about who is and is not an adversary,” Lisa Costa, chief technology and innovation officer for the US Space Force, told an online conference this month.
The US wants to keep pace.
An operational prototype called Machina, used by Space Force, autonomously monitors more than 40,000 objects in space and orchestrates thousands of data collections every night with a global telescope network.
Machina’s algorithms task telescope sensors; computer vision and large language models tell them which objects to track; and AI choreographs the process, drawing directly on astrodynamics and physics data sets, Col. Wallace “Rhet” Turnbull of Space Systems Command told a conference in August.
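Machina’s internals are not public, but the nightly tasking problem it solves – deciding which of tens of thousands of objects each telescope should observe – can be pictured with a toy scheduler. In the minimal Python sketch below, the data model, scoring weights and greedy assignment are invented assumptions, not the actual system:

    # Hypothetical sketch of nightly telescope tasking; not Machina's actual design.
    from dataclasses import dataclass

    @dataclass
    class SpaceObject:
        name: str
        priority: float         # mission-assigned importance, 0..1 (invented)
        staleness_hours: float  # hours since last observation

    @dataclass
    class Telescope:
        site: str
        slots: int              # observation slots available tonight

    def score(obj: SpaceObject) -> float:
        # Favor high-priority objects whose state estimates are going stale.
        return obj.priority * (1.0 + obj.staleness_hours / 24.0)

    def task_sensors(objects, telescopes):
        """Greedily assign the highest-scoring objects to available slots."""
        queue = sorted(objects, key=score, reverse=True)
        plan = []
        for scope in telescopes:
            for _ in range(scope.slots):
                if not queue:
                    return plan
                plan.append((scope.site, queue.pop(0).name))
        return plan

    objects = [SpaceObject("SAT-A", 0.9, 30.0), SpaceObject("DEBRIS-1", 0.2, 5.0),
               SpaceObject("SAT-B", 0.7, 2.0)]
    telescopes = [Telescope("Site-1", 2), Telescope("Site-2", 1)]
    for site, name in task_sensors(objects, telescopes):
        print(f"{site} -> {name}")

A real scheduler would also have to respect visibility windows, weather and sensor geometry, which this sketch deliberately ignores.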
Another AI project at Space Force analyzes radar data to detect impending missile launches from adversaries, he said.
Elsewhere, AI’s predictive powers are helping the Air Force keep its fleet aloft, anticipating the maintenance needs of more than 2,600 aircraft, including B-1 bombers and Blackhawk helicopters.
Machine learning models identify potential failures dozens of hours before they happen, said Tom Siebel, CEO of Silicon Valley-based C3 AI, which has the contract. C3’s technology also models missile trajectories for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.
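C3 AI has not published its pipeline, but the underlying pattern – training a classifier on sensor telemetry to flag components likely to fail within some horizon – is standard practice. The following self-contained Python sketch runs on synthetic data; the features, labels and model choice are assumptions made for illustration:

    # Toy predictive-maintenance classifier on synthetic telemetry.
    # Features, labels and model choice are illustrative assumptions only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    # Synthetic telemetry: vibration RMS, oil temperature (C), cycles since overhaul.
    X = np.column_stack([
        rng.normal(1.0, 0.3, n),
        rng.normal(80.0, 10.0, n),
        rng.uniform(0.0, 500.0, n),
    ])
    # Synthetic ground truth: failure risk grows with vibration and cycle count.
    risk = 0.8 * X[:, 0] + 0.004 * X[:, 2] - 1.5
    y = (risk + rng.normal(0.0, 0.3, n) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")

    # Flag aircraft for inspection when predicted failure probability is high.
    probs = model.predict_proba(X_te)[:, 1]
    print("aircraft flagged for inspection:", int((probs > 0.7).sum()))

In practice, the hard parts tend to be obtaining labeled failure data and wiring predictions into maintenance workflows, not the model itself.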
Among the health-related efforts is a pilot project that will assess the fitness of the Army’s entire Third Infantry Division: more than 13,000 soldiers. Predictive models and AI help reduce injuries and improve performance, said Maj. Matt Visser.
In Ukraine, AI provided by the Pentagon and its NATO allies is helping thwart Russian aggression.
NATO allies share intelligence from data collected by satellites, drones and people, some of which is merged with software from US contractor Palantir. Some of the data comes from Maven, the Pentagon’s groundbreaking AI project now largely managed by the National Geospatial-Intelligence Agency, say officials, including retired Air Force Gen. Jack Shanahan, the Pentagon’s inaugural AI director.
Maven started in 2017 as an effort to process video footage from drones in the Middle East – spurred by US Special Operations forces fighting ISIS and Al-Qaeda – and now collects and analyzes a wide range of sensor and people-related data.
AI has also helped the U.S.-founded Security Assistance Group-Ukraine organize logistics for military aid from a 40-nation coalition, Pentagon officials say.
To survive on the battlefield today, military units must be small, mostly invisible, and move quickly, because exponentially growing networks of sensors can see anyone “anywhere in the world at any time,” then-Chairman of the Joint Chiefs Gen. Mark Milley noted in a June address. “And what you can see, you can shoot.”
To more quickly connect warfighters, the Pentagon has prioritized the development of interwoven combat networks – called Joint All-Domain Command and Control – to automate the processing of optical, infrared, radar and other data across the armed forces. But the challenge is enormous and involves bureaucracy.
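One concrete sub-problem inside any such network is sensor fusion: merging position reports on the same target from optical, infrared and radar sensors into a single track. The Python sketch below applies textbook inverse-variance weighting to made-up numbers; it illustrates the concept rather than describing any fielded system:

    # Textbook inverse-variance fusion of multi-sensor position reports.
    # The sensors, noise figures and coordinates are invented for illustration.
    import numpy as np

    def fuse(reports):
        """Combine (position, variance) reports into one weighted estimate."""
        positions = np.array([pos for pos, _ in reports], dtype=float)
        weights = np.array([1.0 / var for _, var in reports])
        estimate = (weights[:, None] * positions).sum(axis=0) / weights.sum()
        variance = 1.0 / weights.sum()  # fused estimate is tighter than any input
        return estimate, variance

    # One target reported by three sensor types (x, y in km).
    reports = [
        ([10.2, 4.9], 0.04),  # optical: most precise
        ([10.5, 5.3], 0.25),  # infrared: noisiest
        ([10.1, 5.0], 0.09),  # radar
    ]
    estimate, variance = fuse(reports)
    print(f"fused position: {estimate.round(2)}, variance: {variance:.3f}")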
Christian Brose, a former staff director of the Senate Armed Services Committee now at the defense technology company Anduril, is among military reform advocates who nevertheless believe they can “win here to some extent.”
“The argument may become less about whether this is the right thing to do, and more about how we actually do it – and on the fast timelines required,” he says. Brose’s 2020 book, “The Kill Chain,” argues for an urgent overhaul to keep pace with China in the race to develop smarter and cheaper networked weapons systems.
With that goal in mind, the US military is working hard on “human-machine teaming.” Dozens of unmanned aerial and maritime vehicles are currently monitoring Iranian activities. U.S. Marines and special forces are also using Anduril’s autonomous Ghost minicopter, sensor towers and counter-drone technology to protect U.S. forces.
Industry advances in computer vision have been essential. Shield AI makes software that lets drones operate without GPS, communications or even remote pilots. It is the key to the company’s Nova, a quadcopter that US special operations units have used in conflict zones to scout buildings.
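Shield AI does not disclose its navigation stack, but one classic ingredient of GPS-denied flight is estimating motion from camera imagery alone. The self-contained Python sketch below recovers a simulated camera’s frame-to-frame drift using FFT phase correlation, a standard computer-vision technique; the synthetic terrain and pixel shifts are invented for illustration:

    # Self-contained sketch of vision-based drift estimation via phase correlation.
    # The synthetic "terrain" and pixel shifts are invented for illustration.
    import numpy as np

    def estimate_shift(frame_a, frame_b):
        """Estimate (dy, dx) translation between frames by FFT phase correlation."""
        F = np.fft.fft2(frame_a)
        G = np.fft.fft2(frame_b)
        cross = np.conj(F) * G
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        h, w = frame_a.shape
        # Wrap shifts larger than half the frame into negative offsets.
        if dy > h // 2: dy -= h
        if dx > w // 2: dx -= w
        return int(dy), int(dx)

    rng = np.random.default_rng(1)
    terrain = rng.random((256, 256))                   # synthetic ground texture
    frame1 = terrain
    frame2 = np.roll(terrain, (7, -12), axis=(0, 1))   # camera moved between frames
    print("estimated drift:", estimate_shift(frame1, frame2))  # -> (7, -12)

A real system would convert pixel shifts to ground distance using altitude and camera geometry, and fuse the result with inertial measurements.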
On the horizon: The Air Force’s “loyal wingman” program aims to pair piloted aircraft with autonomous aircraft. For example, an F-16 pilot could send out drones to reconnoiter, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.
The “loyal wingman” timeline doesn’t quite match that of Replicator, which many consider overly ambitious. The Pentagon’s vagueness about Replicator, meanwhile, may be partly intended to keep rivals in the dark, though planners may also still be deciding on its features and mission goals, says Paul Scharre, a military AI expert and author of “Four Battlegrounds.”
Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among companies vying for contracts.
Nathan Michael, chief technology officer at Shield AI, estimates that within a year the company will have an autonomous swarm of at least three V-BAT aerial drones ready. The US military currently uses the V-BAT – without an AI mind – on Navy ships, in counter-narcotics missions and in support of Marine Expeditionary Units, the company says.
It will take some time before larger swarms can be deployed reliably, Michael said. “It’s all crawl, walk, run – unless you’re setting yourself up for failure.”
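That “crawl, walk, run” framing mirrors how swarm behaviors are typically built from simple decentralized rules. The toy Python sketch below shows one such building block – a consensus step in which each aircraft steers toward the centroid of the others, tightening the formation without a central controller. The rule and gain are textbook examples, not Shield AI’s algorithms:

    # Toy decentralized consensus step, one building block of swarm autonomy.
    # The rule and gain are textbook examples, not Shield AI's algorithms.
    import numpy as np

    positions = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 9.0]])  # three aircraft

    def consensus_step(pos, gain=0.2):
        """Each agent moves a fraction of the way toward the mean of the others."""
        new = pos.copy()
        for i in range(len(pos)):
            others = np.delete(pos, i, axis=0)
            new[i] += gain * (others.mean(axis=0) - pos[i])
        return new

    for _ in range(20):
        positions = consensus_step(positions)
    print("formation spread:", positions.std(axis=0).round(4))  # near zero: converged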
The only weapons systems that Shanahan, the Pentagon’s inaugural AI chief, currently trusts to operate autonomously are wholly defensive, such as ship-based Phalanx anti-missile systems. He is less concerned about autonomous weapons making decisions on their own than about systems that do not work as advertised or that kill non-combatants or friendly forces.
Craig Martell, the department’s current chief digital and AI officer, is determined not to let that happen.
“Regardless of the autonomy of the system, there will always be a responsible agent who understands the limitations of the system, has been properly trained with the system, has legitimate confidence about when and where it can be deployed – and who will always take responsibility,” said Martell, who previously led machine learning at LinkedIn and Lyft. “That will never change.”
As for when AI will be reliable enough for lethal autonomy, Martell said there is no point in generalizing. For example, Martell relies on his car’s adaptive cruise control, but not on the technology that prevents him from changing lanes. “As a responsible officer, I would only use that in very limited situations,” he said. “Now extrapolate that to the military.”
Martell’s office evaluates potential generative AI use cases – it has a special task force for that – but focuses more on testing and evaluating AI in development.
A pressing challenge, says Jane Pinelis, chief AI engineer at Johns Hopkins University’s Applied Physics Lab and former head of AI assurance in Martell’s office, is recruiting and retaining the talent needed to test AI technology. The Pentagon cannot compete on salary: computer science PhDs with AI-related skills can earn more than the military’s top generals and admirals.
Testing and evaluation standards are also immature, as a recent National Academy of Sciences report on Air Force AI points out.
Could this mean the US will one day, under pressure, field autonomous weapons that don’t fully pass muster?
“We still assume that we have the time to do this as rigorously and as carefully as possible,” Pinelis said. “I think if we’re not quite ready and it’s time to take action, someone will be forced to make a decision.”