As autonomous warfare accelerates in 2025, critics warn that spiralling costs, unresolved ethical questions, and diminished human oversight could prolong global instability.

The global defence sector is undergoing a seismic transformation driven by autonomous systems, AI‑enhanced decision tools, and increasingly interconnected weapons platforms. But as major powers rush to modernise their arsenals, a chorus of critics is warning that the pace of technological change is outstripping political, legal, and ethical oversight—creating new risks that could entrench conflict rather than deter it.
In recent months, a series of high‑profile military procurement announcements has underscored just how rapidly automation is reshaping the battlefield. The United States approved expanded use of AI‑enabled targeting systems across multiple combatant commands; the European Union launched its first multinational drone‑swarm defence initiative; and several Asia‑Pacific states accelerated development of autonomous maritime patrol vessels. Supporters frame these moves as necessary steps in a new era of strategic competition, in which threats evolve too quickly for traditional military planning.
Yet defence experts caution that speed may be coming at the expense of control.
“This is not an incremental shift,” says Dr. Lena Hartmann, a researcher at the Geneva Centre for Security Governance. “We are effectively delegating critical stages of warfare—from detection to engagement—to increasingly opaque algorithms. The risk is not only unintended escalation but an erosion of human agency.”
Cost Spirals and the Race to Keep Up
One of the most visible pressure points is cost. While automation is often marketed as a budget‑saving innovation, the reality is far more complex. High‑performance sensors, modular drone fleets, and the computing infrastructure required to support real‑time battlefield analytics can carry price tags that rival traditional weapons systems.
In NATO’s latest defence expenditure report, tech‑centric modernisation accounted for more than 40 percent of new spending commitments—an unprecedented share. Emerging economies face even steeper financial hurdles, creating what analysts call a “technological stratification” of global security, where the nations least able to afford advanced systems risk falling behind in deterrence capabilities.
“This is not just an arms race—it’s a software race,” notes defence economist Abdul Rahman Farouk. “And software races never really end. They require constant investment just to maintain parity.”
Ethical Grey Zones
The ethical implications are equally fraught. Autonomous targeting capabilities remain controversial, even as militaries insist that humans retain the authority to approve lethal force. But critics argue that the distinction is weakening as algorithms increasingly shape the options presented to commanders.
Uncertainty about how AI systems interpret battlefield data raises questions about accountability. Who is responsible if an autonomous aerial vehicle misidentifies a target? How can civilians be protected when engagement decisions rely on models trained on imperfect or incomplete datasets?
In July, a coalition of humanitarian organisations renewed calls for a global treaty banning fully autonomous weapons—an initiative that has stalled for years due to lack of consensus among major powers. The coalition’s latest statement warned that “without urgent political action, the de facto deployment of autonomous weapons will precede any international legal framework governing their use.”
Speed vs. Stability
The drive for ever‑faster decision cycles—often described as “compressing the kill chain”—has long been a priority in military strategy. But AI is turbocharging that concept, enabling machines to conduct surveillance, threat assessment, and tactical responses at speeds no human operator can match.
Supporters argue this gives militaries a decisive defensive edge. But detractors warn that when decisions unfold in milliseconds, opportunities for diplomatic intervention narrow dramatically.
“The fear is a scenario where autonomous systems misinterpret routine manoeuvres as hostile acts and respond before humans can intervene,” says former Australian defence official Mei Tan. “Once competing systems are operating at machine speed, crisis stability becomes fragile.”
A Fragmented Global Landscape
Compounding the problem is the lack of international agreement on standards for AI in warfare. Some nations have adopted strict human‑in‑the‑loop policies; others emphasise flexibility and rapid response. Private contractors, too, hold enormous influence in developing battlefield algorithms, introducing corporate incentives into decisions once reserved for states.
The result, analysts say, is a patchwork of norms ill‑suited to the interconnected nature of autonomous conflict. As supply chains globalise and dual‑use technologies proliferate, regulating the spread of advanced defence AI becomes increasingly difficult.
Oversight Playing Catch‑Up
Governments are beginning to respond. Several legislatures are debating new transparency mandates for military AI procurement, and the UN Group of Governmental Experts has scheduled expanded talks on autonomous weapons for early 2026. But critics argue that political institutions are structurally too slow to match the exponential pace of technological adoption.
“By the time regulations are drafted, tested, negotiated, and enacted, the underlying technologies will have changed,” warns Hartmann. “We risk codifying rules for a world that no longer exists.”
The Stakes Ahead
Despite the concerns, few experts expect the trend toward automation to reverse. Geopolitical tensions—from contested sea lanes to cyber skirmishes—are driving states to seek any strategic advantage available. For many leaders, advanced defence technology promises greater precision, lower casualty rates, and enhanced deterrence.
But whether these benefits materialise may depend on how effectively oversight mechanisms evolve in the coming years. Without robust accountability, transparency, and shared international norms, the critics warn, autonomous warfare could entrench instability rather than prevent it.
As 2025 draws to a close, the world stands at a crossroads: embrace the promise of technological defence innovation, or confront the profound dilemmas it creates. Most likely, it will have to do both—simultaneously, urgently, and under intensifying global pressure.