The way we kill now
AI, autonomous weapons, and the new global arms race
The current conflict between the Pentagon and Anthropic is taking place against the backdrop of a global arms race among major military powers to integrate artificial intelligence into warfare. On February 24, 2026, it was reported that Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until 5:01 PM Friday to accept unrestricted military use of Claude or face Defense Production Act compulsion and supply-chain blacklisting - a threat normally reserved for foreign adversaries like Huawei.
This confrontation was triggered by Claude’s reported use in the January 3 capture of Venezuelan President Nicolás Maduro, and it puts a sharp point on a fundamental question: who sets the rules for military AI?
Meanwhile, China fields DeepSeek-powered autonomous vehicles and drone swarms, Israel uses AI-based targeting at scale to target a statistically determined number of civilians in Gaza, Russia deploys AI-guided Lancet munitions in Ukraine, and non-state actors from ISIS to Mexican cartels adopt AI-enhanced drones at alarming speed. What follows is a detailed accounting of where each actor stands.
When viewed as a whole, three conclusions are clearly defensible:
commercial AI is being integrated into warfighting faster than governance can keep up
if we’re in an “AI war” in any sense of the word with anyone, then it’s with China; U.S./China competition is probably the primary dimension of the AI arms race in the years ahead
similar to nuclear proliferation, “AI proliferation” - the spread of AI-enabled weapons and methods of warfare to non-state actors - represents an ungoverned and extremely high-threat frontier
More importantly, it suggests that everything we’ve seen in the news these past few weeks will end up as a relatively small footnote in a broader story so huge it’s almost invisible: war is increasingly going to be fought not by people, but by thinking machines we create to kill for us.
The sooner we wrap our heads around that, and get Congress to pass laws regulating it, the better for everyone - not just Americans, but the entire world.
Operation Absolute Resolve and the “all lawful use” standoff
On January 3, 2026, U.S. Delta Force operators captured Venezuelan President Nicolás Maduro in Caracas in what the Pentagon designated “Operation Absolute Resolve.” The operation involved 150+ aircraft, suppression of Venezuelan air defenses, and disruption of communications and electricity. Maduro appeared at the Daniel Patrick Moynihan courthouse in New York on January 5 to face narco-terrorism and drug trafficking charges, per Fox News.
The Wall Street Journal and Axios reported in mid-February that Claude was deployed during the active operation through Palantir’s Maven Smart System on classified networks. Claude’s exact role remains unclear; some sources indicate AI-enabled targeting helped with strikes on multiple sites in Caracas. But what set off the Pentagon was what happened next: an Anthropic executive reportedly contacted a Palantir executive to ask whether Claude had been used in the operation. A senior administration official said this “caused real concerns across the Department of War,” interpreting it as potential disapproval. Anthropic denied making such inquiries or expressing concerns.
This incident accelerated a simmering dispute. Defense Secretary Hegseth’s January 9 AI strategy memorandum had already mandated that all DoD AI contracts incorporate “all lawful use” language within 180 days, explicitly rejecting company-imposed guardrails. Anthropic’s two red lines - no mass domestic surveillance of Americans and no fully autonomous weapons without meaningful human oversight - directly conflicted with this mandate.
The confrontation reached its apex on February 24, 2026, when Hegseth met Amodei at the Pentagon, flanked by Deputy Secretary Steve Feinberg, Under Secretary Emil Michael, and general counsel Earl Matthews. The meeting was described as “not warm and fuzzy at all” by one defense official, though another characterized it as “cordial” with no raised voices.
Hegseth presented two coercive options alongside contract termination:
Supply chain risk designation would effectively blacklist Anthropic. Any company holding military contracts would need to certify they don’t use Anthropic products. This designation is normally reserved for foreign adversaries - Huawei and Kaspersky are the precedents. Because eight of the ten largest U.S. companies use Anthropic’s products, the economic ripple effects would be enormous.
Defense Production Act invocation would compel Anthropic to provide Claude without restrictions. The DPA, a Korean War-era statute extended through September 2026, gives the president broad authority to direct private industry for national defense. Biden used the DPA’s Title VII (information-gathering) provisions for AI; Hegseth is threatening Title I - the core compulsion power - which legal scholars describe as “an enormous escalation.”
The deadline: 5:01 PM Friday, February 27. A Pentagon official told CNN the company must “get on board or not.”
Former DOJ-DOD liaison Katie Sweeten, speaking on CNN, identified a logical contradiction in the threats: “I would assume we don’t want to utilize the technology that is the supply chain risk, right?” You can’t simultaneously blacklist a company as dangerous and compel it to serve as critical infrastructure.
The path of least resistance here, such as it is, probably ends with an arrangement that lets the government continue to use Claude while offering Anthropic a face-saving - really, brand-saving - off-ramp.
Rozenshtein’s legal analysis draws the constitutional battle lines
Alan Z. Rozenshtein, associate professor of law at the University of Minnesota and senior editor at Lawfare, published “What the Defense Production Act Can and Can’t Do to Anthropic” on February 25, 2026 (Lawfare). It is the most rigorous extant legal analysis of the standoff.
Rozenshtein distinguishes two possible government demands:
Demand one: remove contractual usage-policy guardrails while leaving the model itself untouched - essentially a change to terms of service, not the product. For this, the government has “a real argument,” though it remains “genuinely contested.”
Demand two: compel Anthropic to retrain Claude to strip safety restrictions baked into model weights. This raises far harder legal questions. A retrained model “looks much more like a new product than dropping contractual restrictions does,” and the DPA’s authority to force a company to manufacture a product it doesn’t currently make is legally questionable.
Retraining also raises novel First Amendment issues. If model training decisions constitute editorial choices - a position with some legal support - then forcing Anthropic to retrain compels expression of values it rejects. Rozenshtein draws the closest analogy to the FBI’s 2015-2016 attempts to compel Apple to write custom software unlocking iPhones after San Bernardino; those attempts largely failed.
His core argument is that Congress should legislate rules for military AI rather than leaving it to ad hoc executive-company negotiations. This was his second Lawfare piece on the topic; his February 20 article, “Congress - Not the Pentagon or Anthropic - Should Set Military AI Rules,” laid the groundwork.
If nothing else, that’s the one take-home message from this article you should really, really remember.
America’s AI-first war machine
The Pentagon decided to lean into AI in a big way starting in 2017-2018 with the launch of Project Maven and the founding of the Joint Artificial Intelligence Center (everything is always “joint” in the post-9/11 era). Approximately 70% of all DARPA programs now involve AI, machine learning, or autonomy.
As a result, AI infrastructure has expanded dramatically since 2023. Four frontier AI labs - Anthropic, OpenAI, Google, and xAI - each hold $200 million prototype contracts awarded in mid-2025 by the Chief Digital and AI Office.
The operational layer is dominated by Palantir. Its $10 billion Army Enterprise Agreement (July 2025) consolidated 75 contracts into a single deal covering data integration, analytics, and AI tools across the Department of Defense. The Maven Smart System, now a Palantir commercial product, has a contract ceiling of nearly $1.3 billion after a $795 million increase in May 2025. Maven already processes intelligence “from the Joint Staff in the Pentagon to theater-level Combatant Commands around the world, including Stuttgart-based European Command,” and Palantir signed a NATO contract in April 2025 whose scope is not yet public but is reasonably anticipated to be one of the company’s more significant deals. The National Geospatial-Intelligence Agency, which manages Maven’s imagery-analysis component, announced in June 2025 that it had begun transmitting 100% machine-generated intelligence to combatant commanders. In combat, Maven has reportedly supported 85+ precision airstrikes in Iraq and Syria, located rocket launchers in Yemen, and provided Russian equipment positions to Ukrainian forces.
Anthropic’s Claude holds a unique position: it was the first frontier AI model operating on classified Pentagon networks, deployed through a November 2024 partnership with Palantir and AWS. This classified access makes Claude critical infrastructure for intelligence analysis and military operations. xAI’s Grok signed a classified-systems agreement on February 23, 2026, making it the second competitor to reach classified environments, but Claude’s operational head start is significant.
OpenAI won its $200 million Pentagon contract in June 2025 and launched “OpenAI for Government,” focusing on healthcare, acquisition data, and cyber defense according to its own website. ChatGPT is available on the Pentagon’s unclassified GenAI.mil platform, and Azure OpenAI received DISA authorization for secret classified information in April 2025, though a full classified deal remains incomplete.
xAI/Grok entered the defense market rapidly. Beyond its $200 million CDAO contract, “Grok for Government” will integrate into GenAI.mil for 3 million military and civilian personnel per Fox News. Crucially, xAI accepted the Pentagon’s “all lawful purposes” standard that Anthropic has refused. Senator Elizabeth Warren questioned the contract, noting xAI “came out of nowhere” and raising concerns about Elon Musk’s DOGE access creating unfair competitive advantage.
On autonomous weapons, the Replicator Initiative - launched in August 2023 with a $1 billion budget - has fielded hundreds of “attritable” (basically low-cost and expendable) autonomous systems, including Switchblade-600 loitering munitions and Anduril Ghost-X drones, though it fell short of its “multiple thousands” target, per Responsible Statecraft; the program was reorganized under a new Defense Autonomous Warfare Group focused on larger attack drones. DARPA’s ACE program achieved a milestone in April 2024 when an AI-piloted X-62A VISTA (a modified F-16) engaged in autonomous dogfighting with a human pilot. Its August 2025 successor, AIR, is reported to aim at giving F-16s tactical autonomy for beyond-visual-range missions.
The intelligence community has embedded AI across agencies. The CIA’s Office of Artificial Intelligence deploys models through a centralized platform, with its Open Source Enterprise using LLMs to process global news across 90+ languages in near-real-time. The DIA’s MARS system achieved full operational capability in 2025 for AI-assisted big-data analysis; meanwhile, the NSA integrates AI into SIGINT for speaker identification, machine translation, and pattern detection, and uses AI to identify hackers and assist cybersecurity investigators tracing Chinese cyber-attacks on U.S. critical infrastructure. CYBERCOM also reportedly stood up a dedicated AI Task Force in April 2024.
China and “intelligentized warfare”
China’s military AI ambitions operate on a scale matched only by the United States, and in some areas - particularly autonomous drone swarms - China may lead. The doctrine of “intelligentized warfare” (智能化战争) represents the third stage of PLA modernization after mechanization and informatization, with a target of integrated development by 2027.
DeepSeek has become the PLA’s preferred AI foundation. A Reuters investigation in October 2025 documented a dozen DeepSeek-related procurement tenders from PLA entities, versus only one referencing Alibaba’s Qwen. Norinco’s P60 autonomous military vehicle, unveiled in February 2025, runs DeepSeek models on Huawei Ascend chips for combat-support operations. Xi’an Technological University claimed a DeepSeek-powered system assessed 10,000 battlefield scenarios in 48 seconds - a task estimated to take human planners 48 hours. The U.S. State Department has stated that “DeepSeek has willingly provided, and will likely continue to provide, support to China’s military and intelligence operations”.
Chinese drone swarm development is aggressive. Chinese researchers have filed 930+ swarm-intelligence patents since 2022, compared to approximately 60 by U.S. engineers. The Swarm I and II systems can launch hundreds of drones under a single mission objective, reportedly designed to continue operating autonomously even when communications are jammed, with behavior modeled on animals “prioritising evasion and avoiding detection by more serious threats”. A February 3, 2026 piece in The Diplomat details PLA-linked research on lethal autonomous drone swarms for urban warfare, including potential Taiwan invasion scenarios.
China’s surveillance AI ecosystem represents the world’s most developed military-applicable surveillance infrastructure. The SkyNet (yup) program monitors through 700+ million cameras nationwide, making it one of the world’s largest monitoring networks. Sharp Eyes integrates public and private cameras with AI for facial recognition and predictive policing. In Xinjiang, Hikvision cameras and AI running on Nvidia chips screened all 23 million residents for “terrorism” potential using facial recognition, DNA collection, iris scanning, voice printing, and gait recognition, according to reporting from The China Project. Xinjiang security spending reached over $8 billion in 2017 alone, a tenfold increase from 2007. These technologies have direct military applicability and are exported to dozens of countries.
Chinese companies with military ties are extensive. CETC (state-owned) is the top-awarded entity in PLA AI procurement. Huawei’s Ascend chips and MindSpore framework are central to military AI, with the company listed on the DoD’s Section 1260H Chinese Military Companies list. SenseTime, sanctioned by the U.S. for Uyghur surveillance, led the creation of mandatory national facial recognition standards. Georgetown CSET’s September 2025 analysis of 2,857 AI-related PLA contracts identified 1,560 different organizations winning at least one contract, with ~75% being private firms founded after 2010 - evidence of the military-civil fusion strategy in practice.
In offensive cyber operations, China has achieved a landmark: Anthropic disclosed in November 2025 the first documented large-scale AI-orchestrated cyberattack, in which a Chinese state-sponsored group (GTG-1002) jailbroke Claude Code to target ~30 organizations. The AI autonomously executed 80-90% of the operation with minimal human intervention.
Russia’s deployment of AI in information warfare
Russia’s military AI doctrine assigns AI as a support function, not a replacement for human decision-making, per CSIS in February 2026. The limited state of tech available to it – due to sanctions – means that Russia’s military AI story is one of battlefield adaptation rather than technological leadership. Ranking 31st globally on the Tortoise Media AI Index, Russia has only 168 AI startups (versus 6,903 in the U.S.). Yet it has achieved meaningful results in specific domains.
The ZALA Lancet loitering munition is Russia’s premier AI-enabled weapon. By the end of 2024, Russia had launched over 2,800 Lancets with a 77.7% hit rate. The Lancet’s AI-driven autonomous targeting system, powered by an NVIDIA Jetson TX2 module, independently identifies, classifies, and prioritizes targets, reportedly displaying vehicle type names on its targeting display. Upgrades since 2022 have doubled flight time, extended strike radius from 40 to 70 km, and added electronic warfare resistance. A next-generation variant is anticipated with network-centric swarm capabilities. But this critical dependency on Western components (NVIDIA chips, U-Blox GPS modules, Czech AXI motors) leaves the program vulnerable to sanctions enforcement.
Despite these difficulties, Russia has committed heavily to military AI. The 2024 defense plan reportedly included a dedicated AI section with a separate budget line. Strategic Rocket Forces Commander Karakayev stated (in a 2023 European Leadership Network report at page 12) that AI-equipped robotic systems will be incorporated into all mobile and stationary strategic missile complexes by 2030.
Russian use of FPV drones is well-reported and apparently widespread, but Russia’s relative lack of AI advancement compared to other powers limits its reported deployment of genuinely autonomous drone systems. Even so, drones caused 70-80% of battlefield casualties in Ukraine as of August 2025, with unmanned systems conducting up to 80% of Russian fire missions. Russia has also reportedly tested the S-350 Vityaz air defense system in autonomous mode, detecting, tracking, and destroying a Ukrainian aircraft without human assistance in June 2023, and reportedly uses the “Svod” Tactical Situational Awareness Complex and “Glaz/Groza” software to convert drone footage into targeting data, compressing detection-to-impact time from hours to minutes (CSIS).
Russia’s most effective AI domain, by far, is information warfare. RUSI estimates Russia spends approximately $1 billion on information warfare but achieves disproportionate impact; AI is a force magnifier on an already sizeable national priority. Success stories in this domain are numerous and well documented:
The Pravda network published over 3.6 million articles in 2024 aimed at corrupting Western AI chatbots, a technique dubbed “LLM grooming.”
NewsGuard audits found AI chatbots repeat false Russian narratives about one-third of the time, per CEPA earlier this year.
Russia used Meliorator AI software (developed by RT/FSB) to create over 1,000 fake American social media profiles, according to the Center for Strategic and International Studies.
The Storm-1679 network used advanced deepfakes to impersonate ABC News, BBC, and POLITICO, including AI-generated voices of Tom Cruise.
Russia-China AI military cooperation is deepening but remains transactional. In December 2024, the Council on Foreign Relations reported that Putin instructed the government and Sberbank to “collaborate with China on technological R&D in AI.” Three months earlier, Sberbank (itself under sanctions) announced plans for joint research with DeepSeek and Qwen developers. Chinese factories provide Russia with hardware and AI software for UAV adaptations, and Russia has used Chinese parts to produce up to 2 million small tactical UAVs. However, the partnership is constrained by mutual distrust - Chinese cyber groups like Mustang Panda (yup) have been caught spying on Russian aerospace and defense firms, including nuclear submarine programs.
Iran’s AI drones and asymmetric cyber capabilities
Iran’s military AI strategy is fundamentally shaped by sanctions, limited resources, and the centrality of drones to its defense doctrine. The Shahed series has undergone dramatic AI upgrades, particularly through Russian battlefield modifications. Ukrainian intelligence recovered a downed Shahed-136 “MS series” variant in June 2025 containing an NVIDIA Jetson Orin minicomputer, infrared camera, and radio modem enabling AI-powered target recognition and autonomous terminal guidance in GPS-denied environments. These upgraded drones feature swarm coordination, thermal imaging, anti-spoofing navigation, and can reprioritize targets mid-flight.
Full article on Patreon, for subscribers, here.

