US Military Reportedly Used Claude in Iran Strikes Despite Trump’s Ban

Published On: March 2, 2026

The intersection of national security, cutting-edge artificial intelligence, and political directive is often fraught with tension. A recent report highlighting the alleged deployment of Anthropic’s Claude AI by the U.S. military in Iran strikes, mere hours after a presidential ban, has sent ripples through both the cybersecurity and geopolitical communities. This scenario raises critical questions about command and control, the autonomy of AI in sensitive operations, and the implications of supply chain risks in advanced technological deployments.

The Alleged Deployment: Operation Epic Fury

According to reports, the U.S. Department of Defense (DoD) utilized Anthropic’s Claude AI during Operation Epic Fury, a joint offensive with Israel against Iran that reportedly took place on February 28, 2026. What makes this deployment particularly contentious is its timing: it occurred just hours after President Trump had designated Anthropic a national security “supply chain risk” and directed all federal agencies to cease using its AI systems. This direct defiance of, or perhaps disconnect from, the presidential mandate underscores a significant governance challenge.

Anthropic and the Supply Chain Risk Designation

President Trump’s designation of Anthropic as a national security supply chain risk points to broader anxieties surrounding the origins, development, and potential vulnerabilities of AI technologies. A “supply chain risk” in this context typically refers to the potential for foreign adversaries, or even untrusted third parties, to introduce malicious code, backdoors, or vulnerabilities at any stage of a technology product’s lifecycle. For AI models, this could involve compromised training data, biased algorithms, or hidden functionalities that could be exploited to undermine national interests or provide intelligence to adversaries. While the specific reasons for Anthropic’s designation remain under wraps, the general principle highlights the increasing scrutiny of AI providers and their geopolitical affiliations.

Implications for AI Governance and Military Ethics

The reported usage of Claude in a military strike, especially against a direct presidential order, brings several critical issues to the forefront:

  • Chain of Command and Autonomy: Who authorized the use of Claude after the ban? Does this suggest a breakdown in communication, or deliberate circumvention of orders within military intelligence operations?
  • AI in Lethal Operations: While the exact role of Claude in Operation Epic Fury is unclear, even its assistance in intelligence gathering or targeting decisions in a strike operation raises ethical concerns about the increasing integration of AI into military actions. What level of human oversight was present?
  • Operational Security (OPSEC): If Anthropic was deemed a supply chain risk, then using its AI in a sensitive operation like a military strike potentially exposes critical intelligence or operational details. This could compromise not only the mission but also national security.

The Broader Spectrum of AI Supply Chain Vulnerabilities

This incident is a stark reminder of the inherent vulnerabilities within the AI supply chain, which can extend far beyond traditional software and hardware risks. Some key areas of concern include:

  • Data Poisoning: Malicious actors introducing corrupted or biased data during the AI model’s training phase, leading to erroneous or manipulated outputs.
  • Model Backdoors: Covert mechanisms embedded within the AI model, allowing an attacker to trigger specific, undesirable behaviors under certain conditions.
  • Inferential Attacks: Extracting sensitive information from the model by analyzing its outputs, even without direct access to the training data.
  • Hardware Dependencies: Relying on foreign-manufactured AI accelerators or processors that could harbor hidden vulnerabilities or surveillance capabilities.

While Claude itself is not a vulnerability in the traditional sense, the reported circumstances surrounding its deployment underscore the systemic risks of running unvetted, or explicitly banned, AI tools in critical systems. A dedicated body of CVEs covering AI model exploitation is still emerging; flaws such as CVE-2023-38408 (a remote code execution vulnerability in OpenSSH’s forwarded ssh-agent, unrelated to Claude or to AI models) are a reminder that conventional supply chain weaknesses sit alongside the newer, model-specific attack surface.

Remediation Actions and Future Considerations

To mitigate such risks and prevent similar incidents, organizations, particularly in national security, must implement robust AI governance and cybersecurity frameworks:

  • Strict AI Procurement Policies: Establish clear guidelines for acquiring and deploying AI systems, including rigorous vetting processes for vendors and their supply chains.
  • Independent Security Audits: Commission third-party assessments of AI models for vulnerabilities, biases, and adherence to security protocols, independent of vendor claims.
  • Clear Command and Control: Reinforce unambiguous directives regarding the use of sanctioned technologies and ensure strict adherence across all operational levels.
  • “AI Red Teaming”: Proactively test AI systems for potential misuses, adversarial attacks, and unexpected behaviors in simulated environments to identify weaknesses before deployment.
  • Human-in-the-Loop Safeguards: Ensure that critical decision-making processes, especially in lethal operations, retain significant human oversight and intervention capabilities, even when AI provides recommendations.
  • Continuous Monitoring: Implement solutions to monitor the performance and behavior of deployed AI models for any anomalies that could indicate compromise or malfunction.
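The human-in-the-loop safeguard above can be made concrete as a routing rule: an AI recommendation is never executed directly, and anything lethal or low-confidence is escalated to a human operator. This is a minimal sketch under assumed policy thresholds (the `Recommendation` type, the 0.95 confidence floor, and the action names are all hypothetical, not an actual DoD workflow):

```python
# Hypothetical human-in-the-loop gate: AI output is a recommendation,
# not an action; lethal or low-confidence cases require human approval.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    lethal: bool
    confidence: float  # model's self-reported confidence, 0..1

def route(rec: Recommendation, confidence_floor: float = 0.95) -> str:
    """Return 'execute' only for non-lethal, high-confidence actions;
    everything else is escalated to a human operator."""
    if rec.lethal or rec.confidence < confidence_floor:
        return "escalate_to_human"
    return "execute"

print(route(Recommendation("log anomaly", lethal=False, confidence=0.99)))
print(route(Recommendation("strike target", lethal=True, confidence=0.99)))
print(route(Recommendation("flag contact", lethal=False, confidence=0.50)))
```

The design choice worth noting is that lethality overrides confidence: even a maximally confident model cannot bypass the human gate for a lethal action, which keeps the escalation rule auditable and independent of model calibration.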

Conclusion

The alleged deployment of Anthropic’s Claude AI by the U.S. military in direct contravention of a presidential ban on supply chain security grounds is a pivotal moment. It not only exposes potential fault lines in military-civilian command structures but also severely amplifies the conversation around AI governance, operational security, and the critical need for comprehensive vetting of advanced technologies. As AI becomes further embedded in operations of national importance, the imperative for stringent security protocols, clear ethical guidelines, and unwavering adherence to policy has never been more urgent.
