
Reports: US Military Used Anthropic's Claude AI in Iran Strikes

Does AI in warfare enable precision strikes, or does it create machine-driven decisions that result in indiscriminate killing?
Above: Pete Hegseth at U.S. Central Command headquarters at MacDill Air Force Base in Tampa on March 5. Image credit: Octavio Jones/AFP/Getty Images

The Spin

Establishment-critical narrative

The Pentagon's use of Claude AI in Iran strikes raises horrifying questions about machine-driven warfare, especially after 165 elementary students died when a girls' school was obliterated. Military officials refuse to say whether AI suggested targeting Shajareh Tayyebeh, echoing Israel's Lavender system that treated algorithmic decisions as human judgment in Gaza. This marks a brutal new era where it's unclear if humans alone decide where to deploy deadly arsenals.

Pro-establishment narrative

AI enables unprecedented precision in military operations, processing vast intelligence at machine speed to identify legitimate targets and avoid civilian casualties far better than the indiscriminate city-bombing campaigns of World War II. The Pentagon's integration of Claude highlights technological advancement in warfare that could reduce collateral damage through superior data analysis and targeting accuracy. Proper oversight and transparency matter, but AI offers genuine potential to make military strikes more precise and humane.


© 2026 Improve the News Foundation. All rights reserved. Version 6.18.0
