Open Letter Calls for Superintelligence Development Ban

[Image: Several AI applications displayed on a smartphone screen. Credit: Philip Dulian/Picture Alliance/Getty Images]

The Spin

Establishment-critical narrative

Superintelligence poses an existential threat that demands immediate action to prevent human extinction. Current AI systems already exhibit dangerous behaviors, such as deception and manipulation, that researchers can't fully control or understand. The rapid pace of AI development, driven by competitive pressures, is creating systems that could subjugate or even eliminate humanity. This Pandora's box must be kept firmly shut before it is too late.

Pro-establishment narrative

AI development follows predictable patterns: limits on knowledge, not raw computing power, prevent any single system from achieving world-dominating superintelligence. AI progress requires extensive trial-and-error experimentation that can't be simulated digitally, ensuring humans remain essential partners rather than obsolete obstacles. These realities make existential AI scenarios nothing more than implausible fearmongering.

Metaculus Prediction

There is a 5% chance that the U.S. and China will sign a formal, verifiable bilateral treaty or accord specifically limiting AGI development by Dec. 31, 2030, according to the Metaculus prediction community.



© 2025 Improve the News Foundation. All rights reserved.