
Open Letter Calls for Superintelligence Development Ban

Above: An AI logo during the Mobile World Congress 2025 in Barcelona, Spain, on March 5, 2025. Image copyright: Joan Cros/NurPhoto/Getty Images

The Spin

Superintelligence poses an existential threat that demands immediate action to prevent human extinction. Current AI systems already exhibit dangerous behaviors, such as deception and manipulation, that researchers can't fully control or understand. The rapid pace of AI development, driven by competitive pressures, is creating systems that could go on to subjugate or eliminate humanity. This Pandora's box must be kept firmly shut before it is too late.

AI development follows predictable patterns in which knowledge limitations, not raw computing power, prevent any single system from achieving world-dominating superintelligence. AI progress requires extensive trial-and-error experimentation that can't be simulated digitally, ensuring humans remain essential partners rather than obsolete obstacles. These realities make an existential AI scenario nothing more than implausible fearmongering.

Metaculus Prediction

There is a 5% chance that the U.S. and China will sign a formal, verifiable bilateral treaty or accord specifically limiting AGI development by Dec. 31, 2030, according to the Metaculus prediction community.


Disclosure of Potential Conflict of Interest

As a product of the Improve the News Foundation (ITN), Verity is primarily funded by the Future of Life Institute, an organization mentioned within this article.


© 2025 Improve the News Foundation. All rights reserved. Version 6.17.0
