
Studies: Top AI Companies Have 'Unacceptable-Risk' Threshold

Above: Nvidia DGX Spark supercomputer motherboard. Image copyright: David Paul Morris/Bloomberg/Getty Images

The Spin

These studies reveal a concerning reality: AI companies are rushing toward superintelligence without establishing fundamental safety guardrails. The industry's admission that AGI could arrive within years, combined with their D-grade existential safety planning, represents an unacceptable gamble with humanity's future. Governments must step in to regulate the industry.

The AI industry operates in a competitive environment in which companies must balance innovation against responsibility. Companies like Anthropic and OpenAI have made meaningful progress on safety frameworks and risk assessment, demonstrating that responsible development can proceed alongside technological advancement.


