
AI Firms Urged to Assess Risks to Prevent Loss of Control

Above: Max Tegmark, president of the Future of Life Institute, speaks at Web Summit in Lisbon, Portugal, on November 12, 2024. Image copyright: Rita Franca/NurPhoto via Getty Images

The Spin

Establishment-critical narrative

Increasingly sophisticated AI systems, and the rising possibility that their human handlers will lose control of them, pose a very real danger. Given that this is an existential threat on par with nuclear war or another pandemic, AI companies must calculate the probability that their models could spiral out of control in order to properly understand the jeopardy those models pose. Only then will there be the political will to impose the desperately needed safety standards.

Pro-establishment narrative

The claim that AI poses an extinction-level threat is greatly exaggerated. While not impossible, such a scenario remains highly improbable, especially when compared to far more likely existential risks like nuclear war, climate change, or future pandemics. Alarmist rhetoric not only dissuades people from adopting this revolutionary technology but also distracts from the real, present dangers of AI, such as its role in spreading disinformation and enabling fraud.

