AI firms ‘unprepared’ for dangers of building human-level systems, report warns
Guardian · 3 days
These studies reveal a concerning reality: AI companies are rushing toward superintelligence without establishing fundamental safety guardrails. The industry's own admission that AGI could arrive within years, combined with their D-grade existential safety planning, represents an unacceptable gamble with humanity's future.
The AI industry operates in a competitive environment where safety measures must strike a balance between innovation and responsibility. Companies like Anthropic and OpenAI have made meaningful progress in safety frameworks and risk assessment, demonstrating that responsible development is achievable while maintaining technological advancement.