Steven Adler, a former safety researcher at OpenAI, announced his departure from the company this past November after four years of leading safety-related research and programs, spanning both product launches and longer-term AI systems.
In a series of recent social media posts, Adler expressed deep concerns about the rapid development of artificial general intelligence (AGI), describing it as a "very risky gamble" with potentially catastrophic consequences for humanity's future.
He emphasized that no laboratory currently has a solution to AI alignment — the process of ensuring AI systems work toward human goals and values rather than against them.
Critics of the current trajectory argue that the rapid advancement of AI poses an existential threat to humanity, with companies prioritizing speed over safety in the race to develop AGI. On this view, development is proceeding without adequate safety regulations or alignment solutions, and competitive pressure forces even responsible companies to accelerate dangerously.
Others counter that AI development drives technological progress and innovation, creating healthy competition that benefits society. On this view, the emergence of new competitors like DeepSeek invigorates the industry and pushes it forward, while concerns about AI safety are being addressed through ongoing research and development.