According to WIRED magazine, OpenAI, the developer of ChatGPT, has dissolved its "superalignment team" created last year to prevent artificial intelligence (AI) from going rogue.
Instead of a standalone entity, the team will reportedly be embedded across OpenAI's research efforts to help the firm realize its safety goals.
OpenAI has a strong AI safety track record, demonstrated by its detailed internal studies of its own products. In one such study, the company asked both regular users and experts to prompt ChatGPT for instructions on creating a biothreat; the chatbot's answers offered little beyond what an ordinary internet search would turn up, thanks to the guardrails OpenAI had built into it.
OpenAI is gutting its safety team because CEO Sam Altman and his investors at Microsoft want to ship new products quickly. Altman cares more about soliciting chip-making investments from Saudi Arabia than about retaining the former colleagues and friends dedicated to keeping humanity safe. He aims to create superintelligence, whether it helps or hurts people.