AI companies test their systems carefully, release them gradually, and monitor how they behave in the real world. This approach lets developers spot actual risks while keeping important safety rules in place. As companies, researchers, and governments continue to work together, the world will build better oversight that keeps AI both safe and useful.
If adding "please" or citing experts is enough to break safety controls, then those controls are nothing more than security theater. With AI risks clearly outweighing benefits and the technology advancing faster than society can safely adapt, deploying systems that anyone with basic persuasion skills can manipulate is a disaster waiting to happen.
According to the Metaculus prediction community, there is a 50% chance that the first general AI system will be devised, tested, and publicly announced by March 2033.