On Monday, UN Secretary-General António Guterres stated he was "favourable [sic] to the idea" that a global watchdog, similar to the International Atomic Energy Agency (IAEA), should be founded to monitor artificial intelligence (AI) development, a proposal put forward by AI industry leaders.
Guterres made the remarks while speaking at the launch of a new UN disinformation policy, noting the potential risks AI poses to democracy and human rights.
The IAEA is not the model AI luminaries should follow if they are serious about the risks of artificial intelligence. Multilateral cooperation is a slow process and would not be able to respond effectively to technology moving at such breakneck speed. Indeed, nuclear armament increased dramatically in the first decade of the IAEA's existence. The onus for AI safety is on the developers themselves, who cannot shrug off this burden onto others. AI developers must work with each other and with governments to protect humanity from AI risks.
While international organizations are far from perfect, they are our best chance to get ahead of the worst consequences of unchecked AI development. With such fierce global competition in the digital world, a patchwork, country-by-country model simply would not suffice. The risks of AI are comparable to those of nuclear war and infectious disease and could shake our world to its core. Countries around the world are treating this issue with the gravity it deserves as they get the ball rolling on international guidelines.