European Union officials announced on Friday that a provisional deal on world-leading comprehensive artificial intelligence (AI) regulations had been reached, reportedly establishing obligations for "high-risk" general-purpose AI systems and restricting the use of biometric systems.
Despite the provisional deal, the European Parliament must still formally vote on the act early next year, and the eventual law, which threatens harsh financial penalties for violations, will not come into full effect until 2025 at the earliest.
The EU has a moral duty to push the AI Act through as soon as possible to protect the common good and to address the risks posed by the absurd proposal to leave foundation models unregulated. Otherwise, one of the biggest scandals in its history will have taken place, with lobbyists having co-opted its institutions to promote non-European interests.
Despite its downsides, self-regulation remains the best policy course for AI, even compared with well-intentioned government regulation. The technology's complexity and rapid pace of change would make government-imposed guidelines challenging and costly to enforce. Additionally, countries that over-regulate may lag behind in the global AI race.