OpenAI has for the first time reported taking down Russian, Chinese, Iranian, and Israeli operations that used its AI tools, fake accounts, and fabricated media to manipulate public opinion.
The report revealed that OpenAI had banned accounts linked to five covert operations using its platform to generate multilingual propaganda, none of which gained significant traction.

In the report, released on Thursday, OpenAI said it had taken down Russian, Chinese, Iranian, and Israeli influence campaigns that allegedly used its artificial intelligence (AI) tools to manipulate public opinion.
Current reliance on AI companies to self-regulate, as seen with social media, is simply inadequate. AI enables rapid, large-scale dissemination of false content, undermining trust in institutions like democracy. Despite some state actions and federal efforts, there are no comprehensive laws to counteract these threats. Policymakers must enforce regulations to label AI-generated content, protect voters, particularly marginalized communities, and ensure public involvement in AI policy decisions to safeguard democracy from these emerging risks.
The world must avoid being overly restrictive in formulating AI regulations, as this could stifle innovation. A balanced, dynamic approach to assessing AI risks is key, with new high-risk technologies examined as they emerge. AI’s potential for public opinion manipulation and copyright violations must undoubtedly be curbed. At the same time, nations must tap its advantages and maximise its benefits for their people.