For the first time, OpenAI has reported taking down covert Russian, Chinese, Iranian, and Israeli operations that used its AI tools to manipulate public opinion through fake accounts and media.
The OpenAI report revealed that accounts tied to five covert operations were banned for using its platform to generate multilingual propaganda, none of which gained significant traction.
Current reliance on AI companies to self-regulate, as was the case with social media, is simply inadequate. AI enables the rapid, large-scale dissemination of false content, undermining trust in democratic institutions. Despite some state-level action and federal efforts, there are no comprehensive laws to counter these threats. Policymakers must enact and enforce regulations that require labelling of AI-generated content, protect voters, particularly those in marginalized communities, and ensure public involvement in AI policy decisions to safeguard democracy.
At the same time, the world must avoid being overly restrictive in formulating AI regulations, as excessive restriction could stifle innovation. A balanced, dynamic approach to assessing AI risks is key, with new high-risk technologies examined as they emerge. AI's potential for manipulating public opinion and violating copyright must certainly be curbed. But nations must also tap its advantages and maximise its benefits for their people.