Meta Platforms, which runs Facebook and Instagram, has asked advertisers to disclose digitally created or altered political ads, including through artificial intelligence.
Under the new rules, which take effect worldwide in 2024, Meta will penalize advertisers who fail to disclose ads that depict a person saying or doing things they never said or did, alter footage of a real event, or show realistic-looking people or events that never occurred.
Not only should tech companies review AI-generated images and videos, but governments around the globe should enact laws to prevent this insidious content from polarizing society even further. Until that happens, however, everyone must learn to carefully analyze any content they see online before sharing it widely.
While Meta isn't calling for an outright ban on deepfake images, requiring disclaimers is a slippery slope that could lead to forced labels on other content, such as satire. Twisting what people say and doctoring images to fit a narrative is a decades-old issue; just because AI exists doesn't mean we should lose our right to use it as a form of protected speech.