Google Tests Watermarks to Identify AI Images


The Facts

  • Google's DeepMind is testing its new artificial intelligence (AI) tool, dubbed SynthID, which it says will be able to label images that have been generated by AI. The labels, however, will be invisible to the human eye so they don't spoil the picture.

  • The watermark is embedded directly in the image's pixels, but DeepMind CEO Demis Hassabis says it won't change the "quality" or "experience" of the image. He added that it can also withstand edits that might otherwise erase it, such as cropping or resizing. A simplified sketch of how pixel-level watermarking works follows below.
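
DeepMind has not published how SynthID works internally, so the following is only a rough, hypothetical illustration of the general technique behind invisible watermarks: a faint, keyed pseudorandom pattern is added to the pixel values (too subtle to see) and later detected statistically. The function names and the NumPy-based approach are illustrative assumptions, not Google's implementation; real systems such as SynthID rely on learned neural embeddings to stay detectable after cropping and resizing.

```python
# Minimal sketch of an invisible, keyed pixel-level watermark.
# NOT Google's SynthID algorithm (which is unpublished) -- just the
# classic idea of adding a faint pseudorandom pattern and detecting
# it later by correlation.

import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a low-amplitude +/-1 pattern derived from `key` to the pixels."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    watermarked = image.astype(np.float64) + strength * pattern
    return np.clip(watermarked, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Correlate the image with the keyed pattern; a score near the
    embedding strength suggests the watermark is present, near 0 not."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    centered = image.astype(np.float64) - image.mean()
    return float((centered * pattern).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random noise as a stand-in for a real photo.
    original = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    marked = embed_watermark(original, key=42)

    print("score, unmarked image:", round(detect_watermark(original, key=42), 3))
    print("score, marked image:  ", round(detect_watermark(marked, key=42), 3))
```

In this toy version the pattern sits directly in the pixel values, so heavy edits such as cropping or resizing would weaken the signal; surviving those edits is precisely what separates a production system like SynthID from a sketch like this.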


The Spin

Narrative A

Google understands that pictures truly are worth a thousand words, which is why it's working to build a firewall between false images made by bad actors and the end users they're manipulating. The tech giant also understands the importance of not altering the style and beauty of creators' images, which makes this invisible watermark an elegant answer to one of tech's most pressing problems.

Narrative B

While watermarks will help combat AI misuse, questions remain about their efficacy and about how the public will interpret them. Most people still think of a watermark as a company's logo in the bottom corner of a picture, so Google and the other AI companies must communicate more clearly what these new watermarks are for and how they work. Furthermore, even sophisticated watermarks remain vulnerable to alteration, something that must be addressed before the world can trust Big Tech to moderate this issue.

