DeepMind works with Google Cloud to watermark AI-generated images.
Image Credits: Google
In partnership with Google Cloud, Google is launching a tool from DeepMind, Google's AI research arm, for watermarking and identifying AI-generated images. However, only images created with Google's own image-generating model are supported.
The tool, called SynthID, is available in beta for select customers of Vertex AI, Google's platform for building AI apps and models. It embeds a digital watermark directly into an image's pixels, making it imperceptible to the human eye but detectable by an algorithm. Notably, SynthID only supports Imagen, Google's text-to-image model, which is available exclusively on Vertex AI.
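DeepMind has not published how SynthID's embedding actually works. As a rough intuition for what a pixel-level watermark that is invisible to the eye but recoverable by software can look like, here is a toy sketch using classic least-significant-bit (LSB) watermarking — a deliberately simple stand-in technique, not SynthID's method, and all function names are hypothetical:

```python
# Toy illustration only: SynthID's real embedding scheme is not public.
# LSB watermarking hides one bit in the lowest bit of each pixel value,
# changing each pixel by at most 1/255 -- imperceptible to the eye, but
# trivially recoverable by an algorithm that knows where to look.

def embed_watermark(pixels, watermark_bits):
    """Hide one watermark bit in the LSB of each 8-bit pixel value."""
    marked = list(pixels)
    for i, bit in enumerate(watermark_bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the lowest bit
    return marked

def extract_watermark(pixels, n_bits):
    """Read the watermark back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

# Example: a tiny "image" (grayscale values) and an 8-bit watermark.
image = [200, 113, 54, 255, 0, 87, 142, 33]
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(image, mark)
assert extract_watermark(stamped, 8) == mark
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))  # visually identical
```

Unlike this toy scheme, which is destroyed by almost any edit, SynthID is designed to survive common transformations, which is why it relies on trained models rather than a fixed bit pattern.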
Google previously announced plans to embed metadata to flag visual content created by generative AI models. SynthID is clearly a step beyond that.
"While generative AI unlocks tremendous creative potential, it also poses new risks, such as the possibility that creators may intentionally or unintentionally spread false information," DeepMind wrote in a blog post. "The ability to identify AI-generated content is critical to informing people when they're interacting with generated media, and to preventing the spread of misinformation."
Image Credits: DeepMind
DeepMind claims that SynthID, which was developed and refined in partnership with Google Research (Google's R&D division), survives modifications such as adding filters or changing colors. The tool uses two AI models — one for watermarking and one for identification — that were trained together on a "diverse" set of images, DeepMind says.
SynthID can't identify watermarked images with 100% confidence. Instead, the tool distinguishes between cases where an image possibly contains a watermark and cases where it very likely contains one.
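The tiered verdict described above can be pictured as a detector that maps a raw confidence score to one of a few labels rather than a yes/no answer. The sketch below is purely illustrative — SynthID's real detector, its score range, thresholds, and tier names are not public, so every value here is an assumption:

```python
# Hypothetical sketch of a tiered watermark verdict. The score scale,
# cutoff values and tier labels are invented for illustration; they are
# NOT SynthID's actual thresholds, which have not been published.

def classify_watermark(score: float) -> str:
    """Map a detector's raw confidence score in [0.0, 1.0] to a verdict tier."""
    if score >= 0.9:
        return "watermark highly likely"
    if score >= 0.5:
        return "watermark possibly present"
    return "no watermark detected"

print(classify_watermark(0.97))  # -> watermark highly likely
print(classify_watermark(0.62))  # -> watermark possibly present
print(classify_watermark(0.10))  # -> no watermark detected
```

Reporting graded confidence instead of a binary answer lets users weigh the evidence themselves, which matters when the detector cannot guarantee certainty.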
"SynthID isn't foolproof against extreme image manipulations, but it does provide a promising technical approach for empowering people and organizations to work with AI-generated content responsibly," DeepMind writes in the blog post.
Watermarking techniques for generative art aren't new. French startup Imatag, founded in 2020, offers a watermarking tool that it claims, like SynthID, isn't affected by resizing, cropping, editing or compressing images. Another firm, Steg.AI, uses AI models to apply watermarks that survive resizing and other edits.
But pressure is mounting on tech firms to provide a clear way to signal that content was generated by AI.
Recently, China's Cyberspace Administration issued regulations requiring generative AI vendors to mark generated content — including from text and image generators — without affecting users' experience. And in recent U.S. Senate committee hearings, Senator Kyrsten Sinema (I-AZ) emphasized the need for transparency in generative AI, including through the use of watermarks.
At its annual Build conference in May, Microsoft committed to watermarking AI-generated images and videos "using cryptographic methods." Elsewhere, Shutterstock and generative AI startup Midjourney have adopted guidelines for embedding a marker indicating that content was created by a generative AI tool. And OpenAI's DALL-E 2, a text-to-image tool, places a small watermark in the bottom-right corner of the images it generates.
But so far, a common watermarking standard — both for embedding watermarks and for identifying them — has proven elusive.
SynthID, like these other technologies, won't be useful for any image generator other than Imagen — at least not in its current form. DeepMind says it's considering making SynthID available to third parties in the near future. But whether third parties — particularly those developing open source AI image generators, which lack many of the guardrails of generators gated behind APIs — will adopt the tech is another matter entirely.