Google declares war on deepfakes
As artificial intelligence (AI) technology continues to advance, so too does the potential for its misuse. One area of concern is the use of AI-generated images to create deepfakes, which are manipulated videos or images that make it appear as if someone is saying or doing something they never actually did.
To help combat the misuse of AI-generated images, Google has launched SynthID, a tool that embeds an invisible watermark into images created by AI.
This watermark can then be used to identify and verify the authenticity of an image.
What is Google SynthID?
Google SynthID, developed by Google DeepMind, is a tool that embeds an invisible watermark directly into the pixels of images created by AI. That watermark can later be detected to help determine whether an image is AI-generated and verify its provenance.
SynthID works by using two deep learning models that are trained together: one embeds the watermark and one detects it. Rather than adding a visible mark or metadata, the watermark is woven directly into the pixel values of the image.
Because the two models are optimised jointly, the embedding model learns to make changes that are imperceptible to the human eye, while the detection model learns to recognise those changes reliably.
The resulting watermark is tied to Google's own image-generation models and, according to Google DeepMind, remains detectable even after common edits such as cropping, resizing, adding filters, or lossy compression.
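The exact models behind SynthID are not public, so the snippet below is only a minimal sketch of the general idea of pixel-level invisible watermarking, not SynthID's actual algorithm: a faint pattern (here, a fixed pseudo-random array standing in for a learned watermark) is added to the pixels at an amplitude too small to notice, and a detector that knows the pattern checks for its statistical signature. The `embed_watermark` and `detect_watermark` functions and the correlation test are illustrative assumptions.

```python
import numpy as np

# Toy illustration only: SynthID's real embedding and detection models are
# neural networks trained together by Google DeepMind and are not public.
# A fixed pseudo-random pattern stands in for the learned watermark here.
rng = np.random.default_rng(seed=42)          # shared "secret" between embedder and detector
PATTERN = rng.standard_normal((256, 256, 3))  # one watermark value per pixel and channel

def embed_watermark(image: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """Nudge each pixel by a couple of intensity levels -- far too little to see."""
    marked = image.astype(np.float64) + strength * PATTERN
    return np.clip(np.rint(marked), 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, threshold: float = 0.01) -> bool:
    """Correlate the pixels with the known pattern; a high score means the mark is present."""
    centered = image.astype(np.float64) - image.mean()
    score = np.sum(centered * PATTERN) / (np.linalg.norm(centered) * np.linalg.norm(PATTERN))
    return score > threshold

# A stand-in for an "AI-generated" image: random pixels.
original = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
marked = embed_watermark(original)
print(detect_watermark(original), detect_watermark(marked))  # expected: False True
```

In SynthID itself the mark is produced and read by the trained models rather than a fixed pattern, which is why, unlike this naive sketch, it is difficult to remove or reproduce without access to Google's detector.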
See how Google SynthID identifies AI-generated content in Google DeepMind's YouTube video below.
But why should anyone consider using Google SynthID? A few reasons:
- It is invisible to the naked eye, so it does not interfere with the appearance of the image
- It is very difficult to remove or tamper with
- It can be used to identify and verify the authenticity of an image
- It can be used to track the origin and usage of an image
If you are interested in learning more about SynthID, you can visit the Google DeepMind blog post. You can also sign up for the SynthID beta program to get early access to the tool.