On May 20, Google launched a verification portal called SynthID Detector to help users identify content generated with artificial intelligence (AI). Users upload a file, and the site checks whether it contains the SynthID watermark that is automatically embedded in media generated by Google’s AI tools. For photos, the site can also highlight areas that were likely altered by AI. Google recently added SynthID detection to Google Photos, which flags photos that have been edited with the company’s Magic Editor. The new portal performs similar detection on a wider range of media, including text and images generated by Gemini, videos generated by Veo, and audio generated by Lyria. Media generated with Google’s AI tools is automatically embedded with an invisible SynthID watermark, which is designed to withstand basic digital manipulation and to remain detectable even after the media is shared on social networks or messaging apps.
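Google has not published a public API for the portal, so the snippet below is only a minimal sketch of what such an upload-and-check workflow could look like. The endpoint URL, the "file" form field, and the response fields ("watermark_detected", "altered_regions") are hypothetical assumptions for illustration, not part of Google's actual service.

```python
# Hypothetical sketch of a SynthID-style detection workflow.
# The endpoint, form field, and response schema below are assumptions
# for illustration only; Google has not published an API for the portal.
import requests

DETECTOR_URL = "https://example.com/synthid-detector/check"  # hypothetical endpoint


def check_for_watermark(path: str) -> None:
    """Upload a media file and report whether a SynthID watermark was found."""
    with open(path, "rb") as f:
        resp = requests.post(DETECTOR_URL, files={"file": f}, timeout=30)
    resp.raise_for_status()
    result = resp.json()

    if result.get("watermark_detected"):  # assumed response field
        print(f"{path}: SynthID watermark detected")
        for region in result.get("altered_regions", []):  # assumed response field
            print(f"  likely AI-altered area: {region}")
    else:
        # Absence of a watermark does not prove the file is human-made;
        # tools that never embed SynthID produce no signal to detect.
        print(f"{path}: no SynthID watermark found")


if __name__ == "__main__":
    check_for_watermark("holiday_photo.jpg")
```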
Limits on what can be detected

This portal is an important first step toward protecting users from AI-generated misinformation such as deepfakes and toward distinguishing original works from synthetic media. However, it cannot detect content generated by tools that do not embed SynthID watermarks, such as ChatGPT. Google is working with external partners to expand the use of SynthID beyond its own tools; for example, it announced a partnership with NVIDIA in March this year. The company also announced a new partnership with the content verification company GetReal, which plans to add SynthID detection capabilities to its verification tools.
Limits of digital watermarking

Digital watermarks alone, however, cannot defeat deepfakes and AI-generated misinformation. Open-source AI tools will continue to exist, and it will be difficult to force them to embed watermarks. And while it may still feel easy to spot AI-generated content today, that will not necessarily remain the case. As generative AI becomes more sophisticated, tools like SynthID are likely to be crucial for distinguishing human-created content from AI output. Google’s SynthID Detector portal is already open to a select group of testers, with a waiting list for media professionals and researchers.
SOURCE: Yahoo