As experts warn that AI-generated images, audio and video could influence the US presidential election this fall, OpenAI is releasing a tool designed to detect content created by its own popular image generator, DALL-E, The New York Times reports.
According to OpenAI, the tool is just a small part of what will be needed to combat deepfakes in the coming months and years.
The company will share its new deepfake detector with a small group of disinformation researchers so they can test the tool in real-world situations and help identify ways to improve it.
“This is the impetus for new research,” said Sandhini Agarwal, an OpenAI researcher who focuses on security and policy.
OpenAI said the detector can correctly identify 98.8% of images generated by DALL-E 3, the latest version of the image generator.
But the company noted that its tool is not designed to detect images created by other popular generators such as Midjourney and Stability AI.
OpenAI also said it is developing watermarks for AI-generated audio so it can be instantly identified. The company hopes these watermarks will be difficult to remove.