OpenAI's deepfake detector can identify images generated with DALL-E 3

In OpenAI's tests, the tool correctly identified 98% of DALL-E 3 images


An AI-generated image of OpenAI's deepfake detector

AI gives us many opportunities to generate, enhance, and modify content. AI models are also evolving fast, and the content they produce keeps getting better. However, while artificial intelligence can be a great assistant, wrongdoers can use it to generate deepfakes. So, OpenAI decided to launch a deepfake detector.

Deepfakes are nothing new. They've been around for a while now. In the past, though, they weren't very convincing. Even when we had the tools to edit photos, videos, and audio, the process was slow and costly. AI removed those barriers: Sora, for example, can generate high-quality videos in minutes.

What can the deepfake detector from OpenAI do?

The deepfake detector from OpenAI can identify images generated with DALL-E 3 and flag whether AI was involved in creating them. According to OpenAI, the tool correctly identified about 98% of DALL-E 3 images, while incorrectly tagging roughly 0.5% of non-AI images as DALL-E 3 products.
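To put those numbers in perspective, here is a back-of-the-envelope Python sketch (not OpenAI's code) of what a 98% detection rate and a 0.5% false-positive rate mean in practice. The 1-in-100 prevalence figure is purely a hypothetical assumption for illustration.

# Rough sketch: how often is a flagged image actually DALL-E 3?
true_positive_rate = 0.98   # share of DALL-E 3 images correctly flagged (per OpenAI)
false_positive_rate = 0.005 # share of non-AI images wrongly flagged (per OpenAI)
prevalence = 0.01           # hypothetical: 1 in 100 scanned images is DALL-E 3

flagged_real = true_positive_rate * prevalence
flagged_false_alarm = false_positive_rate * (1 - prevalence)
precision = flagged_real / (flagged_real + flagged_false_alarm)

print(f"Share of flagged images that are actually DALL-E 3: {precision:.1%}")
# Prints roughly 66% - even a 0.5% false-positive rate matters
# when AI images are rare in the pool being scanned.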

OpenAI trained the deepfake image detector to spot content created with its own tools, so unfortunately it may not detect AI images generated with other tools. The classifier also tolerates common modifications such as cropping, compression, and saturation changes, but it is less reliable against other edits.

Do AI images have metadata?

Besides the deepfake detector, OpenAI has started adding C2PA metadata to images generated with DALL-E 3. On top of that, the company will add it to Sora. However, skilled threat actors can strip the metadata.
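For readers who want to inspect an image's provenance themselves, the sketch below uses the Content Authenticity Initiative's open-source c2patool CLI, which prints a C2PA manifest. The filename is a placeholder, and this only reads the metadata, so it tells you nothing about images whose metadata was stripped.

# Minimal sketch of checking C2PA metadata with c2patool
# (https://github.com/contentauth/c2patool); this is not OpenAI's
# detector - it only reads the provenance manifest, if one exists.
import json
import subprocess

def read_c2pa_manifest(path: str):
    """Return the image's C2PA manifest as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],     # prints the manifest report as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:  # no manifest found, or tool error
        return None
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("dalle3_sample.png")  # hypothetical file
if manifest is None:
    print("No C2PA metadata - it may have been stripped, or never added.")
else:
    print(json.dumps(manifest, indent=2))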

In addition, OpenAI has incorporated audio watermarks into content generated with its Voice Engine. However, like Sora, the voice model is still under development, and it will take a while before it becomes widely available.

Ultimately, the deepfake image detector from OpenAI will help us identify DALL-E 3 creations, and Sora creations once that model is released. According to OpenAI, the tool will be available to a limited number of testers, research labs, and research-oriented journalism nonprofits. If you think it can help you, you can apply to join the DALL-E detection program as a tester.

What are your thoughts? Are you going to apply? Let us know in the comments.

