YouTube Moves to Protect Journalists and Leaders from AI Deepfakes
YouTube is taking another step toward tackling AI impersonation on its platform. The company has announced that it is expanding its likeness detection tool to a pilot group of journalists, government officials, and political candidates.
The feature was first introduced last year for creators in the YouTube Partner Program. Now, as AI-generated content becomes more common online, the platform appears to be widening access to those who often find themselves at the center of public discourse.
The system works much like YouTube’s long-running Content ID technology. However, instead of identifying copyrighted audio or video, the tool scans AI-generated content for a person’s likeness. If the system detects a potential match, the affected individual can review the content and request its removal if it violates YouTube’s privacy policies.
This comes at a time when online platforms across the industry are labeling AI-generated content to address the growing problem of deepfakes, in which someone’s face or voice may be used without consent. Even so, YouTube says detection alone does not automatically result in removal. The company notes that it still evaluates requests carefully to balance privacy protections with freedom of expression.
That includes protecting forms of content such as satire, parody, or commentary that may involve public figures. As of now, the feature is being rolled out to a limited group to ensure it works effectively for people in high-visibility roles. Participants must verify their identity before enrolling.
YouTube says the information used for verification is only used to power the feature and confirm identities. The company also clarified that the data will not be used to train Google’s generative AI models.
Article feature image source: Unsplash/Zulfugar Karimov