Stephen Johnson
Senior Staff Writer
Key Takeaways
  1. YouTube introduced likeness-detection tools to help creators remove AI-generated deepfake videos.
  2. This feature is only one part of YouTube's broader effort to control AI-generated content.

YouTube is about to get less scammy. The video-sharing platform today rolled out likeness-detection technology designed to identify AI-generated content featuring fake versions of YouTube creators' faces and voices. For now, the program is open only to eligible creators in the YouTube Partner Program.
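YouTube hasn't published how its detection works under the hood, but likeness-matching systems generally come down to comparing embeddings: faces and voices are mapped to numeric vectors, and a video is flagged when its vectors score close to a creator's reference vectors. Here's a minimal toy sketch of that idea; the `embed` stand-in, the vector size, and the threshold are all illustrative assumptions, not anything YouTube has disclosed.

```python
# Toy sketch of embedding-based likeness matching. In a real system,
# `embed` would be a trained face- or voice-embedding model; here it is
# a pseudo-random stand-in so the example runs on its own.
import numpy as np

def embed(media: bytes) -> np.ndarray:
    """Map media to a unit-length 512-dim vector (stand-in for a real model)."""
    rng = np.random.default_rng(abs(hash(media)) % 2**32)
    vec = rng.standard_normal(512)
    return vec / np.linalg.norm(vec)

def is_likeness_match(reference: bytes, candidate: bytes,
                      threshold: float = 0.85) -> bool:
    """Flag a candidate clip whose embedding is close to the creator's reference.

    The 0.85 cosine-similarity threshold is illustrative; a production
    system would tune it against labeled match/non-match data.
    """
    similarity = float(np.dot(embed(reference), embed(candidate)))
    return similarity >= threshold
```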

Creators who join the program upload a photo and a voice recording of themselves, along with proof of identity. Once verified, they can review any detected videos and request removal, either through YouTube's privacy guidelines or via a copyright request. There's also an option to archive a flagged video, to prevent sneaky deletions.
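To make that flow concrete, here's a rough sketch of how the creator-side process could be modeled in code. Every name here is hypothetical; YouTube hasn't published an API for this, and the removal routes simply mirror the two options described above.

```python
# Hypothetical model of the enrollment-and-takedown flow described above.
# None of these names correspond to a real YouTube API.
from dataclasses import dataclass, field
from enum import Enum

class RemovalRoute(Enum):
    PRIVACY_COMPLAINT = "privacy guidelines"
    COPYRIGHT_REQUEST = "copyright request"

@dataclass
class DetectedVideo:
    video_id: str
    archived: bool = False  # an archived copy survives a "sneaky deletion"

@dataclass
class LikenessEnrollment:
    photo: bytes            # picture of the creator
    voice_sample: bytes     # voice recording
    identity_proof: bytes   # official ID document
    detected: list[DetectedVideo] = field(default_factory=list)

    def request_removal(self, video: DetectedVideo, route: RemovalRoute,
                        archive_first: bool = True) -> str:
        # Archiving before filing preserves evidence if the uploader deletes.
        if archive_first:
            video.archived = True
        return f"Filed {route.value} removal for video {video.video_id}"
```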

In the short term, this will likely slow the growth of YouTube videos featuring influencers endorsing products or ideas they've never heard of. And with new AI tools making realistic video fakes possible in minutes—I made this one of myself endorsing a product in about 3 minutes—this kind of protection may soon be something everyone uses.

YouTube's larger AI-control program

The likeness-identification program is part of YouTube's broader effort to deal with the glut of AI-generated content on its site. Earlier this year, the company began requiring creators to label "realistic" AI videos and updated its monetization policies to cut revenue from the kind of low-effort, inauthentic content that AI often generates.

The potential danger of large-scale identity verification

Of course, proving you're you isn't risk-free. Verifying your identity with any company means uploading a driver's license, passport, or other official ID, or handing over biometric data, and tech companies often fail to keep that private information out of the hands of bad actors.

YouTube’s new system might fight deepfakes and make the platform less spammy, but it also adds to the growing library of personal data people are trusting tech companies to guard.