If you’ve been watching a lot of YouTube lately, you’ve likely seen the rise of AI-generated content—from thumbnails and voiceovers to entire videos powered by artificial intelligence. YouTube has taken notice of this shift and is rolling out new tools to safeguard its creators against AI-based impersonation.
YouTube is expanding its Content ID system, which has long been used to flag copyright violations, to detect AI-generated singing voices that mimic real artists. This AI detection feature is still under development with YouTube’s partners and is slated for a 2025 launch. The goal is to protect creators from having their work replicated or altered by AI without their consent.
But the platform isn’t stopping at audio. YouTube is also working on systems to spot and manage AI-generated images and videos that mimic real people, namely deepfakes, though it hasn’t yet set a timeline for their release.
In a broader push to defend against AI misuse, YouTube is also addressing content scraping, where videos are used to train AI models without consent. This issue has come under scrutiny, particularly after reports that companies like Nvidia have scraped YouTube videos to train their AI systems—an act that could breach YouTube’s terms of service. While YouTube has promised to crack down on unauthorized scraping, it has yet to reveal exactly how it plans to enforce these measures.
As competition in the AI industry heats up, YouTube and Google are at the forefront, but their creators are increasingly worried that their likenesses could be stolen. Scraping tools built to train AI on publicly available content are easy to obtain and can run on relatively modest hardware, which raises the question of just how widespread AI impersonation could become.
Although YouTube is stepping up its defenses, there’s an interesting caveat. Its terms of service don’t prevent the platform or its parent company, Google, from using creators’ uploaded videos for their own AI purposes. And while creators are now required to disclose AI usage in their videos, a recent report revealed that YouTube let OpenAI scrape its content unchallenged, reportedly to avoid setting a legal precedent that could constrain its own future AI projects.