Google has confirmed that it is using videos from its YouTube platform to train artificial intelligence (AI) systems, including the Gemini model and its advanced video and audio generator, Veo 3.
The revelation, first reported by CNBC, has raised significant concerns among digital creators and intellectual property experts over transparency, consent, and the future of creative ownership.
According to CNBC, the company has relied on a portion of its extensive YouTube catalogue—estimated at around 20 billion videos—for training its AI tools. While Google insists that only a subset of videos is involved and that it honours agreements with creators and media firms, the company has not disclosed the exact volume or criteria used to select the training data.
A YouTube spokesperson said the platform “has always used YouTube content to make our products better” and that the introduction of generative AI has not changed this practice. The company maintains that it has implemented “robust protections” for creators, including tools for safeguarding their image and likeness. However, YouTube’s own terms of service grant it a broad, worldwide, royalty-free licence to use uploaded content—a provision that effectively permits such AI training without further approval from the creator.
Many creators interviewed by CNBC stated they had not been informed or consulted about their content being used to train AI models. While YouTube claims it has previously communicated this information, several leading creators and intellectual property professionals said they were unaware of the practice.
The scale of the training effort is considerable. Experts estimate that even 1% of YouTube’s library represents more than 2.3 billion minutes of video—over 40 times the volume of training data used by competing platforms. The commercial implications are also significant. Outputs from Veo 3, which was unveiled in May, include entirely AI-generated cinematic sequences and audio. Google positions Veo 3 as one of the most advanced systems of its kind currently in development.
Critics warn that such tools could displace the very creators whose content was used to develop them. Luke Arrigoni, CEO of Loti, which focuses on protecting digital identities, said the model effectively creates “a synthetic version, a poor facsimile” of original creative work. He questioned the fairness of this approach, given the lack of credit, compensation, or the opportunity to opt out.
Vermillio, a firm specialising in content protection, has developed a detection tool called Trace ID to measure overlap between AI-generated content and original videos. In one cited case, a comparison between a video by YouTube creator Brodie Moss and Veo 3 output returned a Trace ID score of 71 for visuals and over 90 for audio, indicating a high degree of similarity between the original and the AI-generated material. Vermillio CEO Dan Neely said creators increasingly encounter fake versions of themselves and their work, a trend he expects to accelerate as generative tools grow more powerful.
Although Google offers indemnification for its AI products, agreeing to assume legal liability in the event of copyright disputes, the absence of a meaningful opt-out mechanism has frustrated creators. YouTube does allow creators to block third-party AI firms such as Amazon or Nvidia from training on their content, but no such option exists to stop Google from using videos for its own models.
In December, YouTube entered a partnership with Creative Artists Agency to develop systems enabling high-profile individuals to manage the use of their likeness in AI-generated media. The platform also offers takedown tools for creators who believe their image has been misused, though Arrigoni claims these systems have proven unreliable for many users.
Legal and political pressure is also mounting. In the United States, recent hearings have drawn attention to the unregulated use of AI to replicate individuals’ likenesses. Senator Josh Hawley warned that, without enforceable rights, creators and artists stand to lose control over their work and their identities.
Some creators see potential opportunities. Sam Beres, a YouTube creator with over 10 million subscribers, described the technology as “friendly competition”, acknowledging its inevitability while expressing hope for positive use cases.
Nonetheless, the broader sentiment is one of unease. Google’s handling of the situation—its selective disclosure, lack of opt-out provisions, and the growing capabilities of models like Veo 3—has led to renewed calls for legal frameworks that can safeguard creator rights in the age of generative AI. As the industry rapidly evolves, the balance between innovation and ownership remains unresolved.