YouTube is introducing a method for music labels to remove content that imitates an artist’s distinctive singing or rapping voice. Starting next year, creators must label AI-generated content as well.
YouTube is implementing two sets of content guidelines for AI-generated deepfakes: a highly stringent set to safeguard the platform’s music industry partners and a more lenient set for all other users.
This distinction is explicitly outlined in a recent company blog post, detailing YouTube’s initial considerations regarding moderating AI-generated content. The key requirements are straightforward: creators uploading videos with “realistic” AI-generated content must label it, with a particular emphasis on topics such as elections or ongoing conflicts.
Labels will appear in video descriptions and, for sensitive material, on the videos themselves. YouTube has not yet defined what counts as “realistic.” YouTube spokesperson Jack Malon says the company will provide more detailed guidance, with examples, when the disclosure requirement takes effect next year.
YouTube says the consequences for failing to label AI-generated content will vary but may include takedowns and demonetization. Detecting whether an unlabeled video was actually generated by AI poses a challenge. According to Malon, the platform is working on tools to help detect and confirm creators’ compliance with disclosure requirements for synthetic or altered content. However, those tools are not yet available, and existing detectors have a well-documented history of limited accuracy.
The situation becomes considerably more intricate from this point onward. YouTube will enable individuals to request the removal of videos that “simulate an identifiable individual, including their face or voice,” using the current privacy request form. If someone undergoes deepfake manipulation, there is a process to follow, potentially leading to the removal of the video. However, the company clarifies that it will “consider various factors” in assessing these requests. Factors include determining if the content is parody or satire and evaluating whether the individual is a public official or a “well-known individual.”
Manual Processes for Safeguarding Singing and Rapping Voices
If this sounds familiar, it’s because these are much like the analyses conducted by courts. Parody and satire play a crucial role in the fair use defense in copyright infringement cases, and determining whether someone is a public figure is a significant aspect of defamation law. Because there is no federal law specifically governing AI deepfakes, YouTube is writing its own rules proactively. Those rules, applied at the platform’s discretion, carry no specific mandate for transparency or consistency, and they will coexist with the usual creator disputes over fair use and copyright law.
It will be exceedingly complex. Currently, there is no established definition for “parody and satire” concerning deepfake videos. Malon reiterated that guidance and examples would be provided when the policy is implemented next year.
Adding to the complexity, there will be no parody or satire exemptions for AI-generated music content from YouTube’s partners. This applies to content that replicates an artist’s distinctive singing or rapping voice. Consequently, something like an AI Frank Sinatra singing The Killers’ “Mr. Brightside” could be removed if Universal Music Group objects.
Many channels are entirely devoted to producing AI covers of both living and deceased artists. Under YouTube’s updated regulations, most of these channels could face takedowns initiated by the labels. The sole exception mentioned in YouTube’s blog post is if the content is “the subject of news reporting, analysis, or critique of the synthetic vocals” – reminiscent of a standard fair use defense, lacking specific guidelines at this point. Historically, YouTube has been a challenging platform for music analysis and critique due to stringent copyright enforcement. It remains to be seen whether the labels will exercise any restraint and if YouTube will resist these measures.
The separate safeguard for singing and rapping voices, set to roll out next year, will not be integrated into YouTube’s automated Content ID system. Malon explains that partner labels will need to manually complete a form to request music removals. In the initial phase, the platform won’t penalize creators navigating these ambiguous boundaries. Malon assures that “content removed for either a privacy request or a synthetic vocals request will not result in penalties for the uploader.”