YouTube to Update Policies to Crack Down on “Inauthentic” Content
YouTube is preparing to update its policies to curb creators’ ability to monetize “inauthentic” content, particularly the mass-produced and repetitive videos that have become more prevalent with the rise of AI tools.
On July 15, the platform will roll out updates to its YouTube Partner Program (YPP) monetization policies, offering more specific guidance on which types of content can generate revenue for creators and which cannot.
While the exact details of the policy language are yet to be revealed, a page on YouTube’s Help documentation indicates that creators have always been required to upload “original” and “authentic” content. The impending update aims to provide creators with a clearer understanding of what constitutes “inauthentic” content in today’s landscape.
Despite concerns from some YouTube creators that the changes may restrict their ability to monetize certain video formats like reaction videos or clips, YouTube’s Head of Editorial & Creator Liaison, Rene Ritchie, has clarified that this is not the case.
In a recent video update shared on the platform, Ritchie reassured creators that the change is merely a “minor adjustment” to YouTube’s existing YPP policies, intended to better identify mass-produced or repetitive content. He also emphasized that such content has long been ineligible for monetization because viewers perceive it as spam.
However, Ritchie did not address the growing ease of creating such content, particularly with the proliferation of AI technology. YouTube has seen a surge in “AI slop,” a term used to describe low-quality content produced using generative AI tools. For example, it is not uncommon to encounter AI-generated videos featuring an AI voice narrating over images or videos, courtesy of text-to-video AI applications. Channels dedicated to AI-generated music have amassed millions of subscribers, while fake news videos created using AI, such as the Diddy trial hoax, have garnered millions of views.
One notable instance involved a viral true crime murder series on YouTube, which was later discovered to be entirely AI-generated, as reported by 404 Media earlier this year. Additionally, YouTube CEO Neal Mohan’s likeness was exploited in an AI-generated phishing scam on the platform, despite the availability of tools for reporting deepfake content.
While YouTube may downplay the impending changes as a “minor update,” the proliferation of such content, and the fact that its creators profit from it, risks tarnishing YouTube’s reputation and diminishing its value. That gives the platform a clear incentive to adopt policies that allow it to ban AI slop creators from the YPP at scale.
