
The Perils of Copyright Takedowns: A Story Ignored


On July 14, I will deliver the closing keynote at the fifteenth Hackers On Planet Earth event in Queens, NY. Following that, on July 20, I’ll be at Chicago’s Exile in Bookville.

We are living through a moment when a vast number of people are becoming intensely interested in fair use, a complex and often misunderstood corner of copyright law. It is not a subject you can master by skimming a Wikipedia entry!

For over two decades, I have discussed fair use with the general public, and I’ve met many people with misplaced confidence in their understanding: people who believe they can safely take a set number of words from a book or a set number of seconds from a song, and that anything beyond that is automatically unfair.

Some think that failing any one of the four fair use factors automatically makes a use unfair, or that a use is only fair if it satisfies all four (the Supreme Court’s Betamax ruling has some surprises in store for anyone who believes that!).

You might assume that quoting a song lyric in a book is copyright infringement, or that every musical sample must be cleared. And if you think you already know whether scraping the internet to train an AI is infringement, you are not grasping the intricate, fact-based nature of fair use.

However, it is possible to educate yourself! Fair use is actually a fascinating and nuanced subject, cherished by copyright scholars who engage in compelling debates about it. These discussions often arise from current controversies yet invariably reference past disputes, whether regarding piano rolls, 2 Live Crew, or antiracist reinterpretations of Gone With the Wind.

An intriguing fair use debate occurred in 2019 during a symposium titled “Proving IP” hosted by the NYU Engelberg Center on Innovation Law & Policy. This panel showcased musicologists arguing over the implications of the Blurred Lines case, which represented a significant shift in music copyright law when Marvin Gaye's estate successfully sued Robin Thicke and Pharrell Williams for capturing the “vibe” of Gaye’s “Got to Give It Up.”

Naturally, this discussion included clips from both songs as the experts, alongside some of the leading copyright scholars in the U.S., explored the case's legal reasoning and potential future impacts. It would have been impossible to address this case without those clips.

Here’s the problem: once the symposium was uploaded to YouTube, it was flagged and subsequently removed by Content ID, Google’s $100 million copyright enforcement system. The initial takedown was fully automated, which is how Content ID functions: rights holders upload their audio to claim it, and Content ID then removes any videos that feature that audio. Rights holders can also demand that matching videos be demonetized or that the ad revenue from those videos be redirected to them.

Content ID does have a safety valve: an uploader whose video has been wrongly flagged can contest the takedown. The case then goes back to the rights holder, who must manually choose to maintain or drop their claim. In this situation, the rights holder was Universal Music Group (UMG), the largest record label globally. UMG’s team reviewed the video and did not abandon the claim.
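To make the shape of that pipeline concrete, here is a minimal sketch in Python of how an automated claim-and-dispute system like this behaves. It is not Google’s actual implementation: the fingerprinting, the policy names, and the dispute routing are all simplified assumptions, chosen only to show where human judgment does (and does not) enter the loop.

```python
from dataclasses import dataclass
from enum import Enum


class Policy(Enum):
    BLOCK = "block"        # take the video down
    MONETIZE = "monetize"  # leave it up, redirect ad revenue to the claimant
    TRACK = "track"        # leave it up, just collect viewing statistics


@dataclass
class Reference:
    rights_holder: str
    fingerprint: set       # toy stand-in for a perceptual audio fingerprint
    policy: Policy


def find_claim(upload: set, references: list, threshold: float = 0.3):
    """Return the first reference whose fingerprint overlaps the upload enough."""
    for ref in references:
        overlap = len(upload & ref.fingerprint) / len(ref.fingerprint)
        if overlap >= threshold:
            return ref
    return None


def handle_upload(upload: set, references: list) -> str:
    """Apply the claimant's chosen policy automatically; no human review, no fair use analysis."""
    ref = find_claim(upload, references)
    if ref is None:
        return "published"
    return f"{ref.policy.value} (claimed by {ref.rights_holder})"


def dispute(ref: Reference, claimant_releases_claim: bool) -> str:
    """A dispute routes the decision back to the claimant, who is also the interested party."""
    return "restored" if claimant_releases_claim else "claim upheld"


if __name__ == "__main__":
    refs = [Reference("UMG", {"got-to-give-it-up-hook"}, Policy.BLOCK)]
    # A scholarly panel that quotes a few seconds of the claimed song:
    symposium_video = {"panel-discussion", "musicologist-commentary", "got-to-give-it-up-hook"}
    print(handle_upload(symposium_video, refs))              # -> block (claimed by UMG)
    print(dispute(refs[0], claimant_releases_claim=False))   # -> claim upheld
```

The structural point of the sketch is that the takedown itself is automatic, and the only appeal routes back to the party that made the claim.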

In 99.99% of cases, this is where the story ends, for any number of reasons. Most people don’t understand fair use well enough to second-guess the judgment of a vast, wealthy monopolist that has decided their video should be censored. And Content ID is a convoluted system in its own right, nearly as intricate as fair use itself, but one that operates entirely in private, run by another vast monopolist (Google).

Google’s copyright enforcement system is a quasi-legal framework with all the drawbacks of the law, plus its own peculiarities (for instance, it operates without lawyers—only corporate experts battling with laypeople). A single misstep can lead to the deletion of your video or the permanent loss of your account, along with every video you’ve ever uploaded. For those who rely on audiovisual content for their livelihood, losing a YouTube account is akin to a catastrophic event.

For the average YouTuber, Content ID operates like a Kafkaesque system that is generally avoided and seldom scrutinized. However, the Engelberg Center is not your typical YouTuber; they are backed by some of the nation’s foremost copyright experts specializing in the very issues that YouTube's Content ID is meant to adjudicate.

So they challenged the takedown, only to run into UMG’s unwavering stance. UMG is notorious for disregarding fair use in its takedown requests, and its position is so extreme that it has been sued under the DMCA’s provision against fraudulent takedowns.

Yet, the DMCA’s takedown framework is grounded in actual law, while Content ID represents a fabricated legal environment, constructed and overseen by a tech monopolist rather than a judiciary. Therefore, the outcome of the Blurred Lines discussion hinged on the Engelberg Center's capacity to navigate both the law and the complex intricacies of Content ID’s takedown flowchart.

After more than a year, Engelberg ultimately triumphed.

But then they didn’t.

If Content ID were a person, it would be an infant, and specifically one younger than about 18 months, because it lacks “object permanence.” Object permanence is the ability to understand that things continue to exist even when they are out of sight; until they develop it, young children struggle to reason about anything they cannot see, which is why peek-a-boo is so endlessly fascinating to them.

Content ID, however, lacks this understanding. Despite the fact that the Engelberg Blurred Lines panel was the most intricate fair use inquiry the system has ever been tasked with, it repeatedly failed to remember that it had previously decided the panel could remain available. Time and time again since that first determination, Content ID has taken down the panel’s video, compelling Engelberg to restart the entire process.

But this is just the beginning, as YouTube isn’t the sole platform where a copyright enforcement bot is making billions of unregulated, unaccountable decisions about what audiovisual materials you are permitted to access.

Spotify stands as yet another monopolist, notoriously antagonistic toward artists’ interests, largely due to the influence of UMG and other major record labels in shaping its business practices.

Spotify has invested hundreds of millions of dollars trying to penetrate the podcasting market, hoping to convert one of the last truly open digital publishing systems into a product under its control.

Fortunately, that endeavor has faltered—but millions have unwisely abandoned their independent podcatchers for Spotify’s less favorable app, compelling every podcaster to target Spotify for distribution to reach those captive users.

Guess who hosts a podcast? The Engelberg Center.

Predictably, Engelberg’s podcast features the audio from the Blurred Lines panel, which includes samples from both “Blurred Lines” and “Got to Give It Up.”

Consequently, UMG has consistently taken down the podcast.

Spotify offers its own equivalent to Content ID, and remarkably, it is even more complicated and difficult to navigate than Google’s pseudo-legal framework. As Engelberg explains in its recent post, UMG and Spotify have conspired to ensure that this now-iconic discussion of fair use will never be able to rely on fair use itself.

It’s crucial to note that this represents the best-case scenario for discussing fair use with a monopolist like UMG, Google, or Spotify. As Engelberg succinctly states:

> "The Engelberg Center had an extraordinarily high level of interest in pursuing this issue, and legal confidence in our position that would have cost an average podcaster tens of thousands of dollars to develop. That cannot be what is required to challenge the removal of a podcast episode."

Automated takedown systems serve as the tech industry’s response to the “notice-and-takedown” framework designed to mediate the relationship between copyright law and the internet, beginning with the U.S.’s 1998 Digital Millennium Copyright Act (DMCA). The DMCA implements (and surpasses) a pair of 1996 UN treaties, the WIPO Copyright Treaty and the Performances and Phonograms Treaty, and most nations possess some form of notice-and-takedown.

Corporate rights holders assert that notice-and-takedown is a gift to the tech sector, letting platforms escape liability for their users’ copyright infringement. They would prefer a “strict liability” model, in which any platform that permits users to post infringing content becomes liable for that infringement, with statutory damages of up to $150,000 per work.

However, there is no feasible way for a platform to determine in advance whether something a user posts infringes on someone else's copyright. There is no comprehensive registry of copyrighted materials, and fair use allows for numerous legitimate reproductions of someone’s work without their consent (or even against their wishes). Even if every aspiring copyright attorney devoted their entire effort to scrutinizing every tweet, video, audio clip, and image posted to a single platform, they would still only manage to evaluate a fraction of what gets uploaded.

The “compromise” sought by the entertainment industry is automated takedown—an approach like Content ID, where rights holders register their copyrights, and platforms block anything that aligns with the registry. This “filternet” proposal was codified in the EU in 2019 with Article 17 of the Digital Single Market Directive.

This directive sparked significant controversy, and as experts cautioned at the time, it is impossible to implement without infringing upon the GDPR, Europe’s privacy law, leading to its current state of limbo.

Critics during the EU debate pointed out numerous issues with filternets. For one, these copyright filters are incredibly costly: recall that Google has invested $100 million in Content ID alone, which only addresses a small fraction of what filternet proponents demand. Establishing a filternet would require such substantial investment that only the largest tech monopolists could afford it, essentially creating a legal requirement that sustains these monopolists while obstructing smaller, more innovative platforms from emerging.

Filternets also struggle to distinguish between similar files. This presents significant challenges for classical musicians, who often find their work blocked or demonetized by Sony Music, which claims copyright to performances of all significant classical compositions.

Content ID cannot differentiate between your rendition of “The Goldberg Variations” and Glenn Gould’s performance. For classical musicians, the best-case scenario is that they receive no payment for their work while Sony fraudulently claims copyright over their recordings. The worst-case scenario is having their videos blocked, channels deleted, and their names blacklisted from ever creating another account on one of the monopoly platforms.
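A toy illustration of why that is, assuming (as is common for compositions) that the filter matches on the melody rather than on a specific recording; the note sequence and names below are invented for this example, not taken from any real system or score:

```python
def melody_key(notes: list) -> str:
    """Reduce a performance to its note sequence, discarding performer, tempo, and timbre."""
    return "-".join(notes)


# Toy excerpt standing in for the opening of the Goldberg Variations aria (not the real score).
GOLDBERG_ARIA = ["G4", "G4", "A4", "B4", "G4", "F#4", "E4", "D4"]

glenn_gould_1981_recording = melody_key(GOLDBERG_ARIA)
your_living_room_recital = melody_key(GOLDBERG_ARIA)

# Both performances collapse to the same key, so a claim registered against one
# recording fires against every other performance of the same piece.
print(glenn_gould_1981_recording == your_living_room_recital)  # True
```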

Yet, when it comes to free expression, the influence of notice-and-takedown and filternets in the creative industries is merely a sideshow. By creating a system of takedowns that require no evidence and impose no real repercussions for fraudulent removals, these systems provide immense advantages to the world’s most despicable criminals. For instance, “reputation management” firms assist convicted felons, including rapists and murderers, in erasing genuine accounts of their crimes from the internet by falsely claiming copyright over them.

Consider how, during the COVID lockdowns, dishonest individuals marketed ineffective products by asserting they would protect users from the virus. While these fraudulent products remained available, authentic scientific articles cautioning against the scams were swiftly removed through false copyright claims.

Copyfraud—making illegitimate copyright claims—is an exceptionally low-risk crime, not limited to just COVID quacks and war criminals. Tech giants like Adobe readily exploit the takedown system, even if it means subjecting millions to spyware risks.

Unscrupulous law enforcement officers play copyrighted music loudly during confrontations with the public, hoping to trigger copyright filters on platforms like YouTube and Instagram and obstruct videos documenting their misconduct.

Even if all the issues with filternets and takedown systems were resolved, the framework would still falter when confronted with fair use and other copyright exceptions. These are “fact-intensive” queries that even the leading experts struggle to navigate (as anyone who views the Blurred Lines panel can attest). There is no viable way for software to accurately determine when a use qualifies as fair or not.

This is a dilemma that the entertainment industry itself is increasingly grappling with. The Blurred Lines ruling opened the floodgates to a new breed of copyright troll: people who sue record labels and their biggest stars for allegedly borrowing the “vibe” of songs that few have ever heard. Musicians like Ed Sheeran have faced multimillion-dollar lawsuits over these supposed violations. Those lawsuits prompted the record industry to reassess its stance on fair use and to argue for a broader interpretation that protects people who create works similar to existing ones. The labels realized that if “vibe rights” became enforceable, they would be ensnared in the same quagmire that ordinary people face when they try to publish online, where anything they produce could trigger takedowns, prolonged legal disputes, and substantial liability.

However, the music industry remains deeply conflicted about fair use. Take the peculiar case of Katy Perry’s song “Dark Horse,” which attracted a multimillion-dollar lawsuit from a little-known Christian rapper who alleged that a brief phrase in “Dark Horse” bore an impermissible resemblance to his song “Joyful Noise.”

Perry and her publisher, Warner Chappell, lost the lawsuit and were ordered to pay $2.8 million. Although they later won an appeal, this experience instilled significant apprehension within Warner Chappell regarding future similar lawsuits.

This is where the situation takes a bizarre and darkly humorous turn. A YouTuber named Adam Neely produced a wildly popular video discussing the lawsuit and defending Perry’s song. In the video, Neely included a short clip of “Joyful Noise,” the song at issue.

In court, Warner Chappell argued that “Joyful Noise” was not similar to Perry’s “Dark Horse.” Yet when they asked Google to remove Neely’s video, they claimed that his sample from “Joyful Noise” was actually taken from “Dark Horse.” Astonishingly, they maintained this assertion through multiple appeals within the Content ID system.

In essence, they argued that the song they had previously claimed was completely different from their own was now so indistinguishable from their song that they could not differentiate between them!

The ongoing debate over vibes, similarity, and fair use has only intensified since Neely’s video was removed. Recently, the RIAA sued several AI firms, alleging that the songs generated by these companies bore infringing similarities to tracks in their catalog.

Even prior to the Blurred Lines case, such fair use questions were complex, laden with nuanced details. Just ask George Harrison.

Yet, as demonstrated by the Engelberg panel of competing musicologists and esteemed copyright experts, these issues only become more convoluted over time. If you listen to that panel (if you can access it), you will struggle to arrive at any definitive conclusions regarding the questions posed by this latest lawsuit.

The notice-and-takedown framework is classified as an “intermediary liability” rule. Platforms are considered “intermediaries” as they connect end users to one another and to businesses. eBay, Etsy, and Amazon link buyers with sellers; Facebook, Google, and TikTok connect performers, advertisers, and publishers with audiences, and so forth.

In terms of copyright, notice-and-takedown provides platforms with a “safe harbor.” A platform is not obligated to remove material following an infringement allegation, but if it chooses not to, it can be held jointly liable for any ensuing judgment. In other words, YouTube is not required to remove the Engelberg Blurred Lines panel, but if UMG sues Engelberg and wins, Google could be on the hook for the judgment too.

During the ratification of the 1996 WIPO treaties and the 1998 U.S. DMCA, this safe harbor provision was portrayed as a compromise between the public’s right to publish online and the interests of rights holders whose work might be infringed. The intention was for material likely to infringe to be swiftly removed once the platform received notification, while platforms would disregard obviously fraudulent or spurious takedowns.

That has not materialized. Whether it’s Sony Music claiming ownership of your rendition of “Für Elise” or a war criminal claiming authorship of a news story detailing his crimes, platforms remove the content without question. Why? If they ignore a takedown notice and turn out to be wrong, they face enormous penalties (up to $150,000 per work). If they act on a bogus claim, there are no repercussions at all. Naturally, they delete anything they are asked to.
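The arithmetic behind that incentive is brutally simple. Here is a back-of-the-envelope version with a made-up probability, purely to illustrate the asymmetry described above:

```python
STATUTORY_DAMAGES = 150_000   # potential exposure if the platform ignores a valid notice
P_NOTICE_IS_VALID = 0.01      # assume only 1 in 100 notices would hold up in court (invented figure)

expected_cost_of_ignoring = P_NOTICE_IS_VALID * STATUTORY_DAMAGES  # $1,500 per notice
expected_cost_of_complying = 0                                     # no penalty for honoring a bogus claim

print(f"Ignore the notice:  expected cost ${expected_cost_of_ignoring:,.0f}")
print(f"Honor the notice:   expected cost ${expected_cost_of_complying:,.0f}")
# However dubious the claim, compliance is always the cheaper choice.
```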

This pattern is how platforms routinely manage liability, a lesson we should have internalized by now. After all, the DMCA is the second-most renowned intermediary liability system on the internet—the most infamous is Section 230 of the Communications Decency Act.

This is a 26-word law stating that platforms are not liable for civil damages arising from their users’ speech. It is a U.S. law, and civil damages for speech are relatively rare in the U.S.: the First Amendment makes libel judgments hard to win, and even when they are won, damages are usually limited to “actual damages,” typically a modest sum. Most of the worst online speech isn’t illegal at all: hate speech, misinformation, and disinformation are all protected by the First Amendment.

Nonetheless, there are categories of speech that U.S. law criminalizes: actual threats of violence, criminal harassment, and various forms of legal, medical, election, or financial fraud. These are exempted from Section 230, which only shields against civil suits, not criminal acts.

What Section 230 truly protects platforms from is nuisance lawsuits brought by unscrupulous parties who bet that platforms would rather delete legal speech they object to than fight it out in court. A cadre of copyfraudsters has shown this to be a remarkably safe gamble.

In other words, without that shield, if you made a #MeToo allegation, or organized a union through an online forum as a gig worker, or exposed your employer’s toxic waste violations, or were any other under-resourced person being bullied by a wealthy, powerful entity, that entity could silence you simply by threatening to sue the platform hosting your speech. The platform would fold immediately. Meanwhile, the wealthy and powerful have the legal resources and connections to stop you from doing the same to them; that is why Sony can have your Brahms recital taken down while you have no way to retaliate.

This phenomenon applies to every intermediary liability system, and it has been consistently demonstrated since the early days of the internet. Six years ago, Trump signed SESTA/FOSTA, a law that allowed platforms to be held civilly liable by victims of sex trafficking. At the time, advocates asserted that this would solely impact “sexual slavery” and would not affect consensual sex work.

However, from the outset and continuing to this day, SESTA/FOSTA has predominantly targeted consensual sex work, resulting in immediate, lasting, and profound harm to sex workers.

SESTA/FOSTA dismantled the “bad date” forums that sex workers used to share information about violent or unstable clients, shut down the online booking sites that enabled sex workers to vet their clients, and eliminated payment processors that allowed sex workers to avoid carrying dangerous amounts of cash.

Despite six years of SESTA/FOSTA, 15 years of filternets, and a quarter-century of notice-and-takedown, people continue to insist that eliminating safe harbors will punish Big Tech and make life better for everyday internet users.

Currently, it seems probable that Section 230 will be abolished by the end of 2025, even if there is nothing prepared to replace it.

This outcome is not the victory some believe it to be. By imposing responsibility on platforms to screen the content their users post, we are establishing a system where only the largest tech monopolies can endure, and only by removing or blocking anything that threatens or displeases the wealthy and powerful.

Filternets are not finely tuned takedown mechanisms; they are indiscriminate wrecking machines that destroy anything in the vicinity of unlawful speech, including (and especially) the best-informed, most enlightening discussions of how these systems fail and how they smother the grievances of the powerless, the marginalized, and the abused.

Support me this summer on the Clarion Write-A-Thon and help raise funds for the Clarion Science Fiction and Fantasy Writers’ Workshop!

If you’d prefer a formatted essay version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2024/06/27/nuke-first/#ask-questions-never

(Image: EFF copyright enforcement banner)