The Rise of Robo Notice
Joe Karaganis and Jennifer Urban
Communications of the ACM, September 2015
"Rise of the Robo Notice" is a preview of our longer publication, Notice and Takedown in Everyday Practice (2016).
Here's an excerpt of the book's introduction:
Most Internet professionals have some familiarity with the “notice and takedown” process created by the 1998 U.S. Digital Millennium Copyright Act (the DMCA). Notice and takedown was conceived to serve three purposes: it created a cheap and relatively fast process for resolving copyright claims against the users of online services (short of filing a lawsuit); it established steps online services could take to avoid liability as intermediaries in those disputes—the well-known DMCA “safe harbor”; and it provided some protection for free speech and fair use by users in the form of “counter notice” procedures.
The great virtue of the notice and takedown process for online services is its proceduralism. To take the most common example, if a service reliant on user-generated content follows the statutory procedures, acts on notices, and otherwise lacks specific knowledge of user infringement on its site (the complicated “red flag” knowledge standard), it can claim safe harbor protection in the event of a lawsuit. Services can make decisions about taking down material based on substantive review and their tolerance for risk. They may also adopt technologies or practices to supplement notice and takedown, though the law makes no such demands beyond a requirement for repeat infringer policies. The resulting balance has enabled a relatively broad scope for innovation in search and user-generated content services. As one entrepreneur put it in our recent study of these issues, notice and takedown was “written into the DNA” of the Internet sector.
This basic model held for about a decade. In the last five or six years, however, the practice of notice and takedown has changed dramatically, driven by the adoption of automated notice-sending systems by rights holder groups responding to sophisticated infringing sites. As automated systems became common, the number of takedown requests increased exponentially.
For some online services, the number of complaints went from dozens or hundreds per year to hundreds of thousands or millions. In 2009, Google’s search service received fewer than 100 takedown requests. In 2014, it received 345 million requests. Although Google is the extreme outlier, other services—especially those in the copyright ‘hot zones’ around search, storage, and social media—saw order-of-magnitude increases. Many others—through luck, obscurity, or low exposure to copyright conflicts—remained within the “DMCA Classic” world of low-volume notice and takedown.
This split in the application of the law undermined the rough industry consensus about what services needed to do to keep their safe harbor protection. As automated notices overwhelmed small legal teams, targeted services lost the ability to fully vet the complaints they received. Because companies exposed themselves to high statutory penalties if they ignored valid complaints, the safest path afforded by the DMCA was to remove all targeted material. Some companies did so. Some responded by developing automated triage procedures that prioritized high-risk notices for human review (most commonly, those sent by individuals); a sketch of this kind of triage follows below.
Others began to move beyond the statutory requirements in an effort to reach agreement with rights holder groups and, in some cases, to reassert some control over the copyright disputes on their services.
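The DMCA itself prescribes no such triage mechanism, and the study does not disclose any particular service's rules. The Python sketch below is therefore purely hypothetical, meant only to make the pattern concrete: every field name and heuristic in it is our assumption. It routes low-volume notices from individual senders (and previously contested material) to human review, and bulk-processes everything else.

    # Purely illustrative triage sketch. All names, fields, and heuristics
    # are hypothetical; the DMCA specifies no such mechanism, and this does
    # not describe any actual service's system.
    from dataclasses import dataclass, field

    @dataclass
    class Notice:
        sender: str                         # who sent the notice
        automated_sender: bool              # known high-volume/automated sender?
        target_urls: list = field(default_factory=list)
        previously_contested: bool = False  # target drew a counter notice before?

    def triage(notice: Notice) -> str:
        """Route a takedown notice to 'human_review' or 'auto_process'."""
        # Notices from individuals are rare and higher-risk (fair use,
        # mistakes, abuse), so they get human eyes first.
        if not notice.automated_sender:
            return "human_review"
        # Material that has already drawn a counter notice is contested.
        if notice.previously_contested:
            return "human_review"
        # Everything else is bulk-processed: the only way to keep pace
        # with millions of automated requests.
        return "auto_process"

    # Example: an individually sent notice is queued for review.
    n = Notice(sender="jane@example.com", automated_sender=False,
               target_urls=["http://example.org/page"])
    assert triage(n) == "human_review"

The design choice this illustrates is the one described above: at millions of automated requests, full substantive review is impossible, so services spend their scarce human attention on the notices most likely to involve fair use or error.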