**Key points:**
1. A group of music publishers sued Anthropic for copyright infringement. Anthropic defended itself on the grounds of "transformative use" and accused the publishers of being responsible for the "volitional conduct" behind the allegedly infringing content generated by its AI model.
2. Anthropic claims that using lyrics to train AI models is a "transformative use," quoting research director Jared Kaplan's statement that the purpose is to "create data sets to teach neural networks how human language operates."
3. Anthropic denies the publishers' claims of "irreparable harm," pointing to a lack of evidence that song licensing revenue has declined since the launch of its AI model.
Webmaster Home (chinaz.com) January 19 News: Anthropic, a major generative artificial intelligence startup, has filed a court response arguing that the infringement claims brought against it by a group of music publishers and content owners are invalid. In the new filing, submitted Wednesday, Anthropic details why the publishers' claims should fail.
In the fall of 2023, music publishers including Concord, Universal, and ABKCO sued Anthropic, alleging copyright infringement through its chatbot Claude (since succeeded by Claude 2). The complaint, filed in federal court in Tennessee (one of America's "music cities" and home to many record labels and musicians), claims that Anthropic's business began by "illegally" scraping lyrics from the internet to train its AI model, and that the chatbot then reproduces these copyrighted lyrics for users.
In response to the preliminary injunction request — which, if granted by the court, would force Anthropic to stop offering its Claude AI models — Anthropic raised familiar arguments that have arisen in numerous other copyright disputes involving AI training data. Generative AI companies like OpenAI and Anthropic rely heavily on scraping vast troves of publicly available data, including copyrighted works, to train their models, but they insist that this use legally constitutes fair use. The data-scraping copyright question is expected to eventually reach the Supreme Court.
In its response, Anthropic argued that its "use of Plaintiff's lyrics to train Claude was a transformative use" that added "a further purpose or different character" to the original work. To support this, the document directly quotes Jared Kaplan, director of research at Anthropic, who states that the purpose is to "create a dataset that teaches neural networks how human language operates."
Anthropic claimed that its actions had "no 'material adverse effect' on the legitimate market for plaintiffs' copyrighted works," noting that the lyrics constituted only a "minuscule proportion" of the training data and that licensing on the scale required would be unworkable.
Like OpenAI, Anthropic claims that licensing the large amounts of text required to train a neural network such as Claude is technically and financially unfeasible; licensing tens of thousands of songs across genres may be beyond the reach of any party.

Perhaps the most novel argument in the filing is that the plaintiffs themselves, rather than Anthropic, engaged in the "volitional conduct" required for direct infringement liability.
In copyright law, the volitional-conduct requirement means that the party accused of direct infringement must be shown to have control over the production of the infringing output. Here, Anthropic is essentially arguing that the publisher plaintiffs themselves caused its AI model Claude to generate the infringing content, and therefore had control over, and responsibility for, the infringements they reported; Anthropic's Claude products merely respond autonomously to user input. The filing states that the outputs were generated through "attacks" on Claude, conducted by the plaintiffs and designed to elicit lyrics.
In addition to contesting copyright liability, Anthropic maintains that the plaintiffs cannot prove irreparable harm.
Anthropic noted that there is no evidence that song licensing revenue has decreased since Claude's launch, or that any alleged harms are "certain and immediate." Anthropic also points out that the publishers themselves believe monetary damages could compensate for their losses, which contradicts their claims of "irreparable harm" (by definition, accepting monetary damages indicates that the harms have a quantifiable price that can be paid).
Given the plaintiffs' weak showing of irreparable harm, Anthropic argued, an injunction against it and its AI models would be unwarranted.
It claims the music publishers' request is overly broad, seeking to restrict use not just of the 500 representative works at issue in the case, but of millions of other works the publishers claim to control.
Additionally, the AI startup noted that the lawsuit was filed in the wrong jurisdiction. Anthropic insists it has no relevant business ties to Tennessee. The company noted that its headquarters and principal operations are in California.
The copyright war in the generative AI industry continues to intensify. A growing number of artists have joined lawsuits against image generators such as Midjourney and OpenAI, bolstered by evidence that diffusion models can reconstruct copyrighted works. Recently, the New York Times filed a copyright infringement lawsuit against OpenAI and Microsoft, claiming they violated its copyright by using scraped Times content to train models for ChatGPT and other AI systems. The lawsuit seeks billions in damages and the destruction of any models or data trained on Times content.
Amid these debates, a nonprofit called Fairly Trained launched this week to advocate for a "licensed model" certification for the data used to train AI models. Major platforms, including Anthropic, Google, and OpenAI, have also joined in, as have content companies such as Shutterstock and Adobe, which have pledged to provide corporate users with legal defense for AI-generated content.
Creators remain undeterred, protesting the dismissal of authors' claims by companies like OpenAI. Judges will need to weigh technological advances against legal rights in these complex disputes.
Additionally, regulators are raising concerns about the scope of data mining. Litigation and congressional hearings may determine whether fair use shields such appropriation of proprietary content, frustrating some parties while benefiting others. Overall, some form of negotiated settlement seems inevitable if all stakeholders are to be satisfied as generative AI develops.
The future direction is unclear, but this week's filings suggest that generative AI companies are coalescing around a core set of fair-use and harm-based defenses, forcing courts to weigh technological advancement against rights holders' control. As VentureBeat previously reported, to date no copyright plaintiff has won a preliminary injunction in this type of AI dispute. Anthropic's arguments aim to ensure that this precedent persists, at least through this phase of its many ongoing legal battles.