Anthropic defends itself against injunction from music publishers

Anthropic claims that using copyrighted song lyrics to train artificial intelligence is a permissible use and that it has not caused “irreparable harm” to publishers.

The dispute between AI company Anthropic and a group of music publishers led by Concord Music Group is heating up. Concord claims Anthropic is committing copyright infringement by training its Claude LLM (Large Language Model) using song lyrics owned by the publishers. Now Anthropic is hitting back, reiterating its claim that using copyrighted song lyrics for training constitutes fair use – and should not be subject to an injunction because it did not cause “irreparable harm” to publishers.

Previously, Concord and the other music publishers had sought an injunction against Anthropic, accusing its Claude LLM of copyright infringement. Anthropic argues that the publishers have not proven "irreparable harm" in court, a necessary condition for granting an injunction. In addition, the company claims an injunction would seriously impede the development of its AI models.

Anthropic argues that Concord's claims are either speculative or can be remedied through compensatory damages, determined by continuing the proceedings to establish whether any harm actually occurred, rather than through injunctive relief.

Throughout its filing, Anthropic reiterates that using copyrighted works to train AI constitutes fair use because it transforms the works for a different purpose. The company also emphasizes the public interest in enabling AI development and the harm an injunction could do to "innovation."

Separately, Anthropic has continued its effort to present itself as a more ethical and transparent AI company by publishing the system prompts that guide its Claude LLM.

Alex Albert, Anthropic’s head of developer relations, said in a post on X (formerly Twitter) that Anthropic plans to release such information regularly as it updates and refines its system prompts.

Meanwhile, a group of authors filed a class action lawsuit against Anthropic last week, accusing the company of misusing their work to train its AI model.
