Anthropic hits back at music publishers in AI copyright lawsuit, accusing them of 'willful conduct'

Anthropic, a major generative AI startup, laid out its case for why accusations of copyright infringement from a group of music publishers and content owners are invalid in a new court filing on Wednesday.

In fall 2023, music publishers including Concord, Universal, and ABKCO filed a lawsuit against Anthropic accusing it of copyright infringement over its chatbot Claude (now superseded by Claude 2).

The complaint, filed in federal court in Tennessee (one of America's "Music Cities" and home to many labels and musicians), alleged that Anthropic's business profits from "unlawfully" scraping song lyrics from the internet to train its AI models, which then reproduce the copyrighted lyrics for users in the form of chatbot responses.

Responding to a motion for preliminary injunction (a measure that, if granted by the court, would force Anthropic to stop making its Claude AI model available), Anthropic laid out familiar arguments that have emerged in numerous other copyright disputes involving AI training data.

Generative AI companies like OpenAI and Anthropic rely heavily on scraping massive amounts of publicly available data, including copyrighted works, to train their models, but they maintain this practice constitutes fair use under the law. The question of whether data scraping for AI training infringes copyright is widely expected to reach the Supreme Court.

Song lyrics only a 'tiny fraction' of training data

In its response, Anthropic argues that its "use of Plaintiffs' lyrics to train Claude is a transformative use" that adds "a further purpose or different character" to the original works.

To support this, the filing directly quotes Anthropic research director Jared Kaplan, stating that the purpose is to "create a dataset to teach a neural network how human language works."

Anthropic contends its conduct "has no 'substantially adverse impact' on a legitimate market for Plaintiffs' copyrighted works," noting that song lyrics make up "a minuscule fraction" of training data and that licensing at the scale required is unworkable.

Joining OpenAI, Anthropic claims that licensing the vast troves of text needed to properly train neural networks like Claude is technically and financially infeasible. Training requires billions of snippets across genres, an unattainable licensing scale for any party.

Perhaps the filing's most novel argument claims that the plaintiffs themselves, not Anthropic, engaged in the "volitional conduct" required for direct infringement liability regarding outputs.

"Volitional to drive" In Copyright law refers to has THE idea that A person accused of commit offense must be watch has to have control on THE infringing content outputs. In This case, Anthropic East basically saying that THE label plaintiffs cause It is AI model Claude has produce THE infringing content, And Thus, are In control of And responsible For THE offense they report, as opposite has Anthropic Or It is Claude product, which reacts has contributions of users in an autonomous way.

The filing points to evidence that the outputs were generated through the plaintiffs' own "attacks" on Claude designed to elicit lyrics.

Irreparable harm?

On top of challenging copyright liability, Anthropic maintains the plaintiffs cannot prove irreparable harm.

Citing a lack of evidence that song licensing revenues have decreased since Claude launched, or that qualitative harms are "certain and immediate," Anthropic points out that the publishers themselves believe monetary damages could make them whole, contradicting their own claims of "irreparable harm" (since, by definition, accepting monetary damages would indicate the harms have a price that could be quantified and paid).

Anthropic asserts that the "extraordinary relief" of an injunction against it and its AI models is unjustified given the plaintiffs' weak showing of irreparable harm...

