This was breaking news…
It is likely to cause a major stir amongst writers. If you are easily distressed, please keep your eyes closed while reading.

U.S. District Judge William Alsup of the Northern District of California ruled in June 2025, in Bartz v. Anthropic, that AI company Anthropic’s training of its Claude LLMs on authors’ works was “exceedingly transformative,” and therefore protected under the fair use doctrine as specified in Section 107 of the Copyright Act.
The case was brought by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who alleged that Anthropic used their copyrighted works without permission to train its AI systems.
Anthropic, which probably generates a billion dollars in annual revenue from its Claude AI service, downloaded over seven million (pirated?) books between 2021 and 2022 to build its training datasets. Notably, the lawsuit challenged only the inputs, or works used to train Claude, and did not allege that the outputs, or works produced by the LLM, reproduced the plaintiffs’ copyrighted works.
If you are interested in the legal speak, you can see it here.
Note that the issue of possible pirated books is still ongoing.
You may or may not be aware that there are huge legal struggles over the training of LLMs, or Large Language Models. This is not limited to books; it extends to magazines and news articles as well. Even giants like Disney and Universal Studios are suing Midjourney for copyright infringement over images. In fact, anything that any AI machine needs to learn from is subject to copyright challenges.
In the past, some AI companies have struck licensing deals with content creators, offering agreements and some payment for the use of material to train their LLMs. I do not have details here.
Alsup’s judgment has been hailed as a win for tech companies, potentially setting a precedent for fair use of material to train LLMs.
However, many others have stated that this is not the end of the debate; it will certainly be appealed.
Joanna Bryson, a professor of AI ethics at the Hertie School in Berlin, says the ruling is “absolutely not” a blanket win for tech companies. “First of all, it’s not the Supreme Court. Secondly, it’s only one jurisdiction.”
OK, fair enough, the story doesn’t stop here.
My Opinion…
…is probably not worth anything.
Humans read books, watch films, look at pictures, and basically experience the world. This is how we learn.
Machines must be spoon-fed this information and programmed to parse it into understandable constructs that they can use.
I see absolutely no difference between the two, hence fair use for both.
Maybe a machine will occasionally output a phrase, word for word from an author’s work. Oops, that is wrong.
What if a human quotes a phrase word for word from an author’s work? Oops, that is also wrong.
The simple answer is to filter out that response, either by human or machine.
I say it must be up to the human to filter any such direct quotes from any AI-generated output, just as humans are required to filter out any direct material from their own experience.
How often has this happened in the publishing world before the advent of AI?
How many songs have been written that sound the same as previous material?
Case in point: Led Zeppelin’s “Stairway to Heaven” was accused of stealing its opening riff from Spirit’s “Taurus.” Other cases involved accusations of borrowing from Willie Dixon’s “You Need Love” for “Whole Lotta Love,” and from Jake Holmes’ “Dazed and Confused.” While some cases were settled out of court, the “Stairway to Heaven” case went through multiple appeals before the Supreme Court declined to hear it.
I have met many writers who say their work was ripped off by some AI system. Not a single person has volunteered to show me any work that was compromised.
AI is not without its faults, and such legalities need to be ironed out in the courts.
In the meantime, stop bitching about it.