Anthropic Wins Landmark Case as Judge Rules AI Book Training Is Fair Use
There's one exception to the ruling

A U.S. federal judge has ruled in favor of Anthropic, deciding that using copyrighted books to train AI models counts as fair use. The decision could reshape how courts handle copyright in the age of generative models.
Judge William Alsup said Anthropic’s use was “transformative,” meaning the models aren’t copying authors but learning from them to generate new content. In his words, the models “turn a hard corner,” creating something different rather than replacing the original.
This is a major boost for companies like Anthropic, OpenAI, and Meta, all of which have relied heavily on published material for training. But the ruling wasn’t a clean sweep. The judge also said it likely violates copyright law to store books in a central library before training even begins. That part of the decision introduces some uncertainty.
Another issue hanging over the case is where these books come from. Authors have long argued that many are sourced from piracy sites. Judge Alsup didn’t take a firm stance on that point, but he suggested companies should at least obtain a legal copy of a book before training on it.
This ruling is one of the first of its kind in the U.S. and could set a lasting precedent. Outside the U.S., though, the outcome may carry less weight. Countries like the UK follow stricter “fair dealing” rules instead of broad fair use.