If approved, the settlement would be the largest in the history of American copyright cases, according to a lawyer for the authors behind the lawsuit.
Anthropic, a major artificial intelligence company, has agreed to pay at least $1.5 billion to settle a copyright infringement lawsuit filed by a group of authors who alleged the company had illegally used pirated copies of their books to train large language models, according to court documents.
“If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment,” said Justin Nelson, a lawyer for the authors.
The lawsuit, filed in federal court in California last year, centered on roughly 500,000 published works. The proposed settlement amounts to a gross recovery of $3,000 per work, Nelson said in a memorandum to the judge in the case.
They essentially still have the information in the weights, so I guess they won’t fret too much over no longer having the original training data.
They will need actual training data when they want to develop the next version of their LLM.
I guess it depends on how important the old data is when building new models, which I fully admit I don’t know the answer to. As I understand it, though, new models are not trained fully from scratch, but are instead a continuation of the older model, trained with new techniques/new data.
To speculate, I guess not having the older data present in the new training stages might make the attributes of that data less pronounced in the new output model.
Maybe they could cheat the system by trying to distill that data out of the older models and put it back into the training data (rough sketch of the idea below), but I guess the risk of model collapse is not insignificant there.
Again, limited understanding here, take everything I speculate with a grain of salt
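To make the "distill it back out" idea concrete, here's a rough, made-up sketch (toy model name and prompts, definitely not Anthropic's actual pipeline): you sample text from the frozen old model and fold it into the corpus for the next run. The model-collapse worry is that looping this on model-generated text narrows the distribution of the data over time.

    # Sketch only: generate synthetic text from an older, frozen model and add it
    # to the corpus for the next training run. "gpt2" and the prompts are placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    teacher_name = "gpt2"  # stand-in for "the older model"
    tokenizer = AutoTokenizer.from_pretrained(teacher_name)
    teacher = AutoModelForCausalLM.from_pretrained(teacher_name)
    teacher.eval()

    prompts = ["Once upon a time", "The fall of the Roman Empire"]  # made-up seed prompts
    synthetic_corpus = []
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt")
        output = teacher.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
        synthetic_corpus.append(tokenizer.decode(output[0], skip_special_tokens=True))

    # synthetic_corpus would then be mixed into the next model's training data;
    # repeatedly training on model-generated text is where the collapse risk comes from.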
It’s true that a new model can be initialized from an older one, but it will never outperform the older one unless it is given actual training data (not necessarily the same training data used previously).
Kind of like how you can learn ancient history from your grandmother, but you will never know more ancient history than your grandmother unless you do some independent reading.
I think we’re in agreement with each other? The old model has the old training data, and then you train a new one on that model with new training data, right?
No, the old model does not have the training data. It only has “model weights”. You can conceptualize those as the abstract rules that the old model learned when it read the training data. By design, they are not supposed to memorize their training data.
To outperform the old model, the new model needs more than what the old model learned. It needs primary sources, i.e. the training data itself, which is going to be deleted.
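Here's a minimal, self-contained toy of what I mean (made-up model and file name, plain PyTorch): a saved model is just a bag of numeric tensors, and a "new" model warm-started from those tensors still has to be fed fresh data to learn anything more.

    # A checkpoint holds only weights (tensors), not the documents they were learned from.
    import torch
    import torch.nn as nn

    old_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
    torch.save(old_model.state_dict(), "old_model.pt")   # "the old model" on disk

    checkpoint = torch.load("old_model.pt")
    for name, tensor in checkpoint.items():
        print(name, tuple(tensor.shape))                  # parameter names and shapes, no text

    # Warm-start "the new model" from those weights, then keep training on fresh data
    # (the "primary sources" mentioned above; random tensors stand in for that data here).
    new_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
    new_model.load_state_dict(checkpoint)

    optimizer = torch.optim.SGD(new_model.parameters(), lr=0.01)
    new_inputs, new_targets = torch.randn(8, 16), torch.randn(8, 16)
    loss = nn.functional.mse_loss(new_model(new_inputs), new_targets)
    loss.backward()
    optimizer.step()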
I expressed myself poorly; this is what I meant: it has the “essence” of the training data, but of course not the verbatim training data.
I wonder how valuable the old training data is to the process in relative terms, compared to just the new training data. I can’t answer that, but it would be interesting to know.
A new model needs training data; it doesn’t matter whether the data is new or old. But generally, a more advanced model needs more training data, so AI devs generally need at least some new training data.
My guess is they don’t actually need the half a million closed books to train their models. It’s not the only thing they’re training on.
Now that they’re making their billions, they could actually afford to pay for the useful subset of content they need to train the models. I always felt that the kitchen-sink approach everyone took, hoovering up every book imaginable, was over the top.
I think it’ll be more interesting when they finally get around to making all the diffusion models strip out the IP. There really isn’t a good reason why Midjourney can draw Batman.