Generative artificial intelligence (GenAI) company Anthropic has claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”.
Under US law, “fair use” permits the limited use of copyrighted material without permission, for purposes such as criticism, news reporting, teaching, and research.
In October 2023, a host of music publishers including Concord, Universal Music Group and ABKCO initiated legal action against the Amazon- and Google-backed generative AI firm Anthropic, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.
…then maybe they shouldn’t exist. If you can’t pay the copyright holders what they’re owed for the license to use their materials for commercial use, then you can’t use ‘em that way without repercussions. Ask any YouTuber.
You might want to read this article by Kit Walsh, a senior staff attorney at the EFF, and this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries. YouTube’s one-sided strike-happy system isn’t the real world.
Headlines like these let people assume that it’s illegal, rather than educate them on their rights.
When Annas-Archive or Sci-Hub get treated the same as these giant corporations, I’ll start giving a shit about the “fair use” argument.
When people pirate to better the world by increasing access to information, the whole world gets together to try to kick them off the internet.
When giant companies with enough money to make Solomon blush pirate to make more oodles of money and not improve access to information, it’s “fAiR uSe.”
Literally everyone knew from the start that books3 was all pirated, sourced from ebooks with the DRM circumvented and removed. It was noted when it was created that it was basically the entirety of the private torrent tracker Bibliotik.
AI training should not be a privilege of the mega-corporations. We already have the ability to train open source models, and organizations like Mozilla and LAION are working to make AI accessible to everyone. We can’t allow the ultra-wealthy to monopolize a public technology by creating barriers that make it prohibitively expensive for regular people to keep up. Mega corporations already have a leg up with their own datasets and predatory terms of service that exploit our data. Don’t do their dirty work for them.
By denying regular people access to a competitive, corporate-independent tool for creativity, education, entertainment, and social mobility, we condemn them to a far worse future, with fewer rights than we started with.
How am I doing their dirty work for them? I will literally stop thinking they’re getting away with piracy for profit when we stop haranguing people who commit piracy for the benefit of mankind.
I’m not saying Meta should be stopped; I’m saying the prosecution of Sci-Hub and Annas-Archive needs to be stopped on the same grounds.
If it’s okay to pirate for the purpose of making money (what we put The Pirate Bay admins in jail for), then it’s okay to pirate to benefit mankind.
There is literally no way in hell someone can convince me what Meta and others are doing is not pirating to use the data contained within to make money. What’s good for the goose is good for the gander, as they say.
I reiterate, they knew it was pirated and had DRM circumvented when they downloaded it. There was zero question of the source of this data. They knew from the beginning they intended to profit from the use of this data. How is that different than what we accused The Pirate Bay admins of?
It really feels like: “Well, these corporations have the money to steal more prolifically than little people, and since their stealing is so big, we have to ignore it. They have lots of money and lawyers to fight us; The Pirate Bay didn’t, nor do Sci-Hub or Annas-Archive, so let’s just not try against those with money to fight back.”
There is literally no way in hell someone can convince me what Meta and others are doing is not pirating
Then your argument is non-falsifiable, and therefore, invalid.
Major corporations and pirates are finally on the same side for once. “Fair Use” finally has financial backing. Meta is certainly not a friend, but our interests currently align.
The worst possible outcome here is that copyright trolls manage to convince the courts that they are owed licensing fees. Next worse is a settlement that grants rightsholders a share of profits generated by AI, like they got from manufacturers of blank tapes and CDs.
Best case is that the MPAA, RIAA, and other copyright trolls get reminded that “Fair Use” is not an exception to copyright law, but the fundamental reason it exists: Fair Use is the promotion of science and the useful arts. Fair Use is the rule; Restriction is the exception.
Then your argument is non-falsifiable, and therefore, invalid.
Wow, this is some powerful internet word salad, just shotgunning scientific-sounding words at the wall to try to pretty up a basic internet debate. Falsifiability is about scientific hypotheses, not statements of belief. “Nothing you can say can convince me that murder isn’t wrong” may mean there’s no further use in debate, but it isn’t “non-falsifiable” in any meaningful way, nor does it somehow make the argument for the immorality of murder “invalid”.
By and large copyright infringement is illegal. That some things aren’t infringement doesn’t change that a general stance of “if I don’t have permission, I can’t copy it” is correct. The first argument in the EFF article is effectively the title: “it can’t be copyright, because otherwise massive AI models would be impossible to build”. That doesn’t make it fair use, they just want it to become so.
The purpose of copyright is to promote the sciences and useful arts. To increase the depth, width, and breadth of the public domain. “Fair Use” is not the exception. “Fair Use” is the fundamental purpose for which copyrights and patents exist. Copyright is not the rule. Copyright is the exception. The temporary exception. The limited exception. The exception we grant to individuals for their contribution to the public.
“it can’t be copyright, because otherwise massive AI models would be impossible to build”.
If that is, indeed, true, and if AI is a progression of science or the useful arts, then it is copyright that must yield, not AI.
Most things that I could talk about were already addressed by other users (especially @OttoVonNoob@lemmy.ca), so I’ll address a specific point - better models would skip this issue altogether.
The current models are extremely inefficient in their usage of training data. LLMs are a good example; Claude v2.1 was allegedly trained on hundreds of billions of words. In the meantime, it’s claimed that a 4yo child hears somewhere between 13 million and 45 million words over their still-short life. That’s roughly four orders of magnitude of difference, so even if someone claims that those bots are as smart as a 4yo*, they’re still chewing through the training data without using it efficiently.
Once this is solved, the corpus size will get way, way smaller. Then it would be rather feasible to train those models without offending the precious desire for greed of the American media mafia, in a way that still fulfils the entitlement of the GAFAM mafia.
*I seriously doubt that, but I can’t be arsed to argue this here - it’s a drop in a bucket.
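The “four orders of magnitude” claim above is easy to sanity-check; here’s a minimal sketch assuming a round 500 billion words for the LLM corpus and 30 million for the child (both figures are rough placeholders, not sourced numbers):

```python
import math

llm_words = 500e9    # assumed round figure for an LLM training corpus
child_words = 30e6   # assumed midpoint of the 13-45 million range

ratio = llm_words / child_words
orders = math.log10(ratio)
print(f"ratio: {ratio:.0f}x, about {orders:.1f} orders of magnitude")
```

Even with generous assumptions on both ends, the gap stays in the ballpark of four orders of magnitude.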
The thing is, I’m not sure at all that it’s even physically possible for an LLM to be trained like a four-year-old; they learn in fundamentally different ways. Even very young children quickly learn by associating words with concepts and objects, not by forming a statistical model of how often one meaningless string of characters comes after every other meaningless string of characters.
Similarly, when it comes to image classifiers, a child can often associate a word with a concept or object after a single example, without needing to be shown hundreds of thousands of examples before they can create a wide variety of pixel-value mappings based on statistical association.
Moreover, a very large amount of the “progress” we’ve seen in the last few years has only come by simplifying the transformers and using ever larger datasets. For instance, GPT-4 is a big improvement on GPT-3, but about the only major difference between the two models is that they threw nearly the entire text of the internet at GPT-4, compared to GPT-3’s smaller dataset.
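The “statistical model of which string follows which” described above can be sketched as a toy bigram counter. This is a drastic simplification of what transformer LLMs actually do (they learn continuous representations, not raw counts), but it illustrates the frequency-based learning being contrasted with a child’s one-shot concept learning:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each token follows each other token."""
    tokens = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, token):
    """Predict the statistically most frequent follower, or None if unseen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(most_likely_next(model, "the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

The model has no notion of what a “cat” is; it only knows which character strings tend to co-occur, which is the commenter’s point.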
It doesn’t matter what business we’re talking about. If you can’t afford to pay the costs associated with running it, it’s not a viable business. It’s pretty fucking simple math.
And no, we’re not talking about “too big to fail” businesses (those SHOULD be allowed to fail, IMHO); we’re talking about AI, that thing they keep trying to shove down our throats and that we keep saying we don’t want or need.
Until one of these AIs just starts selling other people’s work as its own, and no I don’t mean derivative work I mean the copyrighted material, nobody is breaking the rules here.
Except of course that’s not how copyright law works in general.
Of course the questions are 1) is training a model fair use and 2) are the resulting outputs derivative works. That’s for the courts to decide.
But in general, just because I publish content on my website, does not give anyone else license or permission to republish that content or create derivative works, whether for free or for profit, unless I explicitly license that content accordingly.
That’s why things like Creative Commons exists.
But surely you already knew that.
Right, but I think it’s going to be a tough legal argument that using a text to adjust the weighted links between word associations in a database is copying or distributing any part of that work. Assuming courts understand the math/algorithms.
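The “adjusting weighted links” framing above can be illustrated with a toy update rule. The vocabulary, learning rate, and update scheme here are all made up for the example; the point is only that training mutates numeric weights rather than retaining the text itself:

```python
import random

# Toy word-association weights, randomly initialised.
vocab = ["copyright", "law", "fair", "use"]
random.seed(0)
weights = {(a, b): random.uniform(-1, 1) for a in vocab for b in vocab}

def train_step(text, lr=0.1):
    """Strengthen the weight between each adjacent in-vocabulary word pair.

    The text is read once and discarded; only the numeric nudges remain.
    """
    tokens = [t for t in text.lower().split() if t in vocab]
    for pair in zip(tokens, tokens[1:]):
        weights[pair] += lr

before = weights[("fair", "use")]
train_step("fair use fair use")
after = weights[("fair", "use")]
print(round(after - before, 6))  # the link strengthened; the sentence itself is gone
```

Whether that numeric residue counts as “copying” under the law is exactly the open question the comment raises.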