OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

  • Blapoo@lemmy.ml · 1 year ago

    We have to distinguish between LLMs

    • Trained on copyrighted material and
    • Outputting copyrighted material

    They are not one and the same

    • Even_Adder@lemmy.dbzer0.com · 1 year ago

      Yeah, this headline is trying to make it seem like training on copyrighted material is or should be wrong.

      • scv@discuss.online · 1 year ago

        Legally, the output of the training could be considered a derivative work. We treat brains differently here, that’s all.

        I think the current intellectual property system makes no sense and AI is revealing that fact.

      • TropicalDingdong@lemmy.world · 1 year ago

        I think this brings up broader questions about the currently quite extreme interpretation of copyright. Personally, I don’t think it’s wrong to sample from or create derivative works from something that is accessible. If it’s not behind lock and key, it’s free to use. If you have a problem with that, then put it behind lock and key. No one is forcing you to share your art with the world.

    • TwilightVulpine@lemmy.world · 1 year ago

      Should we distinguish them, though? Why shouldn’t artists have a say (and why didn’t they) in whether their art is used to train LLMs? Just like publicly displayed art doesn’t provide a permission to copy it and use it in other unspecified purposes, it would be reasonable that the same would apply to AI training.

      • Terrasque@infosec.pub · 1 year ago

        Just like publicly displayed art doesn’t provide a permission to copy it and use it in other unspecified purposes

        But it kinda does. If I see a van Gogh painting, I can be inspired to make a painting in the same style.

        When “ai” “learns” from an image, it doesn’t copy the image or even parts of the image directly. It learns the patterns involved instead, over many pictures. Then it uses those patterns to make new images.

      • Blapoo@lemmy.ml · 1 year ago

        Ah, but that’s the thing. Training isn’t copying. It’s pattern recognition. If you train a model “The dog says woof” and then ask a model “What does the dog say”, it’s not guaranteed to say “woof”.

        Similarly, just because a model was trained on Harry Potter, all that means is it has a good corpus of how the sentences in that book go.

        Thus the distinction. Can I train on a comment section discussing the book?
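        To make the “pattern recognition” point concrete, here’s a toy sketch (a hypothetical bigram counter, nothing like a real LLM’s scale or architecture) that stores only next-word frequencies from its training text, never the text itself:

```python
from collections import Counter, defaultdict

# Toy "language model": learns next-word frequencies (patterns),
# not the training sentences themselves.
def train(sentences):
    model = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict(model, word):
    """Most frequent follower of `word`, or None if never seen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train(["The dog says woof", "The dog says hello", "The dog says woof"])
print(predict(model, "says"))  # "woof" -- the most frequent pattern, not a quote
```

        Ask it what the dog says and “woof” comes out only because it’s the most frequent pattern; with a different training mix it could just as well answer “hello”.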

    • Tetsuo@jlai.lu · 1 year ago

      Output from an AI was recently ruled not copyrightable.

      I think it stemmed from the recent actors’ strikes.

      It was stated that only work originating from a human can be copyrighted.

      • Anders429@lemmy.world · 1 year ago

        Output from an AI was recently ruled not copyrightable.

        Where can I read more about this? I’ve seen it mentioned a few times, but never with any links.

        • Even_Adder@lemmy.dbzer0.com · 1 year ago

          They clearly only read the headline. If they’re talking about the ruling that came out this week, that whole thing was about trying to give an AI authorship of a work generated solely by a machine, and having the copyright go to the owner of the machine through the work-for-hire doctrine. So an AI itself can’t be an author or hold a copyright, but humans using one can still be copyright holders of any qualifying works.

  • Skanky@lemmy.world · 1 year ago

    Vanilla Ice had it right all along. Nobody gives a shit about copyright until big money is involved.

          • kmkz_ninja@lemmy.world · 1 year ago

            His point is equally valid. Can an artist be compelled to show the methods of their art? Is it right to force an artist to give up their methods because another artist thinks they are using AI to derive copyrighted work? Haven’t we already seen that LLMs are really poor at evaluating whether or not something was created by an LLM? Wouldn’t strong laws on such an opaque and difficult-to-prove issue be more of a burden on smaller artists than on large studios with lawyers in tow?

      • Asuka@sh.itjust.works · 1 year ago

        If I read Harry Potter and wrote a novel of my own, no doubt ideas from it could consciously or subconsciously influence it and be incorporated into it. How is that any different from what an LLM does?

    • TropicalDingdong@lemmy.world · 1 year ago

      Exactly. If I write some Looney Tunes fan fiction, Warner doesn’t own it. This ridiculous view of copyright (which isn’t being challenged in the public discourse) needs to be confronted.

          • Sethayy@sh.itjust.works · 1 year ago

            Can’t say, but they’re pretty open about how they trained the model, so it’s almost an admission of guilt (though they weren’t hosting the pirated content themselves; it’s still out there and would be trained on). Because unless they trained it on a paid Netflix account, there’s no way they got it legally.

            Idk where this lands legally, but I’d assume not in their favour

    • CoderKat@lemm.ee · edited · 1 year ago

      It’s honestly a good question. It’s perfectly legal for you to memorize a copyrighted work. In some contexts you can recite it, too (particularly under the perilous doctrine of fair use). And even if you don’t recite a copyrighted work directly, you are most certainly allowed to learn to write from reading copyrighted books, then try to come up with your own writing based on what you’ve read. You’ll probably try your best to avoid copying anyone, but you might still make mistakes, simply by forgetting that some idea isn’t your own.

      But can AI? If we want to view AI as basically an artificial brain, then shouldn’t it be able to do what humans can do? Though at the same time, it’s not actually a brain nor is it a human. Humans are pretty limited in what they can remember, whereas an AI could be virtually boundless.

      If we’re looking at intent, the AI companies certainly aren’t trying to recreate copyrighted works. They’ve actively tried to stop it as we can see. And LLMs don’t directly store the copyrighted works, either. They’re basically just storing super hard to understand sets of weights, which are a challenge even for experienced researchers to explain. They’re not denying that they read copyrighted works (like all of us do), but arguably they aren’t trying to write copyrighted works.

    • SubArcticTundra@lemmy.ml · 1 year ago

      No, because you paid for a single viewing of that content with your cinema ticket. And frankly, I think the price of a cinema ticket (= a single viewing, which is what this was) is what OpenAI should be made to pay.

  • rosenjcb@lemmy.world · edited · 1 year ago

    The powers that be have done a great job convincing the layperson that copyright is about protecting artists rather than publishers. That’s historically inaccurate: copyright law was pushed by publishers who did not want authors keeping secondhand manuscripts of works they had sold to publishing companies.

    Additional reading: https://en.m.wikipedia.org/wiki/Statute_of_Anne

  • Sentau@lemmy.one · edited · 1 year ago

    I think a lot of people are not getting it. AI/LLMs can train on whatever they want, but when these LLMs are then used for commercial reasons to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour. It’s similar to how using copyrighted clips in a monetized video can get you a strike against your channel, but if the video is not monetized, the chances of YouTube taking action against you are lower.

    Edit - If this was an open source model available for use by the general public at no cost, I would be far less bothered by claims of copyright infringement by the model

    • Tyler_Zoro@ttrpg.network · 1 year ago

      AI/LLMs can train on whatever they want, but when these LLMs are then used for commercial reasons to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour.

      And does this apply equally to all artists who have seen any of my work? Can I start charging all artists born after 1990 for training their neural networks on my work?

      Learning is not and has never been considered a financial transaction.

      • maynarkh@feddit.nl · edited · 1 year ago

        Actually, it has. The whole concept of copyright is relatively new, and corporations absolutely tried to prevent people who learned proprietary copyrighted information from using it elsewhere.

        It’s just that labor movements got such non-compete agreements thrown out of our society, or at least severely restricted, on humanitarian grounds. The argument is that a human being has the right to seek happiness by learning and by using the proprietary information they learned to better their station. Getting society to accept this took a lot of violent convincing, by the way.

        So yes, knowledge and information learned is absolutely within the scope of copyright as it stands; it’s only that the fundamental rights humans have override copyright. LLMs (and companies, for that matter) do not have such fundamental rights.

        Copyright, by the way, is stupid in its current implementation, but OpenAI and ChatGPT do not get to escape it, IMO, just because it’s “learning”. We humans are only exempt from copyright because of our special legal status.

        • Even_Adder@lemmy.dbzer0.com · 1 year ago

          You kind of do. Fair use protects reverse engineering, indexing for search engines, and other forms of analysis that create new knowledge about works or bodies of works. These models are meant to be used to create new works, which is where the “generative” part of generative models comes in, and the fact that the models consist only of original analysis of the training data in comparison with one another means that, as your tool, they are protected.

          • maynarkh@feddit.nl · 1 year ago

            https://en.wikipedia.org/wiki/Fair_use

            Fair use only works if what you create reflects on the original rather than superseding it. For example, if ChatGPT gobbled up a work on the reproduction of fireflies, and you ask it a question about the topic and it just answers, that’s not fair use, since you’ve made the original material redundant. If it did what a search engine does and just told you “here’s where you can find it, you might have to pay for it”, that’s fair use. This is of course US law, so it may be different elsewhere, and US law is weird, so the courts may say anything.

            That’s the gist of it: fair use is fine as long as you are only creating new information and use the copyrighted old work only as far as is absolutely necessary for your new information to make sense, and even then, you can’t use so much of the copyrighted work that it takes away from its value.

            Otherwise, if I pirated a movie and put subtitles on it, I could argue it’s fair use since it’s new information and transformative. If I released the subtitles separately, that would be a strong argument for fair use. If I included a 10-second clip to show my customers what the thing is like in action, that might be arguable. If it’s the pivotal 10 seconds that spoils the whole movie, that’s not fair use, since I took away from the value of the original.

            ChatGPT ate up all of these authors’ works, and for some, it may take away from the value they have created. It’s telling that OpenAI is trying to be shifty about it as well. If they had a strong argument, they’d want to settle this as soon as possible, as it’s a big storm cloud over their company’s IP value. And yeah, it sucks that people created something that may turn out not to be legal because some people have a right to profit from certain pieces of capital assets, but that’s been the story of the world for the past 50 years.

            • Even_Adder@lemmy.dbzer0.com · 1 year ago

              First of all, fair use is not as simple or clear-cut a concept as you make it out to be, nor can it be applied uniformly to all cases. It’s flexible and context-dependent on careful analysis of four factors: the purpose and character of the use, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect of the use upon the potential market. No one factor is more important than the others, and it is possible to have a fair use defense even if you do not meet all the criteria of fair use.

              Generative models create new and original works based on their weights, such as poems, stories, code, essays, songs, images, video, celebrity parodies, and more. These works may have their own artistic merit and value, and may be considered transformative uses that add new expression or meaning to the original works. Providing your own explanation of the reproduction of fireflies isn’t making the original redundant, nor is it reproducing the original, so it’s likely fair use. Plenty of competing works explaining the same thing exist, and they’re not invalid because someone got to it first, or because they’re based on the same sources.

              Your example about subtitling a movie doesn’t meet the criteria for fair use because subtitling a movie isn’t a transformative use. It doesn’t add any expression or meaning; it merely reproduces the original work in a different language, and it isn’t commentary, criticism, or parody. Subtitling a movie also involves using the entire work, which again weighs against fair use: the more of the original you use, the less likely it’s fair use. This could also have a negative effect on the potential market for the original, since it could reduce demand for the original or its authorized translations. Now, subtitling a short clip from a movie to illustrate a point in an educational video or a review would likely fly.

              Finally, uses that can result in lost sales in already established markets tend to be found not to be fair use by the courts. This doesn’t mean that all uses that affect the market are unfair; that would mean you couldn’t create a parody movie or use snippets of a work for a review. These can be considered fair use because they comment on or criticize the original work, unlike uploading a full movie, song, or translated script. Though I could be getting the wrong read here, since you didn’t explain how you came to any of your conclusions.

              I think you’re being too narrow and rigid with your interpretation of fair use, and I don’t think you understand the doctrine that well. I recommend reading this article by Kit Walsh, who’s a senior staff attorney at the EFF, a digital rights group, who recently won a historic case: border guards now need a warrant to search your phone. I’d like to hear your thoughts.

              • maynarkh@feddit.nl · edited · 1 year ago

                I am not a lawyer by the way, I don’t even live in the US, so what I write is just my opinion.

                But fair use seems a ridiculous defense when we talk about the GitHub Copilot case, which is the first tangible lawsuit about this that I know of. The plaintiffs lay out the case of a book for JavaScript developers as their example. The objective of the book is to give you exercises in JavaScript development; I would get the book if I wanted to do JavaScript exercises. The book is copyrighted under a share-alike, attribution-required licence. The defendants, GitHub and OpenAI, don’t honour the licence with Copilot and Codex. They claim fair use.

                So with the four factors:

                • the purpose and character of your use: Well, they present their JavaScript exercises as original work while it’s obvious they are not; they reproduce the task in question letter for letter. It is even missing critical context that makes it hard to understand without the book, so their work does not even stand on its own. Also, they do this for monetary compensation, while not respecting the original licence, which, for someone giving commentary or criticism covered by fair use, would be as trivial to satisfy as providing a citation of the book. They are also not producing information beyond what’s available in the book. Quite funnily, the plaintiffs mention that the “derivative” work is not very valuable either, as the model answered a question about how to determine whether a number is even with an example from a “what’s wrong with this, can you fix it?” section.

                • the nature of the copyrighted work: It’s freely available; the licence only requires that if you republish it, you provide proper attribution. It is not impossible to build fair uses on it while honouring the licence. There is no monetary or other barrier.

                • the amount and substantiality of the portion taken: All of it, and it is reproduced verbatim.

                • the effect of the use upon the potential market: GitHub Copilot is in the same market as the original work and is competing with it, namely in showing people how to use JavaScript.

                And again, I feel this is one layer. Copyright enforcement has never been predictable, and US courts are not predictable either. I think anything can come of this now that it’s big tech on the defendant side, with the resources to fight, unlike random Joe Schmoes caught with bootleg DVDs. Maybe they abolish copyright? Maybe they get an exception? Since US courts have such wide jurisdiction and can effectively make law, it is still a toss-up. That said, the GitHub Copilot class action is the case to watch, and so far the judge has denied motions to dismiss, so it may go either way.

                Also, by the way, the EU has no fair use protections; it only allows very specific exceptions, for public criticism and the like, none of which fits AI. Going by the example of Copilot, this would mean that EU users can’t use Copilot, and also that anything produced with the assistance of Copilot (or ChatGPT, for that matter) is not marketable in the EU.

                • Even_Adder@lemmy.dbzer0.com · edited · 1 year ago

                  I am not a lawyer either, or a programmer for that matter, but the Copilot case looks pretty fucked. We can’t really get a look at the plaintiffs’ examples, since they have to be kept anonymous. Generative models’ weights don’t copy and paste from their training data unless there’s been some kind of overfitting, and some cases of similar or identical code snippets might be inevitable given the nature of programming languages and common tasks. If the model was trained correctly, it should only ever see infinitesimally tiny parts of its training data. We also can’t tell how much of the plaintiffs’ code is being used, for the same reasons. The same is true of the plaintiffs’ claims about the “Suggestions matching public code”.

                  This case is still in discovery and mired in secrecy; we might never find out what’s going on, even once the proceedings have concluded.
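                  For a miniature picture of what overfitting-driven regurgitation looks like (a hypothetical word-bigram toy, not Copilot’s architecture): a chain trained on a single text can do nothing but replay it verbatim, while the same procedure over a varied corpus produces mixtures instead of copies:

```python
from collections import Counter, defaultdict

# Toy illustration of overfitting: with only one training text,
# greedy generation can do nothing but reproduce it verbatim.
def train(texts):
    chain = defaultdict(Counter)
    for text in texts:
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a][b] += 1
    return chain

def generate(chain, start, steps=10):
    out = [start]
    for _ in range(steps):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # greedy: most frequent follower
    return " ".join(out)

overfit = train(["a boy lived under the stairs"])
print(generate(overfit, "a"))  # "a boy lived under the stairs" -- pure memorization
```

                  Real models are trained to avoid exactly this degenerate case, which is why verbatim matches point to overfitting rather than to copy-paste storage.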

      • zbyte64@lemmy.blahaj.zone · 1 year ago

        Ehh, “learning” is doing a lot of lifting here. These models “learn” in a way that is foreign to most artists. And that’s ignoring the fact that humans are not capital. When we learn, we aren’t building a form of capital; when models learn, they are only building a form of capital.

        • Tyler_Zoro@ttrpg.network · 1 year ago

          Artists, construction workers, administrative clerks, police and video game developers all develop their neural networks in the same way, a method simulated by ANNs.

          This is not “foreign to most artists”; it’s just that most artists have no idea what the mechanism of learning is.

          The method by which you provide input to the network for training isn’t the same thing as learning.
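          For what it’s worth, the mechanism being discussed can be sketched in a few lines (a deliberately tiny toy, not a claim about brains or production systems): a single artificial “neuron” repeatedly nudges two numbers, a weight and a bias, to shrink its prediction error:

```python
# One artificial "neuron" learning y = 2x + 1 by gradient descent.
# After training, nothing of the examples remains -- only two numbers.
w, b = 0.0, 0.0
examples = [(x, 2 * x + 1) for x in range(-5, 6)]
for _ in range(2000):
    for x, target in examples:
        error = (w * x + b) - target
        w -= 0.01 * error * x  # nudge weight against the error gradient
        b -= 0.01 * error      # nudge bias likewise
print(round(w, 2), round(b, 2))  # 2.0 1.0
```

          After training, the examples themselves are gone; all that remains is w ≈ 2 and b ≈ 1, the “pattern” of the data.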

          • Sentau@lemmy.one · 1 year ago

            Artists, construction workers, administrative clerks, police and video game developers all develop their neural networks in the same way, a method simulated by ANNs.

            Do we know enough about how our brain functions and how neural networks functions to make this statement?

            • Yendor@reddthat.com · 1 year ago

              Do we know enough about how our brain functions and how neural networks functions to make this statement?

              Yes, we do. Take a university level course on ML if you want the long answer.

              • Sentau@lemmy.one · 1 year ago

                My friends who took computer science told me that we don’t fully understand how machine learning algorithms work, though this conversation was a few years ago in college. I’ll have to ask them again.

          • zbyte64@lemmy.blahaj.zone · 1 year ago

            ANNs are not the same as synapses; analogous, yes, but mathematically different even when simulated.

            • Prager_U@lemmy.world · 1 year ago

              This is orthogonal to the topic at hand. How does the chemistry of biological synapses alone result in a different type of learned model that therefore requires different types of legal treatment?

              The overarching (and relevant) similarity between biological and artificial nets is the concept of connectionist distributed representations, and the projection of data onto lower dimensional manifolds. Whether the network achieves its final connectome through backpropagation or a more biologically plausible method is beside the point.

        • Yendor@reddthat.com · 1 year ago

          When we learn, we aren’t building a form of capital; when models learn, they are only building a form of capital.

          What do you think education is? I went to university to acquire knowledge and train my skills so that I could later be paid for those skills. That was literally building my own human capital.

    • FMT99@lemmy.world · 1 year ago

      But wouldn’t this training and the subsequent output be so transformative that being based on the copyrighted work makes no difference? If I read a Harry Potter book and then write a story about a boy wizard who becomes a great hero, anyone trying to copyright strike that would be laughed at.

      • Sentau@lemmy.one · edited · 1 year ago

        Your probability of getting a copyright strike depends on two major factors:

        • How similar your story is to Harry Potter.
        • Whether you are making money off that story.

        • uis@lemmy.world · 1 year ago

          It doesn’t matter how similar it is. Copyright doesn’t protect meaning; copyright protects form. If you read HP and then draw a picture of it, said picture becomes its own separate work, not even a derivative.

    • 1ird@notyour.rodeo · edited · 1 year ago

      How is it any different from someone reading the books, being influenced by them, and writing their own book with that inspiration? Should the author of the original book be paid for sales of the second book?

      • Sentau@lemmy.one · 1 year ago

        Again, that depends on how similar the two books are. If I just change the names of the characters and change the grammatical structure and then try to sell the book as my own work, I am infringing the copyright. If my book has a different story but its themes are influenced by another book, then I don’t believe that is copyright infringement. Exactly where the line between infringement and non-infringement lies is not something I can say, and is a topic for another discussion.

        • uis@lemmy.world · edited · 1 year ago

          change the grammatical structure

          I.e., change the form. Copyright protects form; thus, in countries that judge by either the spirit or the letter of the law instead of by the size of your moneybags, this is OK.

    • Affine Connection@lemmy.world · 1 year ago

      using copyrighted clips in a monetized video can get you a strike against your channel

      Much of the time, the use of very brief clips is clearly fair use, but the people who issue DMCA claims don’t care.

    • ciwolsey@lemmy.world · edited · 1 year ago

      You could run a paid training course using a paid-for book, that doesn’t mean you’re breaking copyright.

    • Schadrach@lemmy.sdf.org · 1 year ago

      I think a lot of people are not getting it. AI/LLMs can train on whatever they want, but when these LLMs are then used for commercial reasons to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour.

      Only in the same way that I could argue that if you’ve ever watched any of the classic Disney animated movies, then anything you ever draw for the rest of your life infringes on Disney’s copyright, and if you draw anything for money, then the Disney animated movies you have seen in your life have been used in a money-making endeavour. This is of course ridiculous, and no one would buy that argument. But when you replace a human doing it with a machine doing essentially the same thing (observing and digesting a bunch of examples of a given kind of work, and producing original works of that general kind to a given description), suddenly it’s different, for some nebulous reason that mostly amounts to creatives, who believed their jobs could not be even partly automated away, trying to get explicit protection from their jobs being partly automated away.

      • Corkyskog@sh.itjust.works · 1 year ago

        They used to be a non-profit that immediately turned into a for-profit once their product was refined. They took a bunch of people’s effort, whether it be training materials or people training the model by using the product, and then slapped a huge price tag on it.

        • Touching_Grass@lemmy.world · 1 year ago

          I didn’t know they were a non-profit. I’m good as long as they keep the current model: release older models free to use while charging for extra or the latest features.

      • BURN@lemmy.world · 1 year ago

        They’re stealing a ridiculous amount of copyrighted works to use to train their model without the consent of the copyright holders.

        This includes the single person operations creating art that’s being used to feed the models that will take their jobs.

        OpenAI should not be allowed to train on copyrighted material without paying a licensing fee at minimum.

        • uzay@infosec.pub · 1 year ago

          Also Sam Altman is a grifter who gives people in need small amounts of monopoly money to get their biometric data

          • LifeInMultipleChoice@lemmy.ml
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            1 year ago

            So hypothetical here. If Dreddit did launch a system that made it so users could trade Karma in for real currency or some alternative, does that mean that all fan fictions and all other fan boy account created material would become copyright infringement as they are now making money off the original works?

        • Stamets@startrek.website
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          1 year ago

          “Stealing”.

          It cannot be theft as the product is publicly available and the original product is still available to other consumers.

          You can not like this and you can argue against it but it isn’t theft. Hasn’t and never will be. The same way piracy isn’t theft.

          People might respect this bizarre corporate protection stance if you use the correct terminology. And yes. You’re defending larger companies here, not individual artists. Copyright was invented for companies and corporations. They have extended copyright for decades to be able to hold on to stuff they believe to be theirs. They suppress creatives to take their work and put a copyright on it themselves.

          The only people you’re protecting with your argument are massive corporations. Have fun with that.

        • Touching_Grass@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          5
          ·
          1 year ago

If they purchased the data, or the data is free, it’s theirs to do what they want with, short of violating copyright by reselling the original work as their own. Training off it should not violate any copyright if the work was available for free or was purchased by at least one person involved. Capitalism should work both ways.

          • BURN@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            2
            ·
            1 year ago

            But they don’t purchase the data. That’s the whole problem.

And copyright is absolutely violated by training off it. It’s being used to make money and no longer falls under even the widest interpretation of fair use.

            • GroggyGuava@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              2
              ·
              edit-2
              1 year ago

You need to expand on how learning from something to make money is somehow using the original material to make money. Considering that’s how art works in general, I’m having a hard time taking the side of “learning from media to make your own is against copyright”. As long as they don’t reproduce the same thing as the original, I don’t see any issues with it. If they learned from Lord of the Rings to then make “The Lord of the Rings”, then yes, that’d be infringement. But if they use that data to make a new IP with original ideas, then how is that bad for the world or artists?

              • BURN@lemmy.world
                link
                fedilink
                English
                arrow-up
                3
                arrow-down
                1
                ·
                1 year ago

Creating an AI model is a commercial work. They’re made to make money. These models are dependent on other artists’ data to train on; the models would be useless if they weren’t able to train on anything.

                I hold the stance that using copyrighted data as part of a training set is a violation of copyright. That still hasn’t been fully challenged in court, so there’s no specific legal definition yet.

Due to the requirement of copyrighted materials to make the model function, I feel that they are using copyrighted works in order to build a commercial product.

Also, AI doesn’t learn. LLMs build statistical models based on the sentence structure of what they’ve seen before. There’s no level of understanding or inherent knowledge, and there’s nothing new being added.
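The “statistical model of text” idea can be made concrete with a toy sketch (a bigram word counter: a real LLM is vastly more sophisticated, but the notion of predicting the statistically most likely continuation of the training text is the same; the corpus and function names here are purely illustrative):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often every other word follows it."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    # Predict the continuation seen most often in the training text.
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ate the fish"
model = train_bigrams(corpus)
print(most_likely_next(model, "the"))  # "cat" follows "the" most often
```

The model only ever echoes frequencies from its training data; with a single source text it can reproduce long runs of it verbatim, which is the crux of the copying argument above.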

              • BURN@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                arrow-down
                1
                ·
                1 year ago

It may be freely available for non-commercial use, e.g. photos on Photobucket, the Internet Archive’s free book archives, etc.

                Most everything is on the internet these days, copyrighted or not. I’m sure if I googled enough I could find the entire text of Harry Potter for free. I still haven’t purchased it, and technically it’s not legally freely available. But in training these models I guarantee they didn’t care where the data came from, just that it was data.

                I’m against piracy as well for the record, but pretty much everything is available through torrenting and pirate sites at this point, copyright be damned.

                • Touching_Grass@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  4
                  ·
                  edit-2
                  1 year ago

Don’t care; that’s not my problem, or these LLMs’ problem, that they don’t secure their copyrighted work. They shouldn’t come asking others to pay for their failure to secure their data. I see it as a double-edged sword.

                  I really hope this is a wake up call to all creative types to pack up and not use the internet like a street corner while they busk.

If they want to come online to contribute like everybody else, just have fun and post stuff, that’s great. But all of them are no different than any other greedy corporation. They all want more toll roads. When they do make it and earn millions and get our attention, they exploit it with more ads. It swallows all the free good content. Sites gear toward these rich creators. They lawyer up and sue everybody and everything that looks or sounds like them. We lose all our good spaces to them.

                  I hope the LLM allows regular people to shit post in peace finally.

  • paraphrand@lemmy.world
    link
    fedilink
    English
    arrow-up
    39
    arrow-down
    12
    ·
    1 year ago

    Why are people defending a massive corporation that admits it is attempting to create something that will give them unparalleled power if they are successful?

    • bamboo@lemm.ee
      link
      fedilink
      English
      arrow-up
      28
      arrow-down
      5
      ·
      1 year ago

Mostly because fuck corporations trying to milk their copyright. I have no particular love for OpenAI (though I do like their product), but I do have great disdain for already-successful corporations that would hold back the progress of humanity because they didn’t get paid (again).

        • bamboo@lemm.ee
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 year ago

          Perhaps, and when that happens I would be equally disdainful towards them.

        • LifeInMultipleChoice@lemmy.ml
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          1 year ago

In the United States there was a judgment made the other day saying that works created solely by AI are not copyrightable. So that would put a speed bump there.
I may have misunderstood what you meant, though.

          • msage@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            Yeah, they might not copyright it, but after it becomes the ‘one true AI’, it will be at the hands of Microsoft, so please do not act friendly towards them.

            It will turn on you just like every private company has.

            (don’t mean specifically you, but everyone generally)

          • uis@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            2
            ·
            1 year ago

Huh. Doesn’t this mean that, technically, AI cannot commit copyright infringement?

            • LifeInMultipleChoice@lemmy.ml
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 year ago

              Nah, it would mean that you cannot copyright a work created by an AI, such as a piece of art.

              E.g. if you tell it to draw you a donkey carting avocados, the picture can be used by anyone from what I understand.

              • uis@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 year ago

                you cannot copyright a work created by an AI, such as a piece of art.

That’s what I said. Copyright infringement is when there is another copyrightable object that is a copy of the first object. AI is not within the copyright area: you can’t copyright it, but you also can’t be sued for copyright infringement.

                if you tell it to draw you a donkey carting avocados, the picture can be used by anyone from what I understand.

                Yes. Same for Public Domain, but PD is another status. PD applies only to copyrightable work.

        • uis@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          1 year ago

It’s like the argument “but new politicians will steal more” that I hear in Russia from people who defend Putin.

          • msage@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            It’s literally not, wtf.

Do not let any private entity get an overwhelming majority on anything, period.

But do not kid yourself that Microsoft will let OpenAI do anything for the public once it gets big enough.

            OpenAI is open only in name after they rolled back all the promises of being for everyone.

            • uis@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              edit-2
              1 year ago

That’s my entire point. It’s not who, but how long.

Also, Microsoft plays both sides here. “OpenAI vs copyright” is the wrong question. There’s more to it: both sides are the status quo. Both are for keeping corporate ownership of ideas.

      • assassin_aragorn@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        3
        ·
        1 year ago

        There’s a massive difference though between corporations milking copyright and authors/musicians/artists wanting their copyright respected. All I see here is a corporation milking copyrighted works by creative individuals.

    • Whimsical@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      1 year ago

The dream would be that they manage to make their own glorious free & open-source version, so that after a brief spike in corporate profit as they fire all their writers and artists, suddenly nobody needs those corps anymore, because EVERYONE gets access to the same tools. If everyone has the ability to churn out massive amounts of content without hiring anyone, that theoretically favors those who never had the capital to hire people to begin with, far more than those who did the hiring.

Of course, this stance doesn’t really have an answer for any of the other problems involved in the tech, not the least of which is that there are bigger issues at play than just “content”.

      • otherbastard@lemm.ee
        link
        fedilink
        English
        arrow-up
        20
        arrow-down
        10
        ·
        1 year ago

        An LLM is not a person, it is a product. It doesn’t matter that it “learns” like a human - at the end of the day, it is a product created by a corporation that used other people’s work, with the capacity to disrupt the market that those folks’ work competes in.

        • Touching_Grass@lemmy.world
          link
          fedilink
          English
          arrow-up
          12
          arrow-down
          6
          ·
          edit-2
          1 year ago

And it should be able to freely use anything that’s available to it. These massive corporations and entities have exploited all the free spaces to advertise and sell us their own products, and now they’re sour.

If they had their way, they would lock up much more of the net behind paywalls. Everybody should side with the LLMs on this.

          • assassin_aragorn@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            3
            ·
            1 year ago

            Except the massive corporations and entities are the ones getting rich on this. They’re seeking to exploit the work of authors and musicians and artists.

            Respecting the intellectual property of creative workers is the anti corporate position here.

            • uis@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 year ago

Except corporations have infinitely more resources (money, lawyers) than the people who create. Take Jarek Duda (a mathematician from Poland) and Microsoft as an example. He created a new compression algorithm, and Microsoft came along a few years later and patented it, in Britain AFAIK. To contest the patent and file prior art, he needs £100k.

              • assassin_aragorn@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 year ago

                I think there’s an important distinction to make here between patents and copyright. Patents are the issue with corporations, and I couldn’t care less if AI consumed all that.

                • uis@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  1 year ago

And for copyright there is no possible way to contest it. Also, when copyright expires, there is no guarantee the work will be accessible to humanity. Patents are bad; copyright is even worse.

            • uis@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              1
              ·
              1 year ago

There is nothing anti-corporate about it if the result can be alienated.

          • Cosmic Cleric@lemmy.world
            link
            fedilink
            English
            arrow-up
            6
            arrow-down
            5
            ·
            1 year ago

If they had their way, they would lock up much more of the net behind paywalls.

            This!

            When the Internet was first a thing corpos tried to put everything behind paywalls, and we pushed back and won.

            Now, the next generation is advocating to put everything behind a paywall again?

            • Touching_Grass@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              2
              ·
              1 year ago

It’s always weird to me how the old values from the early internet days sort of vanished. Is it by design that there aren’t any more Richard Stallmans, or is it the natural progression of an internet that was taken over?

              • Cosmic Cleric@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                arrow-down
                1
                ·
                1 year ago

                Not to inject politics into this, but the Internet started off way more socialist than it is today.

Capitalism is creeping in and taking over slowly. And it’s being done in a slow boiling-the-frog sort of way.

          • otherbastard@lemm.ee
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            3
            ·
            1 year ago

            You are somehow conflating “massive corporation” with “independent creator,” while also not recognizing that successful LLM implementations are and will be run by massive corporations, and eventually plagued with ads and paywalls.

People who make things should be allowed payment for their time and the value they provide their customers.

            • Touching_Grass@lemmy.world
              link
              fedilink
              English
              arrow-up
              5
              arrow-down
              3
              ·
              edit-2
              1 year ago

People are paid. But they’re greedy and expect far more compensation than they deserve. In this case they should not be compensated for having an LLM ingest their work if that work was legally owned or obtained.

          • scarabic@lemmy.world
            link
            fedilink
            English
            arrow-up
            11
            arrow-down
            7
            ·
            1 year ago

            First, we don’t have to make AI.

            Second, it’s not about it being unable to learn, it’s about the fact that they aren’t paying the people who are teaching it.

              • FatCrab@lemmy.one
                link
                fedilink
                English
                arrow-up
                6
                arrow-down
                3
                ·
                1 year ago

The reasoning that claims training a generative model is IP infringement would also mean that a robot going into a library, with a library card, to optically read all the books there and create the same generative model would be infringing IP.

              • AncientMariner@lemmy.world
                link
                fedilink
                English
                arrow-up
                3
                arrow-down
                1
                ·
                1 year ago

Humans can judge information, make decisions on it, and adapt it. AI mostly just looks at what is statistically most likely based on training data. If only one piece of data exists, it will copy, not paraphrase. An example was from Copilot, I think, where it printed out the code and comments from an old game verbatim (Quake 2, I believe). It isn’t intelligence; it is statistical copying.

                • uis@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  2
                  ·
                  1 year ago

                  Well, mathematics cannot be copyrighted. In most countries at least.

              • assassin_aragorn@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                arrow-down
                4
                ·
                1 year ago

                because it might hurt authors and musicians and artists and other creative workers

                FTFY. Corporations shouldn’t be making a fucking dime from any of these works without fairly paying the creators.

    • Crozekiel@lemmy.zip
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      9
      ·
      1 year ago

      AI is the new fan boy following since it became official that nfts are all fucking scams. They need a new technological God to push to feel superior to everyone else…

    • SCB@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      3
      ·
      1 year ago

      Leftists hating on AI while dreaming of post-scarcity will never not be funny

  • Uriel238 [all pronouns]@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    30
    arrow-down
    8
    ·
    edit-2
    1 year ago

    Training AI on copyrighted material is no more illegal or unethical than training human beings on copyrighted material (from library books or borrowed books, nonetheless!). And trying to challenge the veracity of generative AI systems on the notion that it was trained on copyrighted material only raises the specter that IP law has lost its validity as a public good.

    The only valid concern about generative AI is that it could displace human workers (or swap out skilled jobs for menial ones) which is a problem because our society recognizes the value of human beings only in their capacity to provide a compensation-worthy service to people with money.

    The problem is this is a shitty, unethical way to determine who gets to survive and who doesn’t. All the current controversy about generative AI does is kick this can down the road a bit. But we’re going to have to address soon that our monied elites will be glad to dispose of the rest of us as soon as they can.

    Also, amateur creators are as good as professionals, given the same resources. Maybe we should look at creating content by other means than for-profit companies.

    • Draedron@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      2
      ·
      1 year ago

Also, this argument about replacing human workers has been made with every single industrial revolution.

        • Draedron@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          1 year ago

The point is that fighting back against it is stupid. People still have work. New technology opens up new ways to work, with new jobs.

  • RadialMonster@lemmy.world
    link
    fedilink
    English
    arrow-up
    24
    arrow-down
    3
    ·
    1 year ago

What if they scraped a whole lot of the internet, and those excerpts were in random blogs and posts and quotes and memes all over the place? They didn’t ingest the material directly, or knowingly.

    • beetus@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      3
      ·
      1 year ago

      Not knowing something is a crime doesn’t stop you from being prosecuted for committing it.

It doesn’t matter if someone else is sharing copyrighted works and you don’t know it and use them in ways that infringe on that copyright.

      “I didn’t know that was copyrighted” is not a valid defence.

      • stewsters@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        3
        ·
        1 year ago

        Is reading a passage from a book actually a crime though?

Sure, you could try to regenerate the full text from quotes you read online, much like you could open a lot of video reviews and recreate large portions of the original text. But you would not blame the video editing program for that; you would blame the person who did it and decided to post it online.

    • chemical_cutthroat@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      8
      ·
      1 year ago

That’s why this whole argument is worthless, and why I think that, at its core, it is disingenuous. I would be willing to bet a steak dinner that a lot of these lawsuits are just fishing for money, and the rest are set up by competitors trying to slow the market down because they are lagging behind. AI is an arms race, and it’s growing so fast that if you got in too late, you are just out of luck. So, companies that want in are trying to slow down the leaders at best, and at worst they are trying to make them publish their training material so they can just copy it. AI training models should be considered IP, and should be protected as such. It’s like trying to get the Colonel’s secret recipe by saying that all the spices that were used have been used in other recipes before, so it should be fair game.

      • Kujo@lemm.ee
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        1
        ·
        1 year ago

If training models are considered IP, then shouldn’t we allow other training models to view and learn from the competition? If learning from other IPs that are copyrighted is okay, why should the training models be treated differently?

        • chemical_cutthroat@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          3
          ·
          1 year ago

They are allegedly learning from copyrighted material; there is no actual proof that they have been trained on the full works rather than just snippets that have been published online. And it would be illegal for them to be trained on full copyrighted materials, because laws protect against that.

  • ClamDrinker@lemmy.world
    link
    fedilink
    English
    arrow-up
    22
    arrow-down
    1
    ·
    edit-2
    1 year ago

    This is just OpenAI covering their ass by attempting to block the most egregious and obvious outputs in legal gray areas, something they’ve been doing for a while, hence why their AI models are known to be massively censored. I wouldn’t call that ‘hiding’. It’s kind of hard to hide it was trained on copyrighted material, since that’s common knowledge, really.

  • Default_Defect
    link
    fedilink
    English
    arrow-up
    24
    arrow-down
    4
    ·
    1 year ago

They made it read Harry Potter? No wonder it’s gonna kill us all one day.

  • Thorny_Thicket@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    25
    arrow-down
    6
    ·
    1 year ago

I don’t get why this is an issue. Assuming they purchased a legal copy of whatever it was trained on, what’s the problem? Like, really. What does it matter that it knows a certain book from cover to cover, or is able to imitate art styles, etc.? That’s exactly what people do too. We’re just not quite as good at it.

    • Hildegarde@lemmy.world
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      12
      ·
      1 year ago

A copyright holder has the right to control who may create derivative works based on their copyrighted work. If you want to take someone’s copyrighted work and use it to create something else, you need permission from the copyright holder.

      The one major exception is Fair Use. It is unlikely that AI training is a fair use. However this point has not been adjudicated in a court as far as I am aware.

      • FatCat@lemmy.world
        link
        fedilink
        English
        arrow-up
        27
        arrow-down
        8
        ·
        1 year ago

        It is not a derivative it is transformative work. Just like human artists “synthesise” art they see around them and make new art, so do LLMs.

        • BURN@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          1 year ago

LLMs don’t create anything new. They have limited access to what they can be based on, and all assumptions they make come from that data. They do not learn new things or present new ideas, only ideas that have already been done and are present in their training.

        • Hildegarde@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          6
          ·
          1 year ago

          Transformative works are not a thing.

          If you copy the copyrightable elements of another work, you have created a derivative work. That work needs to be transformative in order to be eligible for its own copyright, but being transformative alone is not enough to make it non-infringing.

          There are four fair use factors. Transformativeness is only considered by one of them. That is not enough to make a fair use.

          • Cosmic Cleric@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            1 year ago

            Transformativeness is only considered by one of them. That is not enough to make a fair use.

            Somebody better let YouTube content creators know that. /s

      • LordShrek@lemmy.world
        link
        fedilink
        English
        arrow-up
        13
        arrow-down
        5
        ·
        1 year ago

        this is so fucking stupid though. almost everyone reads books and/or watches movies, and their speech is developed from that. the way we speak is modeled after characters and dialogue in books. the way we think is often from books. do we track down what percentage of each sentence comes from what book every time we think or talk?

        • SpiderShoeCult@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          2
          ·
          1 year ago

          Aye, but I’m thinking the whole notion of copyright is banking on the fact that human beings are inherently lazy and not everyone will start churning out books in the same universe or style. And if they do, it takes quite some time to get the finished product and they just get sued for it. It’s easy, because there’s a single target.

          So there’s an extra deterrent to people writing and publishing a new harry potter novel, unaffiliated with the current owner of the copyright. Invest all that time and resources just to be sued? Nah…

The issue with generating stuff with ’puters is that you invest way less time, so the same issue pops up for the copyright owner, but they’re just DDoS-ed on their possible attack routes. Will they really sue thousands or hundreds of thousands of internet randos generating Harry Potter erotica using an LLM? Would you even know who they are? People can hide money away in Switzerland from entire governments; I’m sure there are ways to hide your identity from a book publisher.

          It was never about the content, it’s about the opportunities the technology provides to halt the gears of the system that works to enforce questionable laws. So they’re nipping it in the bud.

          • LordShrek@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            3
            ·
            1 year ago

            this brings up the question: what is a book? what is art? if an “AI” can now churn out the next harry potter sequel and people literally can’t tell that it’s not written by JK Rowling, then what does that mean for what people value in stories? what is a story? is this a sign that we humans should figure something new out, instead of reacting according to an outdated protocol?

            yes, authors made money in the past before AI. now that we have AI and most people can get satisfied by a book written by AI, what will differentiate human authors from AI? will it become a niche thing, where some people can tell the difference and they prefer human authors? or will there be some small number of exceptional authors who can produce something that is obviously different from AI?

            i see this as an opportunity for artists to compete with AI, rather than say “hey! no fair! he can think and write faster than me!”

            • SpiderShoeCult@sopuli.xyz
              link
              fedilink
              English
              arrow-up
              3
              ·
              1 year ago

              Well, poor literature has always existed, which some might not even dignify with the name literature. Are writers of such things threatened by LLMs? Of course they are. Every new technology has brought with it the fear of upending somebody’s world. And to some extent, every new technology has indeed done just that.

              Personally, and… this will probably be highly unpopular, I honestly don’t care who or what created a piece of art. Is it pretty? Does it satisfy my need for just the right amount of weird, funny and disturbing to stir emotions or make me go ‘heh, interesting!’? Then it really doesn’t matter where it comes from. We put way too much emphasis on the pedigree of art and not on the content. Hell, one very nice short story I read was the greentext one about humans being AI and escaping from the simulation. Wonder how many would scoff at calling art something that came out of 4chan?

              Maybe this is the issue? Art is thought of as a purely human endeavour (also birds do it, and that one pufferfish that draws on the seabed, but they’re “dumb” animals so they don’t count, right? hell, there’s even a jumping spider that does some pretty rad dances). And if code in a machine can do it just as well (can it? let it - we’ll be all the better for it. can’t it? let it be then - no issue) then what would be the significance of being human?

  • afraid_of_zombies@lemmy.world
    link
    fedilink
    English
    arrow-up
    17
    ·
    1 year ago

    I am sure they have patched it by now, but at one point I was able to get ChatGPT to give me copyrighted text from books by asking for ever larger quotations. It seemed more willing to do this with books that were out of print.

    • stewsters@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      Yeah, it refuses to give you the first sentence from Harry Potter now.

      Which is kinda lame; you can find that on thousands of webpages, many of which the system indexed.

      If someone was looking to pirate the book there are way easier ways than issuing thousands of queries to ChatGPT. Type “Harry Potter torrent” into Google and you will have them all in 30 seconds.

      • BURN@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        ChatGPT has a ton of extra query qualifiers added behind the scenes to ensure that specific outputs can’t happen