• Mahlzeit@feddit.de · 1 year ago · +5 / −12

    They almost certainly had, as it was downloaded from the net. Some stuff gets published accidentally or illegally, but that’s hardly something they can be expected to detect or police.

    • MoogleMaestro@kbin.social · 1 year ago · +16 / −2

      They almost certainly had, as it was downloaded from the net.

      That’s not how it works. That’s not how anything works.

    • merc@sh.itjust.works · 1 year ago · +6 / −3

      Unless you’re arguing that any use of data from the Internet counts as “fair use” and therefore is excepted under copyright law, what you’re saying makes no sense.

      There may be an argument that some of the ways ChatGPT uses data could count as fair use. OTOH, when it’s spitting out its training material 1:1, that makes it pretty clear it’s copyright infringement.

      • Mahlzeit@feddit.de · 1 year ago · +3 / −3

        In reality, what you’re saying makes no sense.

        Making something available on the internet means giving permission to download it. Exceptions may apply if it happens accidentally or if the uploader does not have the necessary permissions. If users had to verify all of that themselves, they’d basically have to get written permission by post before visiting any page.

        Fair use is a defense against copyright infringement under US law. Using the web is rarely fair use because there is no copyright infringement. When training data is regurgitated, that is mostly fair use. If the data is public domain/out of copyright, then it is not, since there is no infringement to begin with.

          • Mahlzeit@feddit.de · 1 year ago · +1

            Oh. I see. The attempts to extract training data from ChatGPT may be criminal under the CFAA. Not a happy thought.

            I did say “making available” to exclude “hacking”.

            • JackbyDev@programming.dev · 1 year ago · +1

              The point I’m illustrating is that the law can call plenty of things “hacking” that reasonable people would assume are fine.

        • PugJesus@lemmy.world · 1 year ago · +4 / −1

          Making something available on the internet means giving permission to download it.

          Literally and explicitly untrue.

          • Mahlzeit@feddit.de · 1 year ago · +2 / −2

            Sure, you can put something up and explicitly deny permission to visit the link. But courts rarely back up that kind of silliness.

        • merc@sh.itjust.works · 1 year ago · +2 / −1

          Making something available on the internet means giving permission to download it.

          No permission is given to download it. In particular, no permission is given to copy it.

          Fair use is a defense against copyright infringement under US law

          Yes, but it’s often unclear what constitutes fair use.

          Using the web is rarely fair use because there is no copyright infringement

          What are you even talking about.

          When training data is regurgitated, that is mostly fair use

          You have no idea what fair use is, just admit it.

    • MNByChoice · 1 year ago · +8 / −6

      that’s hardly something they can be expected to detect or police.

      Why not?

      I couldn’t, but I also do not have an “awesomely powerful AI on the verge of destroying humanity”. Seems it would be simple for them. I mean, if I had such a thing, I would be expected to use it to solve such simple problems.

      • WldFyre@lemm.ee · 1 year ago · +11

        but I also do not have an “awesomely powerful AI on the verge of destroying humanity”

        Neither do they lol