Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.

  • jecxjo · 1 year ago

    I think it’s very relevant because those laws were created at a time when there was no machine-generated material. The law assumes that one human being creates material and another human being steals it. Nowhere do these laws dictate rules for creating a non-human third party that does the actual copying. Specific rules were added for things like photocopy machines and faxes, where the attempt is to create an exact facsimile. But ChatGPT isn’t doing what a photocopier does.

    The current lawsuits, at least the ones I’ve read, have not been explicitly about outputting copyrighted material. While ChatGPT could output the material just as I could recite a poem, the issue being raised is that the training materials were copyrighted and that the AI system then “contains” said material. That is why I asked my initial question. My brain could contain your poem, and as long as I don’t write it down as my own, what violation is occurring? OpenAI could go to the library, borrow every book and scan them in, and all would be OK, right? At least according to the recent lawsuits.

    • FunctionFn · 1 year ago

      The current laws (at least in the US) do cover work that isn’t created by a human. It’s well-trodden legal ground. The highest-profile case was a monkey taking a photograph: https://en.m.wikipedia.org/wiki/Monkey_selfie_copyright_dispute

      Non-human third parties cannot hold copyright. They are not afforded protections by copyright. They cannot claim fair use of copyrighted material.

      • jecxjo · 1 year ago

        I meant in the opposite direction. If I teach an elephant to paint and then show him a Picasso, and he paints something like it, am I the one violating copyright law? I think there are currently no explicit laws about this type of situation, but if there were a case to be made, MY intent would be the major factor.

        The third-party copying we see laws around involves human-driven intent to make exact replicas: photocopy machines, cassette/VHS/DVD duplication software and hardware, faxes, etc. We have personal, private fair-use allowances, but all of this is about humans using tools to make near-exact replicas.

        The law needs to catch up to the concept of a human creating something that then goes out and makes non-replica output, triggered by someone other than the tool’s creator. I see at least three parties in this whole process:

        • AI developer creating the system
        • AI teacher feeding it learning data
        • AI consumer creating the prompt

        If the data fed to the AI was all gathered by legal means, let’s say scanned library books, who is in violation if the content it outputs were to violate copyright law?

        • FunctionFn · 1 year ago

          These are questions that, again, are well-trodden in the copyright space. ChatGPT in this case acts more like a platform than a tool, because it hosts and can reproduce material that it is given. Again, a US-only perspective, and the perspective of a non-lawyer: the DMCA outlines requirements platforms must meet to be protected from being sued for hosting and reproducing copyrighted works. But part of the problem is that the owners of the platform are the parties uploading the copyrighted works, via training the LLM. That automatically disqualifies the platform from any sort of safe-harbor protections, and so the owners of the ChatGPT platform would be in violation.