Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.

  • jecxjo · 1 year ago

    The only question I have for content creators of any kind who are worried about AI… do you go after every human who consumed your content when they create anything remotely connected to your work?

    I feel like we have a bias towards humans: unless someone is actively trying to steal your idea or concepts, we ignore the fact that your content is distilled into some neurons in their brain and becomes part of what they create from that point forward. Would someone with an eidetic memory be forbidden from consuming your work, since they could internally reference your material when creating their own?

    • assassin_aragorn@lemmy.world · 1 year ago

      Look at it this way: if an AI is developed by a private company, its purpose is to make money. It’s consuming material for that sole purpose. That isn’t the case with humans. Humans read for pleasure and for the sake of the information itself. If an AI reads the same concept but with different wording, it generates different content. If a human reads the same concept but with different wording, it makes no difference.

      Now, if these companies release their AI for free use, then that’s different.

      • jecxjo · 1 year ago

        Hmm, so we should define what is acceptable based on having emotions? There could be people who read purely to steal and abuse others’ work and do not enjoy the content.

        I’d disagree with your claim that different inputs for humans wouldn’t generate different outputs. Two people can read the same thing and get different outputs. Heck, I’ve read a book a second time and come away with a different understanding.

        I get what you’re saying, but what is going to happen is that laws will be written by people a lot dumber than us. Not that we are looking to make general AI, but a lot of the arguments currently being made basically state that general AI could never be legal, and the only justification I’ve seen is that it’s “not a human.”

    • FunctionFn · 1 year ago

      When a human creates something “connected” to another work, the new work is transformative. Copyright law places some value on human creativity modifying a work in a way that transforms it into something new.

      Depending on your point of view, it’s possible to argue that machine learning lacks the capacity for transformative work. It is all derivative of its source material, and therefore is infringing on that source material’s copyright. This is especially true when learning models like ChatGPT reproduce their training material whole-cloth like is mentioned elsewhere in the thread.

      • jecxjo · 1 year ago

        I’d argue that all human work is derivative as well–not from the legal stance of copyright law, but from a fundamental stance of how our brains work. The only difference is that humans have source material beyond that which was created: you have seen an apple on a tree before; not all of your apple experiences are pictures someone drew, photos someone took, or poems someone wrote. At what point would you consider personal experience enough to qualify someone as able to generate transformative work? If I were to put a camera on my head, record my life, and donate the footage to the public domain, would that be enough data for an AI to be considered able to create transformative works? Or must the AI have genuine personal experiences?

        Our brains can do some level of randomness, but their current state is based on their previous state and the inputs they received. I wonder, when trying to come up with something unique, what portion of our brains dives into memories versus pure noise generation. That’s easily done on a computer.

        As for whole-cloth reproduction… I memorized many poems in school. Does that mean I can never generate something unique?

        Don’t get me wrong: they used stolen material, and that’s wrong. But had it been legally obtained, I’d see less of an issue.

        • FunctionFn · 1 year ago

          But derivative and transformative are legal terms with legal meanings. Arguing how you feel the word derivative applies to our brain chemistry is entirely irrelevant.

          You’ve memorized poems, and (assuming the poem is not in the public domain) if you reproduce that poem in a collection of poems without any license from the copyright owner, you’ve infringed on that copyright. It is not any different when ChatGPT reproduces a poem in its output.

          • jecxjo · 1 year ago

            I think it’s very relevant, because those laws were created at a time when there was no machine-generated material. The law assumes one human being is creating material and another human being is stealing it. Nowhere do these laws dictate rules about creating a non-human third party that does the actual copying. Specific rules were added for things like photocopiers and faxes, where attempts are made to create exact facsimiles. But ChatGPT isn’t doing what a photocopier does.

            The current lawsuits, at least the ones I’ve read over, have not been explicitly about outputting copyrighted material. While ChatGPT could output the material just as I could recite a poem, the issue being raised is that the training materials were copyrighted and that the AI system then “contains” said material. That is why I asked my initial question. My brain could contain your poem, and as long as I don’t write it down as my own, what violation is occurring? OpenAI could go to the library, borrow every book and scan them in, and all would be OK, right? At least by the logic of the recent lawsuits.

            • FunctionFn · 1 year ago

              The current laws (at least in the US) do cover work that isn’t created by a human. It’s well-tread legal ground; the highest-profile case was a monkey taking a photograph: https://en.m.wikipedia.org/wiki/Monkey_selfie_copyright_dispute

              Non-human third parties cannot hold copyright. They are not afforded protections by copyright. They cannot claim fair use of copyrighted material.

              • jecxjo · 1 year ago

                I meant in the opposite direction. If I teach an elephant to paint, then show him a Picasso and he paints something like it, am I the one violating copyright law? I think there are currently no explicit laws about this type of situation, but if there were a case to be made, MY intent would be the major factor.

                The third-party copying we have laws around involves human-driven intent to make exact replicas: photocopy machines, cassette/VHS/DVD duplication software and hardware, faxes, etc. We have personal private fair-use laws, but all of this is about humans using tools to make near-exact replicas.

                The law needs to catch up to the concept of a human creating something that then goes out and makes non-replica output, triggered by someone other than the tool’s creator. I see at least three parties in this whole process:

                • AI developer creating the system
                • AI teacher feeding it learning data
                • AI consumer creating the prompt

                If the data fed to the AI was all gathered by legal means, let’s say scanned library books, who is in violation if the content output were to violate copyright laws?

                • FunctionFn · 1 year ago

                  These are questions that, again, are tread pretty well in the copyright space. ChatGPT in this case acts more like a platform than a tool, because it hosts and can reproduce material that it is given. Again, from a US-only, non-lawyer perspective: the DMCA outlines requirements for platforms to be protected from being sued for hosting and reproducing copyrighted works. But part of the problem is that the owners of the platform are the parties uploading the copyrighted works, via training the LLM. That automatically disqualifies a platform from any sort of safe-harbor protections, and so the owners of the ChatGPT platform would be in violation.

    • Eccitaze@yiffit.net · 1 year ago

      The problem with AI as it currently stands is that it has no actual comprehension of the prompt or ability to make leaps of logic, nor does it have the ability to extend and build upon existing work to legitimately transform it, except by using other works already fed into its model. All it can do is blend a bunch of shit together to make something that meets a set of criteria. There’s little fundamental difference between what ChatGPT does and what procedurally generated games like most roguelikes do–the only real difference is that ChatGPT uses a prompt while a roguelike uses an RNG seed. In both cases, though, the resulting product is limited solely to the assets available to it, and if I made a roguelike that used assets ripped straight from Mario, Zelda, Mass Effect, Crash Bandicoot, Resident Evil, and Undertale, I’d be slapped with a cease and desist fast enough to make my head spin.
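
      To make the seed-versus-prompt point concrete, here’s a minimal toy sketch (mine, not anyone’s production code–the TILES pool and generate_room are hypothetical names for illustration) of a seeded “roguelike” room generator, using only Python’s standard library:

      ```python
      import random

      TILES = ["wall", "floor", "chest", "monster"]  # toy asset pool

      def generate_room(seed: int, width: int = 8, height: int = 4) -> list:
          """Deterministically generate a room layout from an RNG seed.

          The output is limited to whatever is in TILES: the generator can
          recombine its assets, but never produce a tile it wasn't given.
          """
          rng = random.Random(seed)  # a roguelike's "prompt" is its seed
          return [[rng.choice(TILES) for _ in range(width)] for _ in range(height)]

      # Same seed, same dungeon: all the variety comes from the inputs.
      assert generate_room(42) == generate_room(42)
      ```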

      The fact that OpenAI stole content from everybody in order to make its model doesn’t make it less infringing.

      • ClamDrinker@lemmy.world · 1 year ago

        That’s incorrect. Sure, it has no comprehension of what the words it generates actually mean, but it does understand the patterns that can be found in the words. Ask an AI to talk like a pirate, and suddenly it knows how to transform words to sound pirate-like. It can also combine data from different texts about similar topics to generate new responses that never existed in the first place.

        Your analogy is a little flawed too: if you mixed all the elements in a transformative way and didn’t re-use any materials as-is, even if you called it Mazefecootviltale, then as long as the original material were transformed sufficiently, you haven’t infringed on anything. LLMs don’t get trained to recreate existing works (which would make them only capable of producing infringing works), but to predict the best next word (or even parts of a word) based on the input information. It’s definitely possible to guide an AI towards specific source materials using keywords that only exist in potentially infringing source material, but in general its output is so generalized that it’s inherently transformative.
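
        As a deliberately tiny illustration of that “predict the next word” objective (my own sketch with a made-up corpus; real LLMs use neural networks over sub-word tokens, not bigram counts), note that what gets stored is co-occurrence statistics, not the text itself:

        ```python
        from collections import Counter, defaultdict

        def train_bigram(corpus: str) -> defaultdict:
            """The crudest 'predict the next word' model: count which word
            follows which. It keeps statistics, not a copy of the corpus."""
            counts = defaultdict(Counter)
            words = corpus.split()
            for prev, nxt in zip(words, words[1:]):
                counts[prev][nxt] += 1
            return counts

        model = train_bigram("the cat sat on the mat the cat ran")
        # Most probable continuation of "the", given the statistics so far:
        print(model["the"].most_common(1))  # [('cat', 2)]
        ```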

        • Eccitaze@yiffit.net · 1 year ago

          Again, that’s not comprehension, that’s mixing in yet more data that was put into the model. If you ask an AI to do something that is outside of the dataset it was trained on, it will massively miss the mark. At best, it will produce something that is close to what you asked, but not quite right. It’s why an AI model that could beat the world’s best Go players was beaten by a simple strategy that even amateur Go players could catch and defeat–the AI never came across that strategy while it was training against itself, so it had no idea what was going on.

          And fair use isn’t the bulletproof defense you think it is. Countless fan games have been shut down over the decades, most of them far more transformative than my hypothetical example, such as AM2R. You bet your ass that if I tried to profit off of that hypothetical crossover roguelike, using sprites, models, and textures directly ripped from their respective games, it would be shut down immediately.

          EDIT: I also want to address the assertion that AI isn’t trained to recreate existing works; in my view, that’s wholly irrelevant. If I made a program that took all the Harry Potter books, ran each word through a thesaurus, and sold it for profit, that would still be infringing, even if no meaningful words were identical to the original source material. Granted, if I curated the output and made a few of the more humorous excerpts available for free through a Mastodon or Lemmy post, that would likely qualify as fair use. However, that would be because a human mind is parsing the output and filtering out the 99% of meaningless gibberish that a thesaurus-ized Harry Potter would result in.

          The only human input to an AI that consented to being part of its output is the minuscule prompt given to it by the human, which falls below the de minimis threshold of creative effort required for copyright protection under law. The rest of the input–the countless terabytes of data scraped from the internet and fed into the AI’s training model–was all taken without the authors’ consent, and their contribution vastly outweighs that of the prompt author and OpenAI’s own transformative efforts via the LLM.

          • ClamDrinker@lemmy.world · 1 year ago

            You seem to misunderstand what an LLM does. It doesn’t generate “right” text; it generates “probable” text. There’s no right or wrong, since it only generates a single word ahead of where it currently is. Hence why it can generate information that’s complete bullshit. I don’t know the details of this Go AI you’re talking about, but it’s pretty safe to say it’s not an LLM and doesn’t use a similar technique, as Go is a game and not a creative work. There are many techniques for creating algorithms that fall under the “AI” umbrella.
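
            A toy illustration of “probable” over “right” (the distribution below is invented for illustration, not taken from any real model): if the training text mentions a wrong answer more often than the right one, sampling will confidently produce it.

            ```python
            import random

            # Hypothetical next-word weights after "The capital of Australia is":
            # they reflect how often words co-occur in text, not what is true.
            next_word_probs = {"Sydney": 0.55, "Canberra": 0.35, "Melbourne": 0.10}

            def sample_next(probs: dict, rng: random.Random) -> str:
                """Pick the next word by probability alone; there is no notion
                of 'right' here, so the factually correct answer often loses."""
                words, weights = zip(*probs.items())
                return rng.choices(words, weights=weights, k=1)[0]

            rng = random.Random(0)
            print([sample_next(next_word_probs, rng) for _ in range(5)])
            ```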

            Your second point is a whole different topic. I was referring to a “derivative work”, which is not the same as “fair use”. Derivative works are quite literally everywhere: https://en.wikipedia.org/wiki/Derivative_work A derivative work doesn’t require fair use, as it no longer falls under the same copyright as the original, while fair use is an exception under which copyrighted work can be used without infringing.

            And also, those projects most of the time do not get shut down because they are actually illegal; they get shut down because companies with tons of money can send threatening letters all day and have a team of high-quality lawyers to send them. A cease and desist isn’t legal enforcement from a judge; it’s a “recommendation for us not to (attempt to) sue you”. And that works on most small projects. It very, very rarely goes to court over these things. And sometimes the takedown is totally warranted: especially for fan projects, it’s extremely hard to completely scrub all protected copyrightable work, since they are specifically made to at least imitate or expand upon what they’re a fan project of.

            EDIT: Minor clarification

            • ClamDrinker@lemmy.world · 1 year ago

              Also, it should be mentioned that pretty much all games are in some form derivative works. Let’s take Undertale, since I’m most familiar with it. It’s well known that Undertale takes a lot of elements from other games: RPG mechanics from Mother and Earthbound, bullet-hell mechanics from games like Touhou Project, and more from games like Yume Nikki, Moon: Remix RPG Adventure, and Cave Story. And funnily enough, the creator has even cited Mario & Luigi as a potential inspiration.

              So why was it allowed to exist without being struck down? Because it fits the definition of a derivative work to the letter. You can find individual elements taken almost directly from other games, but it doesn’t try to be the same as the works it was created after.

              • Eccitaze@yiffit.net · 1 year ago

                Undertale was allowed to exist because none of the elements it took inspiration from were eligible for copyright protection. Everything that could have qualified for copyright protection–the dialogue, plot, graphical assets, music, source code–was either produced directly by Toby Fox and Temmie Chang, or used under permissive licenses that allowed reproduction (e.g. the GameMaker Studio engine). Meanwhile, the vast majority of the content OpenAI used to feed its AI models was not produced by OpenAI directly, nor was it obtained under a permissive license.

                So… thanks for proving my point?

                • tomulus@lemmy.world · 1 year ago

                  Meanwhile, the vast majority of the content OpenAI used to feed its AI models was not produced by OpenAI directly, nor was it obtained under a permissive license.

                  That’s input, not output, so it’s not relevant to copyright law. If your arguments focused on the times that ChatGPT reproduced copyrighted works, then we could talk about some kind of ContentID system for preventing that before it happens, or compensating the creators if it does. I think we can all acknowledge that it feels iffy that these models are trained on copyrighted works, but this is a brand-new technology. There’s almost certainly a win-win outcome here.

                • ClamDrinker@lemmy.world · 1 year ago

                  The AI models (not specifically OpenAI’s models) do not contain the original material they were trained on. Just like the creators of Undertale consumed the games they were inspired by, and learned from them, the AI learned from the material it was trained on and learned how to make similar yet distinctly different output. You do not need a permissive license to learn from something once it has been publicized.

                  You can’t just put your artwork up on a wall, allow every person to look at it, and then demand that they not learn from it because you have a license that says learning from it is not allowed–that’s insane, and hence why (as far as I know) no legal system acknowledges it as a legal defense.

            • Eccitaze@yiffit.net · 1 year ago

              “Right” and “probable” text are a distinction without a difference. The simple fact is that an AI is incapable of handling anything outside its training dataset. If you ask an AI to talk like a pirate, and it hasn’t had any pirate speak fed to it by a human via its training dataset, it will fail utterly. If I ask an AI to produce a Powershell script, and it hasn’t had code fed to it by a human via its training dataset, it will fail utterly. An AI cannot proactively buy a copy of Learn Powershell In a Month of Lunches and teach itself how to use Powershell. That fundamental shortcoming–the inability to self-improve, to proactively teach itself and apply that new knowledge to existing concepts–is a crucial, necessary element of the transformative effort required to produce a derivative work (or fair use).

              When that happens, maybe I’ll buy that AI is anything more than the single biggest copyright infringement scheme the world has ever seen. Until then, though, I will wholeheartedly support the efforts of creative minds to defend their intellectual property rights against this act of blatant theft by tech companies profiting off their work.

              • ClamDrinker@lemmy.world · 1 year ago

                You realize LLMs are deliberately designed not to self-improve, right? It’s totally possible and has been tried–it just usually doesn’t end well when they do. And LLMs do learn new things; they’re just called new models, because it takes time and resources to retrain LLMs with new information in mind. It’s up to the human guiding the AI to steer it towards something that isn’t copyright infringement. AIs don’t just generate things on their own without being prompted to by a human.

                You’re asking for a general-intelligence AI, which would most likely be composed of different specialized AIs working together, similar to our brains having specific regions dedicated to specific tasks. That just doesn’t exist yet, but one of its parts now does.

                Also, you say “right” and “probable” are without difference, yet once again you bring something into the conversation which can only be “right”: code. You cannot create code that is incorrect, or it will not work. Text and creative works cannot be wrong; they can only be judged by opinions, not by rule books that say “it works” or “it doesn’t”.

                The last line is just a bit strange honestly. The biggest users of AI are creative minds, and it’s why it’s important that AI models remain open source so all creative minds can use them.

                • Eccitaze@yiffit.net · 1 year ago

                  You realize LLMs are deliberately designed not to self-improve, right? It’s totally possible and has been tried–it just usually doesn’t end well when they do.

                  Tay is yet another example of AI lacking comprehension and intelligence; it produced racist and antisemitic content because it had no comprehension of ethics or morality, and so it just responded to the input given to it. It’s a display of “intelligence” on the same level as a slime mold seeking out the biggest nearby source of food–the input Tay received was largely racist/antisemitic, so its output became racist/antisemitic.

                  And LLMs do learn new things; they’re just called new models, because it takes time and resources to retrain LLMs with new information in mind. It’s up to the human guiding the AI to steer it towards something that isn’t copyright infringement.

                  And the way that humans do that is by not using copyrighted material for the training dataset. Using copyrighted material to produce an AI model infringes on the rights of the people who created the material, the vast majority of whom are small-time authors, artists, and open-source projects composed of individuals contributing their time and effort. Full stop.

                  Also, you say “right” and “probable” are without difference, yet once again you bring something into the conversation which can only be “right”: code. You cannot create code that is incorrect, or it will not work. Text and creative works cannot be wrong; they can only be judged by opinions, not by rule books that say “it works” or “it doesn’t”.

                  Then why does ChatGPT invent Powershell cmdlets out of whole cloth–cmdlets that don’t exist, yet that supposedly accomplish the exact task the prompter asked it to do?

                  The last line is just a bit strange honestly. The biggest users of AI are creative minds, and it’s why it’s important that AI models remain open source so all creative minds can use them.

                  The biggest users of AI are techbros who think that spending half an hour crafting a prompt to get Stable Diffusion to spit out the right blend of artists’ labor is anywhere near equivalent to the literal collective millions of man-hours spent by artists honing their skill in order to produce the content that AI companies took without consent or attribution and ran through a woodchipper. Oh, and corporations trying to use AI to replace artists, writers, call center employees, tech support agents…

                  Frankly, I’m absolutely flabbergasted that the popular sentiment on Lemmy seems to be so heavily in favor of defending large corporations taking data produced en masse by individuals without even so much as the most cursory of attribution (to say nothing of consent or compensation) and using it for the companies’ personal profit. It’s no different morally or ethically than Meta hoovering all of our personal data and reselling it to advertisers.

                  • ClamDrinker@lemmy.world · 1 year ago

                    You’re shifting the goalposts. You wanted an AI that can learn while it’s being used, and now you’re unhappy that one existed that did so in a primitive form. If you want a general artificial intelligence that also understands the words it says, we are still decades off. For now it can only work off patterns, for which the training data needs to be curated. And as explained previously, it’s not infringing on copyright to train on publicized works. You are simply denying that fact because you don’t want it to be true. And that’s why your sentiment isn’t shared outside of some anti-AI circle you’re part of.

                    The biggest users of AI are techbros who think that spending half an hour crafting a prompt to get Stable Diffusion to spit out the right blend of artists’ labor is anywhere near equivalent to the literal collective millions of man-hours spent by artists honing their skill in order to produce the content that AI companies took without consent or attribution and ran through a woodchipper. Oh, and corporations trying to use AI to replace artists, writers, call center employees, tech support agents…

                    So because you don’t know any creative people who use the technology ethically, they don’t exist? Good to hear you’re sticking up for the little guy who isn’t making headlines or being provocative. I don’t necessarily see these as ethical uses either, but it would be incredibly disingenuous to insinuate these are the only and primary ways to use AI–they are not, and your ignorance is showing if you actually believe so.

                    Frankly, I’m absolutely flabbergasted that the popular sentiment on Lemmy seems to be so heavily in favor of defending large corporations taking data produced en masse by individuals without even so much as the most cursory of attribution (to say nothing of consent or compensation) and using it for the companies’ personal profit. It’s no different morally or ethically than Meta hoovering all of our personal data and reselling it to advertisers.

                    I’m sorry, but you realize that this doesn’t make any sense, right? Large corporations are the ones who have enough information and/or money at their disposal to train their own AIs without relying on publicized works. Should any kind of blockade be created to stop people training AI models on public work, you would effectively be taking AI away from the masses in the form of open-source models, not from those corporations. So if anything, it’s you who is arguing for large corporations to have a monopoly on AI technology as it currently stands.

                    Don’t think I actually like companies like OpenAI or Meta; it’s why I’ve been arguing about AI models in general, not their specific usage of the technology (as that is a whole different can of worms).

      • jecxjo · 1 year ago

        The fact that OpenAI stole content from everybody in order to make its model doesn’t make it less infringing.

        Totally in agreement with you here. They did something wrong and should have to deal with that.

        But my question is more about…

        The problem with AI as it currently stands is that it has no actual comprehension of the prompt or ability to make leaps of logic, nor does it have the ability to extend and build upon existing work to legitimately transform it, except by using other works already fed into its model

        Is comprehension necessary for copyright infringement? Is it really about a creator being able to be logical or to extend concepts?

        I think we have a definition problem with exactly what the issue is. This may be a little too philosophical, but what part of you isn’t processing your historical experiences and generating derivative works? When I say “dog”, the thing that pops into your head is an amalgamation of your past experiences and visuals of dogs. Is the only difference between you and a computer the fact that you had experiences with non-created works, while the AI is explicitly fed created content?

        AI could be created with a bit of randomness added in to make what it generates “creative” instead of derivative, but I’m wondering what level of pure noise needs to be added for the output to be considered created by the AI. Can any of us truly create something that isn’t in some part derivative?
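
        For what it’s worth, current models already expose that noise level as a single tunable knob, usually called temperature. A minimal sketch of the idea (the probabilities are made up for illustration):

        ```python
        def apply_temperature(probs: dict, t: float) -> dict:
            """Rescale a next-word distribution. Low t is near-deterministic
            (pure 'derivative' of the most common pattern); high t approaches
            pure noise. The 'creativity' dial is literally one scalar."""
            scaled = {w: p ** (1.0 / t) for w, p in probs.items()}
            total = sum(scaled.values())
            return {w: v / total for w, v in scaled.items()}

        probs = {"dog": 0.7, "wolf": 0.2, "dragon": 0.1}
        print(apply_temperature(probs, 0.2))  # sharpens heavily toward "dog"
        print(apply_temperature(probs, 5.0))  # flattens toward uniform noise
        ```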

        There’s little fundamental difference between what ChatGPT does and what procedurally generated games like most roguelikes do

        Agreed. I think at this point we are in a strange place, because most people think ChatGPT is a far bigger leap in technology than it truly is. Its biggest achievement was being able to process synthesized data fast enough to feel conversational.

        What worries me is that we will set laws and legal precedent based on a fundamental misunderstanding of what the technology does. I fear that even had all the sample data been acquired legally, people would still make the same argument, thinking their creations exist inside the AI in some full context, when really everything is synthesized down to just what is necessary to answer the question posed: “what is the statistically most likely next word of this sentence?”

        • Eccitaze@yiffit.net · 1 year ago

          Is comprehension necessary for copyright infringement? Is it really about a creator being able to be logical or to extend concepts?

          I think we have a definition problem with exactly what the issue is. This may be a little too philosophical, but what part of you isn’t processing your historical experiences and generating derivative works? When I say “dog”, the thing that pops into your head is an amalgamation of your past experiences and visuals of dogs. Is the only difference between you and a computer the fact that you had experiences with non-created works, while the AI is explicitly fed created content?

          That’s part of it, yes, but nowhere near the whole issue.

          I think someone else summarized my issue with AI elsewhere in this thread–AI as it currently stands is fundamentally plagiaristic, because it cannot be anything more than the average of its inputs, and cannot be greater than the sum of its inputs. If you ask ChatGPT to summarize the plot of The Matrix and write a brief analysis of the themes and its opinions, ChatGPT doesn’t watch the movie, do its own analysis, and give you its own summary; instead, it will pull up the parts of its training dataset that relate to “The Matrix,” “movie summaries,” and “movie analysis”–likely an article written by Roger Ebert, maybe some scholarly articles, maybe some Metacritic reviews–and spit out a response that combines those parts into something that sounds relatively coherent.

          Another issue, in my opinion, is that ChatGPT can’t take general concepts and extend them further. To go back to the movie summary example, if you asked a regular layperson to analyze the themes in The Matrix, they would likely focus on the cool gun battles and neat special effects. If you had that same layperson attend a four-year college and receive a bachelor’s in media studies, then asked them to do the exact same analysis of The Matrix, their answer would be drastically different, even if their entire degree did not discuss The Matrix even once. This is because that layperson is (or at least should be) capable of taking generalized concepts and applying them to specific scenarios–in other words, a layperson can take the media analysis concepts they learned while earning that four-year degree and apply them to a specific thing, even if those concepts weren’t explicitly applied to that thing. AI, as it currently stands, is incapable of this. As another example, let’s say a brand-new programming language came out tomorrow that was entirely unrelated to any currently existing languages. AI would be nigh-useless at analyzing and helping produce code for that language–even if it were dead simple to use and understand–until enough humans published code samples that could be fed into its training model.

          • jecxjo · 1 year ago

            Hmm that is an interesting take.

            The movie summary question is interesting. For most people, I doubt they have asked ChatGPT for its own personal views on the subject matter. Asking for a movie plot summary doesn’t inherently require the one giving it to have experienced the movie. If it did, then pretty much all papers written in a history class would fall into this category: no high schooler today went to war, but they can write about it because they are synthesizing others’ writings about the topic. Granted, we know this to be the case, and students are required to cite their sources even when not directly quoting them… would that resolve the first problem?

            If we specifically asked ChatGPT “Can you give me your personal critique of the movie The Matrix?” and it returned something along the lines of “Well, I cannot view movies and only generate responses based on the writings of others who have seen it,” would that make the usage more clear? If it’s required for someone to have the ability to produce their own critical analysis, there are a handful of kids from my high school who would fail at that task too, and did so regularly.

            I like your college example, as it gets closer to a definition, but I think we need to find a very explicit way of describing what is happening. I agree current AI can’t do any of this, so we are very much talking about future tech.

            With the idea of extending material, do we have a good enough understanding of how humans do it? I think it’s interesting when we look at computer neural networks. One of the first ones we build in a programming class is an AI that can read single-digit, handwritten numbers. What eventually happens is the system generates a huge, unreadable equation that converts bits of an image into a statistically likely answer. When you dissect it, you’d think, “Oh, to see the number 9 the equation must look for a round top and a straight part on the right side below it.” And that assumption would be wrong. Instead we find it’s dozens of specific areas of the image that you and I wouldn’t necessarily associate with a “9”.
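
            That dissection is easy to reproduce. Here’s a sketch assuming scikit-learn is installed, substituting a plain logistic regression for the class-exercise neural net–the takeaway is the same: the most influential pixels are scattered spots, not the “round top” a human would name.

            ```python
            import numpy as np
            from sklearn.datasets import load_digits
            from sklearn.linear_model import LogisticRegression

            # Train a linear classifier on scikit-learn's 8x8 handwritten digits.
            digits = load_digits()
            clf = LogisticRegression(max_iter=5000).fit(digits.data, digits.target)

            # Which pixels matter most for recognizing a "9"? One weight per pixel.
            weights_for_9 = np.abs(clf.coef_[9])
            rows, cols = np.unravel_index(np.argsort(weights_for_9)[::-1][:5], (8, 8))
            print("most influential pixels for class 9:",
                  list(zip(rows.tolist(), cols.tolist())))
            ```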

            But then if we start to think about our own brains, do we actually process reading the way we think we do? Maybe for individual characters. But we know that when we read words we focus specifically on the first and last characters, the length of the word, and any variation in the height of the text. We can literally scramble up the letters in the middle and still read the text.

            The reason I bring this up is that we often focus on how humans can transform data using past history, but we often fail to explain how this works. When asked about a more vague concept, ChatGPT does pull from others’ works, but one thing it also does is create a statistical analysis of human speech: it literally figures out the most likely next word in the given sentence. The way this calculation occurs is directly related to the material provided, the order in which it was provided, the weights programmed into it to make decisions, etc. I’d ask how this is fundamentally different from what humans do.

            I’m a big fan of students learning a huge portion of the same literature in high school. It creates a common dialog we can all use to understand concepts. I, in my 40s, have often referenced a character, event, statement, or theme from classic literature and have noticed that often only those older than me get it. In less than a few words I’ve conveyed a huge amount of information, which only works when the other side of the conversation gets the reference. I’m wondering: if at some point AI were able to do this type of analysis, would it be considered transformative?