• novibe@lemmy.ml

    I would agree with you, if that were at all how these AIs generate images.

    They don’t “copy and paste” anything. The images they make are novel. The AI is only trained on other images; once training ends, it no longer has access to them to copy from.

    The way the AI generates new images is really similar to how humans do it. It goes over its references and literally creates a brand new image.

    Now, just like with a person, you can ask it to make an exact copy of something that exists. And it can do that like a human would, through “technique” and references. But it’s not copying directly, it’s making a new image that is like the one you asked it to copy.

    I really wish people would realise this. Idk why the idea that image-generating AI is “copying” from a database of images is so prevalent…

    The database of images is literally only used during training. Once the model is trained, the database doesn’t exist to it anymore.

    It’s the difference between an artist who has studied their whole life (seeing paintings, studying references, going to classes) and then creates new images from their own mind, versus one who traces images from Google.

    AI currently does the former, not the latter.

    • FunkyStuff [he/him]@hexbear.net

      Look, I know how deep learning works. I know it doesn’t literally copy the images from the training dataset. But the entire point of supervised learning is to burn information about the training data into the weights and biases of a neural network in such a way that it generalizes over some domain and can correlate the desired inputs with the desired outputs. Just because you’re using stochastic methods to indirectly reproduce the training data (of course, in a way that’s invisible to humans because of the nature of deep neural networks) doesn’t suddenly erase the fact that the only substance an AI has to draw from is the training data itself.
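
      For concreteness, here’s a minimal NumPy sketch (a toy linear model, nothing like a real generative network) of what I mean by the training data getting burned into the weights: after training, the weights alone reproduce the targets, even though the dataset is never consulted again.

      ```python
      # Toy illustration: gradient descent bakes the training pairs (X, Y) into
      # the weight matrix W, which can then reproduce Y without ever touching
      # the dataset again.
      import numpy as np

      X = np.array([[1.0, 2.0],
                    [3.0, 1.0]])        # "training inputs"
      Y = np.array([[1.0, 0.0],
                    [0.0, 1.0]])        # "training targets"
      W = np.zeros((2, 2))              # the weights start out knowing nothing

      for _ in range(500):              # supervised training loop
          grad = X.T @ (X @ W - Y) / len(X)
          W -= 0.1 * grad               # information flows from (X, Y) into W

      print(X @ W)                      # ~Y: the targets are recoverable from W alone
      ```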

      I think it’s really oversimplifying how humans make art to say that it’s just going over references and creating something new from it. As humans, we are influenced by the work we’ve seen, but because of our unique experience we inject something completely new into any art we make, no matter how derivative. An AI is incapable of doing the same (except for some random noise), because literally all it’s capable of doing is composing together information that has been baked into its weights and biases. It’s not like when you ask a generative AI to make something for you, it will decide to get funky with it. All it’s doing is drawing from the information that has been baked into it.

      Just like ChatGPT doesn’t actually understand what it’s saying, because it only predicts statistical relationships between words one word at a time and has no model of meaning, only of how words go together in the training data, AI that generates images doesn’t actually know what it’s making or why. That is totally different from humans, who make a piece of art step by step and do so very deliberately.

      Edit: I recommend you watch this video by an astrophysicist who works with machine learning regularly, she makes my point a lot better than I can. https://youtu.be/EUrOxh_0leE

      • novibe@lemmy.ml

        How would you classify those “experiences” people have that influence their art or work as anything other than data? Honest question.

        And very interesting video. I still don’t 100% align with this perspective, because I feel it tries to give the brain something extra beyond materiality. While I’m no material reductionist, I don’t think our human creativity is “special” or “metaphysical”. It’s our brain, and it’s physical. It can be physically replicated.

        I think AI will have a “soul” or consciousness because I think everything already has it. It’s just our human biology that allows this consciousness to be self-experiential and experience other things, such as thoughts and ideas and feelings. A rock doesn’t have those, but it has a “soul” or consciousness. But I feel I digressed a lot lol

        Also, to make it clear, I don’t think AI exists yet. I think the models and developments we have will be part of it, though.

        • FunkyStuff [he/him]@hexbear.net

          I don’t disagree that experiences are data. The major distinction I’m making is that the human creative process uses more than just data: we have intention and aesthetics, we make mistakes, change our minds, iterate, etc. For a generative AI, the “creative process” is tokenizing a string, running the tokens through an attention matrix, plugging that into a thousand different matrices that then go into a post-processing layer and spit out an image. At no point does it look at what it’s doing and evaluate how it’s gonna fit into the final picture.
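
          To make that concrete, here’s a stripped-down NumPy sketch of the attention step I’m describing (toy dimensions and random weights standing in for trained ones; a real model repeats this across many layers and heads before anything like an image or sentence comes out):

          ```python
          # One attention step over a "tokenized" string, with toy sizes.
          import numpy as np

          rng = np.random.default_rng(0)
          tokens = np.array([3, 14, 7])                 # three token ids
          E = rng.normal(size=(50, 8))                  # embedding table (vocab 50, dim 8)
          x = E[tokens]                                 # (3, 8) token embeddings

          Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
          Q, K, V = x @ Wq, x @ Wk, x @ Wv              # queries, keys, values

          scores = Q @ K.T / np.sqrt(8)                 # how strongly each token attends to the others
          weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
          out = weights @ V                             # (3, 8), handed on to the next stack of matrices
          print(out.shape)
          ```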

          As for the rest of your reasoning, I neither agree nor disagree, I think we just don’t have the same definition of consciousness.

          • novibe@lemmy.ml

            I feel your description of what a generative AI does is pretty reductive. The middle part of “plugging the ‘token’ through thousands of different matrices” is not at all well understood. We don’t know how the AI generates the images or text. It can’t explain itself.

            And we have ample research showing these models have internal models of the world and can have “thoughts”.

            In any case, what would you say consciousness is? This is a more interesting question to me tbh.

            • FunkyStuff [he/him]@hexbear.net

              Well, I don’t see the problem: an AI can’t explain itself, but it’s nothing more than matrix multiplication with a nonlinearity. Maybe you use a Fourier transform and a kernel instead of scalar weights for a convolutional neural network, maybe it has state instead of being purely feed-forward, but at the core of it all you’re doing is multiplying matrices and applying a nonlinearity. I don’t know what you mean when you say we don’t know how it generates images and text. It’s literally just doing the thing it was programmed to do?

              What research? I’d like to see some evidence that these models “think,” given that the way every LLM I know of works is by generating a single word at a time. When you ask a GPT how to bake bread and the first word it outputs is “Surely!”, it has no clue what explanation it’ll start giving you. In fact, whether or not it chooses the exact word “Surely!” as the start of the response has a cascading effect on the rest of the output. Then, as I said earlier, LLMs don’t see anything more than the statistical correlations between words. No LLM knows what gravity is, but when you ask it why things fall down it has enough physics textbooks in its training data that it can parrot the answer from there.
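
              To illustrate that one-word-at-a-time point, here’s a toy next-word generator (just bigram counts over a single sentence, nowhere near a real GPT, but the same in spirit: each word is picked purely from observed co-occurrence):

              ```python
              # "Train" by counting which word follows which, then generate one word at a time.
              import random
              from collections import defaultdict

              corpus = "things fall down because gravity pulls things down toward the ground".split()
              counts = defaultdict(lambda: defaultdict(int))
              for prev, nxt in zip(corpus, corpus[1:]):
                  counts[prev][nxt] += 1                    # statistical correlations between words

              def next_word(prev):
                  options = counts.get(prev)
                  if not options:                           # dead end: no observed successor
                      return random.choice(corpus)
                  words, freqs = zip(*options.items())
                  return random.choices(words, weights=freqs)[0]

              word, output = "things", ["things"]
              for _ in range(6):                            # the model has no plan for the sentence;
                  word = next_word(word)                    # it only ever picks the next word
                  output.append(word)
              print(" ".join(output))
              ```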

              One of the ways I really broke down the idea that GPTs have any model of thought is by playing this game. If AI had any actual model of meaning, it would understand security and it would understand not to just tell the player the password. Instead, it will literally blurt it out if you do as much as ask it for words that rhyme. You don’t even need to mention “password”: the way GPT works means that if it detects a lot of weight on a certain word in its prompt (and the prompt naturally would’ve emphasized the password), it’s almost guaranteed to bring it up again. I know it’s not exactly hard proof, but it is fun.

              As for your last question, you’re out of luck because I’m actually just a Catholic lol. There’s not a lot more to say than that I believe there is a metaphysical nature to human experience connecting us to a soul. But that’s a completely unscientific belief to be honest, and it’s not a point I can argue because it’s not based on evidence.

              • novibe@lemmy.ml

                It’s not true to say that LLMs just do as they are programmed. That’s not how machine and deep learning work. The programming goes into making the model able to learn and parse data. The results are filtered and weighted, but they are not the result of the programming; they are the result of the training.

                Y’know, like how our brain was “programmed” by natural selection and the laws of biology to learn and use certain tools (eyes, touch, thoughts, etc.), and with “training data” (learning, or lived experience) it outputs certain results, which are then filtered and weighted (by parents, school, society)…

                I think LLMs and diffusion models will be a part of the AI mind, generating thoughts like our mind does.

                Regarding the last part, do you think the brain or the mind creates the soul, or is a part of it?

                I think discussing consciousness is very scientific. To think there’s no point in doing so is a kind of materialist reductionism, which is itself unscientific. Unfortunately many people, even scientists, are more scientistic than actually scientific.

                • FunkyStuff [he/him]@hexbear.net

                  I don’t know how much you know about computer science and coding, but if you know how to program in Python and have some familiarity with NumPy, you can make your own feed-forward neural network from scratch in an afternoon. You can make an AI that plays tic-tac-toe and train it against itself adversarially. It’s a fun project. What I mean by this is: yes, LLMs and generative models do as they are programmed. They are no different from a spreadsheet program. The thing that makes them special is the weights and biases that were baked into them by going through countless terabytes of training data, as you correctly state. But it’s not like AIs have some secret, arcane mathematical operation that no computer scientist understands. What we don’t understand about them is why they activate the way they do; we don’t really know why any given part of the network gets activated, which makes sense because of the stochastic nature of deep learning: it’s all just convergence on a “pretty good” result after getting put through millions of random examples.
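
                  For anyone who wants to try it, here’s a minimal sketch of that kind of from-scratch network (NumPy only; trained on XOR rather than tic-tac-toe just to keep it short). It really is just matrix multiplies, a nonlinearity, and weight updates:

                  ```python
                  # A tiny feed-forward network, hand-rolled in NumPy and trained with
                  # plain gradient descent. No special operations: matmul, nonlinearity, repeat.
                  import numpy as np

                  rng = np.random.default_rng(0)
                  X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # inputs
                  Y = np.array([[0.], [1.], [1.], [0.]])                    # XOR targets

                  W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)            # hidden layer weights
                  W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)            # output layer weights

                  def sigmoid(z):
                      return 1.0 / (1.0 + np.exp(-z))

                  lr = 0.5
                  for step in range(20000):
                      # forward pass: matrix multiply -> nonlinearity -> matrix multiply -> nonlinearity
                      h = np.tanh(X @ W1 + b1)
                      p = sigmoid(h @ W2 + b2)

                      # backward pass: gradients of the cross-entropy loss w.r.t. every weight
                      dp = (p - Y) / len(X)
                      dW2 = h.T @ dp;  db2 = dp.sum(axis=0)
                      dh = dp @ W2.T * (1 - h ** 2)
                      dW1 = X.T @ dh;  db1 = dh.sum(axis=0)

                      # gradient descent: this is where the training data gets baked into the weights
                      W1 -= lr * dW1;  b1 -= lr * db1
                      W2 -= lr * dW2;  b2 -= lr * db2

                  print(np.round(p, 2).ravel())   # should end up close to [0, 1, 1, 0]
                  ```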

                  I think the mind and consciousness are separate from the soul, which precedes their thoughts. But, again, I have absolutely no evidence for that. It’s just dogma.