• novibe@lemmy.ml
    7 months ago

    It’s not true to say that LLMs just do as they are programmed. That’s not how machine learning and deep learning work. The programming goes into making the model able to learn from and parse data. The results are filtered and weighted, but they are not the result of the programming; they are the result of the training.
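    The programming-versus-training distinction can be sketched in a few lines, assuming nothing beyond NumPy. The `train` function and the toy data below are mine, purely for illustration: the “program” is fixed and tiny, yet the weight it ends up with depends entirely on the data it is trained on.

```python
# Illustrative sketch: the same fixed "program" (a one-line learning rule)
# behaves differently depending only on what data it is trained on.
import numpy as np

def train(xs, ys, steps=1000, lr=0.1):
    """Fit y ≈ w * x by gradient descent; w comes from the data, not the code."""
    w = 0.0
    for _ in range(steps):
        grad = np.mean((w * xs - ys) * xs)  # gradient of half the mean squared error
        w -= lr * grad
    return w

xs = np.array([1.0, 2.0, 3.0])
print(train(xs, 2 * xs))   # learns w ≈ 2
print(train(xs, -5 * xs))  # identical code, different data: learns w ≈ -5
```

    Nothing in `train` encodes the answer 2 or −5; those numbers exist only in the training data, which is the commenter’s point about weights versus programming.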

    Y’know, like our brain was “programmed” by natural selection and the laws of biology to learn and use certain tools (eyes, touch, thought, etc.), and with “training data” (learning, or lived experience) it outputs certain results, which are then filtered and weighted (by parents, school, society)…

    I think LLMs and diffusion models will be a part of the AI mind, generating thoughts like our mind does.

    Regarding the last part, do you think the brain or the mind create or are a part of the soul?

    I think discussing consciousness is very scientific. To think there’s no point in doing so is reductionism to materiality, which is unscientific. Unfortunately many people, even scientists, are more scientistic than actually scientific.

    • FunkyStuff [he/him]@hexbear.net
      7 months ago

      I don’t know how much you know about computer science and coding, but if you know how to program in Python and have some familiarity with NumPy, you can make your own feed-forward neural network from scratch in an afternoon. You can make an AI that plays tic-tac-toe and train it against itself adversarially. It’s a fun project.

      What I mean by this is: yes, they do. LLMs and generative models do as they are programmed. They are no different from a spreadsheet program. The thing that makes them special is the weights and biases that were baked into them by going through countless terabytes of training data, as you correctly state.

      But it’s not like AI have some secret, arcane mathematical operation that no computer scientist understands. What we don’t understand about them is why they activate the way they do; we don’t really know why any given part of the network gets activated. That makes sense given the stochastic nature of deep learning: it’s all just convergence on a “pretty good” result after getting put through millions of random examples.
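      The from-scratch network the comment describes might look something like this minimal sketch, assuming only NumPy. The layer sizes, learning rate, step count, and the XOR toy task are all my illustrative choices, not anything canonical: an ordinary program runs fixed arithmetic, and all the “learning” lives in the weight arrays being nudged by gradient descent.

```python
# A from-scratch feed-forward network in NumPy, trained on XOR by plain
# gradient descent. All hyperparameters here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# XOR training data: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "program" is just the arithmetic below; the behavior comes from
# these weights, which start out random and are shaped by the data.
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.1
for _ in range(10000):
    # Forward pass through one hidden layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: hand-derived gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent step on every weight and bias.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(losses[0], "->", losses[-1])  # loss shrinks as the weights adapt
```

      Every operation above is an ordinary, fully understood computation; what nobody hand-designed is the final values of `W1` and `W2`, which is the sense in which the interesting part is the training rather than the programming.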

      I think the mind and consciousness are separate from the soul, which precedes their thoughts. But, again, I have absolutely no evidence for that. It’s just dogma.