• frezik · 9 months ago

    I find that a lot of the reasons people put forward for saying “LLMs are not intelligent” are wishy-washy, vague, untestable nonsense. It’s rarely something where we could put a human and ChatGPT together in a double-blind test and have the results clearly show that one meets the definition and the other does not. Now, I don’t think we’ve actually achieved AGI, but more for general Occam’s Razor reasons than anything concrete: it seems unlikely that we’ve achieved something so remarkable while understanding it so little.

    I recently saw this video lecture by a neuroscientist, Professor Anil Seth:

    https://royalsociety.org/science-events-and-lectures/2024/03/faraday-prize-lecture/

    He argues that our language is leading us astray. Intelligence and consciousness are not the same thing, but the way we talk about AI tends to conflate the two. He gives examples of how our consciousness leads us astray, such as seeing faces in clouds; it seems to really like pulling faces out of false patterns. Hallucinations would be the times when the error-correcting mechanisms of our consciousness go completely wrong: you don’t only see faces in random objects, but start seeing unicorns and rainbows on everything.

    So when you say that people were convinced that ELIZA was an actual psychologist who understood their problems, that might be another example of our own consciousness giving the wrong impression.

    • vcmj@programming.dev · 9 months ago

      Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense… that’s a whole other kettle of fish). I’d consider all “thinking things” to be machines, but if a machine always responds to the same input in the same way, then it is non-sentient, whereas if receiving an input incurs an irreversible change that can affect its future responses, then it has potential for sentience (see the sketch below). LLMs can certainly do continuous learning, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.

      I’m still working on this definition; again, this is just a personal viewpoint.
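
      To make that frozen-versus-changing threshold concrete, here is a minimal Python sketch. Both classes are hypothetical illustrations of the idea, not any real library or LLM API: a frozen responder maps the same input to the same output forever, while a changing responder is irreversibly altered by every input it receives.

      ```python
      # Minimal sketch of the frozen-vs-changing distinction.
      # Both classes are hypothetical illustrations, not a real API.

      class FrozenResponder:
          """Analogue of a deployed LLM: parameters are fixed at inference
          time, so the same input always yields the same response."""

          def __init__(self, table: dict[str, str]):
              self.table = table  # set once, never updated afterwards

          def respond(self, prompt: str) -> str:
              return self.table.get(prompt, "?")


      class ChangingResponder:
          """Analogue of a continuously learning system: every input
          leaves an irreversible trace that shapes future responses."""

          def __init__(self) -> None:
              self.seen: list[str] = []

          def respond(self, prompt: str) -> str:
              self.seen.append(prompt)  # irreversible internal change
              # The response depends on the whole history, not just the prompt.
              return f"{prompt} (input #{len(self.seen)})"


      frozen = FrozenResponder({"hello": "hi"})
      print(frozen.respond("hello"))    # hi
      print(frozen.respond("hello"))    # hi -- identical, forever

      changing = ChangingResponder()
      print(changing.respond("hello"))  # hello (input #1)
      print(changing.respond("hello"))  # hello (input #2) -- same input, new output
      ```

      On this definition, only the second class has potential for sentience, because each input permanently changes how it will answer in the future; the first, like a book, is a fixed artifact of whatever produced it.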

        • vcmj@programming.dev · 9 months ago

          I read this question a couple of times, initially assuming bad faith, and even considered ignoring it. The ability to change would be my answer. I don’t know what you actually mean.

            • root_beer · 9 months ago

              Conscience and consciousness are not the same thing

            • vcmj@programming.dev · 8 months ago

              I do think we’re machines, I said so previously, I don’t think there is much more to it than physical attributes, but those attributes let us have this discussion. Remarkable in its own right, I don’t see why it needs to be more, but again, all personal opinion.