• Optional@lemmy.world · 61 points · 1 month ago

    Did someone not know this like, pretty much from day one?

    Not the idiot executives who blew their budgets on AI and made up for it with mass layoffs - the people actually interested in it. Was it not clear that there was no “reasoning” going on?

    • khalid_salad@awful.systems · 36 points · edited · 1 month ago

      Well, two responses I have seen to the claim that LLMs are not reasoning are:

      1. we are all just stochastic parrots lmao
      2. maybe intelligence is an emergent ability that will show up eventually (disregard the inability to falsify this and the categorical nonsense that is our definition of “emergent”).

      So I think this research is useful as a response to these, although I think “fuck off, promptfondler” is pretty good too.

        • LainTrain@lemmy.dbzer0.com · 5 up / 18 down · 1 month ago

        Well, are we not stochastic parrots then? Isn’t that also a philosophical, rhetorical, and equally unfalsifiable question?

            • FermiEstimate@lemmy.dbzer0.com · 23 points · 1 month ago

            No, there’s an actual paper where that term originated that goes into great detail explaining what it means and what it applies to. It answers those questions and addresses potential objections people might respond with.

          There’s no need for–and, frankly, nothing interesting about–“but, what is truth, really?” vibes-based takes on the term.

            • V0ldek@awful.systems · 12 points · edited · 1 month ago

          Only in the philosophical sense of all of physics being a giant stochastic system.

          But that’s equally useful as saying that we’re Turing machines? Yes, if you draw a broad category of “all things that compute in our universe” then you can make a reasonable (but disputable!) argument that both me and a Python interpreter are in the same category of things. That doesn’t mean that a Python interpreter is smart/sentient/will solve climate change/whatever Sammy Boi wants to claim this week.

          Or, to use a different analogy, it’s like saying “we’re all just cosmic energy, bro”. Yes we are, pass the joint already and stop trying to raise billions of dollars for your energy woodchipper.

    • froztbyte@awful.systems · 28 points · 1 month ago

      there are a lot of people (especially here, but not only here) who have had the insight to see that this is the case, but there have also been a lot of boosters and promptfondlers (i.e. people with a vested interest) putting out claims that their precious word-vomit machines are actually thinking

      so while this may confirm a known doubt, rigorous scientific testing (and disproving) of the claims is nonetheless a good thing

        • Soyweiser@awful.systems · 12 points · 1 month ago

        No, they do not, I’m afraid. Hell, I didn’t even know that ELIZA caused people to think it could reason (which worried its creator) until a few years ago.

    • astrsk@fedia.io · 13 points · 1 month ago

      Isn’t OpenAI saying that o1 has reasoning as a specific selling point?

    • conciselyverbose@sh.itjust.works · 11 points · edited · 1 month ago

      Yes.

      But the lies around them are so excessive that concrete evidence makes it a lot easier for executives of a publicly traded company to make reasonable decisions.

    • A_Very_Big_Fan@lemmy.world · 1 up / 5 down · 1 month ago

      Seriously, I’ve seen 100x more headlines like this than people claiming LLMs can reason. Either they don’t understand, or they think we don’t understand, what “artificial” means.