• Ascrod · 1 month ago

    Schools teaching “how to use AI” are missing the point: these things aren’t magic machines, they’re guessing games.

    LLMs have no ability to reason or understand anything. They just regurgitate words using a probabilistic model. They are little more than a fancy autocomplete, and autocomplete isn’t always right (see also DYAC).

    https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and
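
    To make the "fancy autocomplete" point concrete, here's a toy sketch (a hypothetical bigram model in Python, nowhere near a real transformer): each next word is sampled purely from how often it followed the previous word in a tiny corpus, and nothing in the loop checks whether the output is true.

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy "autocomplete": count which words follow which in a tiny corpus,
    # then generate text by sampling the next word from those counts.
    # Real LLMs use transformer networks over subword tokens, but the core
    # loop is the same: predict a distribution, sample a token, repeat.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        """Sample a next word in proportion to how often it followed `prev`."""
        candidates = follows[prev]
        if not candidates:
            return None  # dead end: word only ever appeared at the end
        words, counts = zip(*candidates.items())
        return random.choices(words, weights=counts, k=1)[0]

    word = "the"
    out = [word]
    for _ in range(8):
        word = next_word(word)
        if word is None:
            break
        out.append(word)

    # Prints something statistically plausible, with no notion of facts
    # anywhere in the process.
    print(" ".join(out))
    ```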

    • m_fOPM · 1 month ago

      Heh, you can see my opinion on Gary Marcus in another comment of mine: https://midwest.social/post/17934593/12838629

      Even as statistical parrots and all that, they're fairly useful for learning; you just have to know their limitations. They're very useful for questions that are hard to search for but easy to verify, like "What is this funny math symbol?", especially when the symbol has different meanings in different contexts and you can explain the context to the LLM.

      • Ascrod · 1 month ago

        They are not useful for learning if the information they give is wrong.

        Neural networks are a tool, and they have their uses, but something that generates factually wrong garbage a significant percentage of the time, while burning our world to do it, is not the tool our students need.