• Saledovil@sh.itjust.works · 2 days ago

    What LLMs do, at the end of the day, is statistics. If you want a more precise model, you need to make it larger. Basically, exponentially scaling marginal costs meet exponentially decaying marginal utility.
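
    Roughly, the tradeoff looks like this (a toy sketch in Python; the cost and utility curves are made up for illustration, not fitted scaling laws):

        # Toy illustration: each doubling of model size costs much more to
        # train but buys a shrinking improvement. Both curves are invented
        # for illustration; they are not fitted scaling laws.
        for doublings in range(1, 8):
            params = 1e9 * 2 ** doublings    # hypothetical parameter count
            cost = params ** 1.5 / 1e12      # cost grows superlinearly with size
            gain = 0.10 * 0.5 ** doublings   # each doubling halves the marginal gain
            print(f"{params:.1e} params: marginal cost ~{cost:.2e}, "
                  f"marginal gain ~{gain:.4f}")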

      • self@awful.systems · 2 days ago

        guess again

        what the locals are probably taking issue with is:

        If you want a more precise model, you need to make it larger.

        this shit doesn’t get more precise for its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam; any claims of increased precision (like those that openai makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper — unless you’re the kind of promptfondler who needs LLMs to be good and workable just because it’s technology and because you’re all-in on the grift

        • Saledovil@sh.itjust.works · 2 days ago

          Well, then let me clear it up. The statistics become more precise. That is, for a given prefix A and token x, the difference between the model's calculated probability that x follows A and the actual probability P(x|A) becomes smaller. Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer. And if you’re working on a halfway ambitious project, then you’re virtually guaranteed to encounter a novel problem.
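
          A toy sketch of what "the estimate of P(x|A) converges" means (the "true" distribution here is invented, and sample count stands in for model capacity; this illustrates the statistical claim, it measures nothing about any actual LLM):

              import random
              from collections import Counter

              # Hypothetical true next-token distribution P(x|A) for one fixed prefix A.
              true_p = {"cat": 0.5, "dog": 0.3, "fish": 0.2}
              tokens, weights = zip(*true_p.items())

              random.seed(0)
              for n in (100, 10_000, 1_000_000):  # stand-in for growing capacity/data
                  counts = Counter(random.choices(tokens, weights=weights, k=n))
                  # Worst-case gap between estimated and true probability over all tokens
                  gap = max(abs(counts[t] / n - true_p[t]) for t in tokens)
                  print(f"n={n:>9}: max |P_hat(x|A) - P(x|A)| = {gap:.4f}")

          The gap shrinks with n, which is the sense in which the statistics get more precise; none of this helps with inputs the training distribution never covered.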

          • self@awful.systems · 2 days ago

            Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer.

            it doesn’t produce any meaningful answers for non-novel problems either