Moore’s law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years.
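A doubling every two years compounds exponentially, i.e. N(t) = N₀ · 2^((t − t₀)/2). As a quick sketch (assuming a 1971 baseline of roughly 2,300 transistors for the Intel 4004, a commonly cited figure):

```python
# Moore's law as compound doubling: N(t) = N0 * 2 ** ((t - t0) / 2)
# Baseline assumption: Intel 4004 (1971), ~2,300 transistors.

def transistors(year, base_year=1971, base_count=2300):
    """Projected transistor count if the count doubles every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

# Fifty years of doubling: 2300 * 2**25, i.e. tens of billions --
# the right order of magnitude for 2021-era flagship chips.
print(f"{transistors(2021):.3g}")
```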

Is there anything similar for the sophistication of AI, or AGI in particular?

  • TacoEvent@lemmy.zip · 11 months ago

    Model sizes have grown far beyond practical necessity, so Moore’s Law doesn’t apply there. That is, models have become so huge that they’re already performing at 99% of the capability they ever will.

    Context size, however, has a lot farther to go. You can think of context size as “working memory,” while model size is more akin to “long-term memory.” The larger the context size, the more a model can take in beyond the scope of its original training in one go.
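    As a loose illustration of that “working memory” limit: anything past the context window simply never reaches the model. (A toy sketch; real systems tokenize into subword units, not whitespace-split words.)

```python
# Toy illustration of a fixed context window: input beyond the limit is
# dropped before the model ever sees it. Whitespace splitting is a
# stand-in for a real subword tokenizer.

def fit_to_context(text: str, context_size: int) -> list[str]:
    tokens = text.split()          # stand-in for a real tokenizer
    return tokens[-context_size:]  # keep only the most recent tokens

history = "turn1 turn2 turn3 turn4 turn5 turn6"
# With a window of 4, the oldest turns fall out of "working memory".
print(fit_to_context(history, 4))
```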

    • AggressivelyPassive@feddit.de · 11 months ago

      That is a pretty wild assumption. There’s absolutely no reason why a larger model wouldn’t produce drastically better results. Maybe not next month, maybe not with this architecture, but it’s almost certain that they will grow.

      This has hard “256 KB is enough” vibes.

        • AggressivelyPassive@feddit.de · 11 months ago

          Actual understanding of the prompts, for example? LLMs are just text generators; they have no concept of what’s behind the words.

          Thing is, you seem to be completely uncreative, or rather you deny the designers and developers any creativity, if you just assume “now we’re done.” Would you have thought the same about Siri ten years ago? “Well, it understands that I’m planning a meeting; AI is done.”

          • TacoEvent@lemmy.zip · 11 months ago

            I see your point. Rereading the OP, it looks like I jumped to a conclusion about LLMs and not AI in general.

            My takeaway still stands for LLMs. These models have gotten huge, with little net gain from each increase. But a Moore’s Law equivalent should apply to context sizes. Those have a long way to go.