• Pennomi@lemmy.world
    2 months ago

    What we haven’t hit yet is the point of diminishing returns for model efficiency. Small, locally run models are still improving rapidly, which means we’ll see gains for everyday users instead of just for corporations with huge GPU clusters.

    That in turn lets more scientists with smaller budgets experiment on LLMs, increasing the chances of the next major innovation.

    • CeeBee@lemmy.world
      2 months ago

      Exactly. We’re still in the very early days with this stuff.

      The next few years will be wild.