• HeckGazer@programming.dev · 48 points · 7 months ago

    Oh ez, that’s only 17 orders of magnitude!

    If we managed an optimistic pace of doubling every year, that’d only take… 56 years. The last few survivors on desert world can ask it if it was worth it
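    A quick sanity check on the doubling arithmetic (an illustrative sketch; the only input is the 17 orders of magnitude from the comment above):

```python
import math

# Growing 17 orders of magnitude at one doubling per year takes
# log2(10^17) doublings. Illustrative check of the comment above.
orders = 17
years = math.log2(10 ** orders)
print(round(years))  # about 56 years
```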

    • Eiim@lemmy.blahaj.zone · 7 points · 7 months ago

      Rather amusing prediction that, despite the obscene amount of resources already being spent on AI compute, it’s apparently reasonable to expect to spend 1,000,000x that in the “near future”.

  • DumbAceDragon@sh.itjust.works · 39 points · edited · 7 months ago

    What these people don’t realize is you’re never gonna get AGI by just feeding a machine an infinite amount of raw data.

      • David Gerard@awful.systems [M] · 24 points · 7 months ago

        There might actually be nothing bad about the Torment Nexus, and the classic sci-fi novel “Don’t Create The Torment Nexus” was nonsense. We shouldn’t be making policy decisions based off of that.

        wild

      • Soyweiser@awful.systems · 17 points · 7 months ago

        Yes, we know (there are papers about it) that for LLMs, every increase in capabilities requires exponentially more training data. But don’t worry, we’ve only consumed half the world’s data to train LLMs, still a lot of places to go ;).
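        As a toy illustration of what those scaling papers describe (the exponent below is an illustrative assumption, loosely in the range reported in the scaling-law literature, not a measured value):

```python
# Toy power-law data-scaling model: data-limited loss term ~ B / D^beta.
# BETA here is an illustrative assumption, not a measured constant.
BETA = 0.28

def data_multiplier(loss_factor: float, beta: float = BETA) -> float:
    """Factor by which training data D must grow to shrink the
    data-limited loss term by `loss_factor`."""
    return loss_factor ** (1 / beta)

# Halving that loss term costs roughly 12x the data:
print(round(data_multiplier(2.0), 1))  # ~11.9
```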

    • Naz@sh.itjust.works · +1 / −20 · 7 months ago

      Interesting. I recall a phenomenon by which inorganic matter was given a series of criteria and adapted based on changes in said environment, eventually forming data which it then learned from over a period of millions of years.

      It then used that information to build the world wide web in the lifetime of a single organism and cast doubt on others trying to emulate it.

      But I see your point.

        • swlabr@awful.systems · 27 points · 7 months ago

          Not me dawg, I am highly non-linear (pls donate to my gofundme for spinal correction)

          • zbyte64@awful.systems · 14 points · edited · 7 months ago

            My dog does linear algebra every time he pees on a fire hydrant, so that he only pees for the exact amount of time needed. Similarly, when I drain my bathtub, it acts as a linear algebra machine that calculates how long it takes for the water to drain through a small hole.

            Is this a fun way to look at the world that allows us to more readily build computational devices from our environment? Definitely. Is it useful for determining what is intelligence? Not at all.

              • froztbyte@awful.systems · 11 points · 7 months ago

                Where the fuck was the insult? Wild

                You’re the one making incoherent illogical driveby comments, clown

              • Soyweiser@awful.systems · 10 points · edited · 7 months ago

                Yes, and that was a stupid argument, unrelated to the point being made, that because evolution used this raw data to do things, raw data in LLMs will lead to AGI. You just wanted debate points for ‘see, somewhere there is data in the process of things being alive’. Which is dumb gotcha logic that drags all of us down and makes it harder to have normal conversations about things. My reply was an attempt to make you see this in the hope you would do better.

                I didn’t call you stupid, I called the argument stupid, but if the shoe fits.

                E: the argument from the person before you, ‘evolution created us with a lot of data and then we created the internet’, is also silly of course: if you just go ‘well, raw data created evolution’, then no matter how we got AGI (say it is built out of post-quantum computers in 2376), this line of reasoning would say it comes from raw data, so the whole conversation devolves into saying nothing.

                • froztbyte@awful.systems · 10 points · 7 months ago

                  No no see, since everything is information this argument totally holds up. That one would need to categorize and order it for it to be data is such a silly notion, utterly ridiculous and unnecessary! Just throw some information in the pool and stir, it’ll evolve soon enough!

    • Soyweiser@awful.systems · 17 points · 7 months ago

      You don’t understand, after we invent god AGI all our problems are solved. Now step into the computroniuminator, we need your atoms for more compute.

    • someacnt_@lemmy.world · 8 points · 7 months ago

      Yeah, I don’t see why people are so blind on this. Computation is energy-intensive, and we have yet to optimize it for energy. Yet, all the hopes…

      • DivineDev@kbin.run · 12 points · 7 months ago

        We do optimize; it’s just that when you halve the energy cost of a computation, you do twice the computations to iterate faster instead of using half the energy.

  • froztbyte@awful.systems · 25 points · 7 months ago

    that looks like someone used win9x mspaint to make a flag, fucked it up, and then fucked it up even more on the saving throw

  • Wirlocke@lemmy.blahaj.zone · +2 / −12 · 7 months ago

    I feel like the current machine-learning gold rush is amazing from a technical perspective, but a lot of technophiles miss the real potential.

    What we have are the first rickety engines of this technology. We’re not building futurist masterpieces with the equivalent of a steam engine.

    We have great new tools that we can use to further understand and optimize what we built, instead of just throwing more and more compute on top of our first design.

    The human brain runs on about as much power as a light bulb, so we know our current systems are massively inefficient. And recent research like Mamba shows that there are still improvements to be made.
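    A rough back-of-envelope on that efficiency gap (all wattage figures below are approximate assumptions for illustration, not from this thread):

```python
# Back-of-envelope power comparison; all figures are rough assumptions.
BRAIN_WATTS = 20         # commonly cited estimate for the human brain
GPU_WATTS = 700          # one datacenter-class accelerator at full load
CLUSTER_GPUS = 10_000    # a mid-sized training cluster

cluster_watts = GPU_WATTS * CLUSTER_GPUS
print(f"{cluster_watts / BRAIN_WATTS:,.0f}x")  # 350,000x a brain's draw
```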

    And Anthropic’s mechanistic interpretability work shows there are ways to better understand neural networks and improve performance without relying solely on a “black box”.

    The tech has great potential, but the massive server farms being dedicated to it now are just crypto-style overhype and fear of missing out.

    • sinedpick@awful.systems · 10 points · 7 months ago

      did you even experience a single conscious thought while writing that? what fucking potential are you referring to? generating reams of scam messages and Internet spam? automating the only jobs that people actually enjoy doing? seriously, where is the thought?