Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet;
at the moment we only have LLMs (Large Language Models),
which do not think on their own
but can pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes
who have already invested in LLM stocks
and are now looking to profit.

          • Thorny_Insight@lemm.ee · 10 months ago

            I don’t understand what you’re even trying to ask. AGI is a subcategory of AI. Every AGI is an AI, but not every AI is an AGI. OP seems to think that AI isn’t “real AI” because it’s not AGI, but those are not the same thing.

            • BlanketsWithSmallpox@lemmy.world · 10 months ago

              AI has been colloquially used to mean AGI for 40 years. About the only exception has been video games, but most people knew better than to think the Goomba was alive.

              At what point did AI get turned into AGI?

      • Pipoca@lemmy.world · 10 months ago

        One low-hanging-fruit example that comes to mind is that LLMs are terrible at board games like chess, checkers, or go.

        ChatGPT is a giant cheater.
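
        A quick way to see this for yourself, as a rough sketch: let the model pick moves and have a rules engine reject the illegal ones. This assumes the python-chess package and a hypothetical ask_llm_for_move() helper standing in for whatever chat API you’d actually call; it’s only a sketch, not a real harness.

          import chess

          def ask_llm_for_move(board: chess.Board) -> str:
              # Placeholder: send board.fen() to the model and return its reply
              # as a SAN move like "Nf3". Not a real API call.
              raise NotImplementedError

          board = chess.Board()
          illegal = 0
          while not board.is_game_over() and len(board.move_stack) < 40:
              san = ask_llm_for_move(board)
              try:
                  board.push_san(san)    # raises ValueError on an illegal move
              except ValueError:
                  illegal += 1           # a move the rules forbid
                  break

          print(f"illegal moves before the game broke down: {illegal}")

        Anything the board object rejects is the kind of “cheating” I mean.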

        • Hotzilla@sopuli.xyz · 10 months ago

          GPT-3 was cheating and playing poorly, but the original GPT-4 already played at the level of a relatively good player, even in the midgame (positions not found on the internet, which require understanding the game, not just copying). GPT-4 Turbo probably isn’t as good; OpenAI had to make it dumber (read: cheaper).

          • Pipoca@lemmy.world · 10 months ago

            Three-year-olds aren’t all that smart, but they learn in a way that ChatGPT 3 and ChatGPT 4 don’t.

            A 3-year-old will eventually become a 30-year-old, but ChatGPT 3 just kinda stays ChatGPT 3 forever. LLMs can be trained offline, but we don’t really know whether that converges to some theoretical optimum at some point, or how far we are from the best possible LLM.

      • esserstein@sopuli.xyz · 10 months ago

        Be generally intelligent, ffs. Are you really going to argue that LLMs produce original insight in anything?

      • doctorcrimson@lemmy.world · 10 months ago

        So basically the ability to do things or learn without direction, for tasks other than what it was created to do. For example, ChatGPT doesn’t know how to play chess and Deep Blue doesn’t write poetry. Either might be able to approximate correct output if tweaked a bit and trained on thousands, millions, or billions of examples of proper output, but neither is capable of learning to think as a human would.

      • Thorny_Insight@lemm.ee · 10 months ago

        Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn’t apply across a variety of fields. Your self-driving car can’t help with your homework. An artificial general intelligence, however, could. Humans possess general intelligence: we can do math, speak different languages, navigate social situations, throw a ball, interpret sights and sounds, etc.

        With a real AGI you don’t need to develop different versions of it for different purposes. It’s generally intelligent, so it can do it all. That also includes writing its own code, which is where the worry about an intelligence explosion originates. Once it’s even slightly better than humans at writing its own code, it can make a more competent version of itself, which will then create an even more competent version, and so on. It’s a chain reaction we might not be able to stop. After all, it’s by definition smarter than us and, being a computer, also millions of times faster.

        Edit: Another feature that an AGI would most likely, though not necessarily, possess is consciousness. There’s a possibility that it feels like something to be generally intelligent.

          • Thorny_Insight@lemm.ee · 10 months ago

            I’ve heard experts say that GPT-4 displays signs of general intelligence, so while I still wouldn’t call it an AGI, I’m in no way claiming an LLM couldn’t ever become generally intelligent. In fact, if I were to bet money on it, I’d say there’s a good chance that this is where our first true AGI systems will originate. We’re just not there yet.

            • Cethin@lemmy.zip · 10 months ago

              It isn’t. It doesn’t understand things in the way we usually mean by intelligence. It generates output that fits a recognized input; if it doesn’t recognize the input in some form, it generates garbage. It doesn’t understand context, and it doesn’t try to generalize knowledge to apply it to different things.

              For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.

      • Cethin@lemmy.zip · 10 months ago

        I wrote this for another reply, but I’ll post it for you too:

        It doesn’t understand things in the way we usually mean by intelligence. It generates output that fits a recognized input; if it doesn’t recognize the input in some form, it generates garbage. It doesn’t understand context, and it doesn’t try to generalize knowledge to apply it to different things.

        For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.