Over half of all tech industry workers view AI as overrated

  • Boozilla@lemmy.world · 1 year ago

    I think it will be the next big thing in tech (or “disruptor” if you must buzzword). But I agree it’s being way over-hyped for where it is right now.

    Clueless executives barely know what it is; they just know they want to get ahead of it in order to remain competitive. Marketing types reporting to those executives oversell it (because that’s their job).

    One of my friends is an overpaid consultant for a huge corporation, and he says they are trying to force-retrofit AI onto things where it barely makes any sense…just so they can say it’s “powered by AI”.

    On the other hand, AI is much better at some tasks than humans. That AI skill set is going to grow over time. And the accumulation of those skills will accelerate. I think we’ve all been distracted, entertained, and a little bit frightened by chat-focused and image-focused AIs. However, AI as a concept is broader and deeper than just chat and images. It’s going to do remarkable stuff in medicine, engineering, and design.

    • bassomitron@lemmy.world · 1 year ago

      Personally, I think medicine will be the field most impacted by AI. Medicine has been increasingly implementing AI in many areas, and as the tech continues to mature, I am optimistic it will have a tremendous effect. There are already many studies confirming AI’s ability to outperform leading experts in early cancer and disease diagnoses. Just think what kind of impact that could have in developing countries once the tech is affordably scalable. Then factor in how it can greatly speed up treatment research, and it’s pretty exciting.

      That being said, it’s always wise to remain cautiously skeptical.

      • dustyData@lemmy.world · 1 year ago

        “AI’s ability to outperform leading experts in early cancer and disease diagnoses”

        It does, but it also has a black box problem.

        A machine learning algorithm tells you that your patient has a 95% chance of developing skin cancer on his back within the next 2 years. OK, cool, now what? What, specifically, is telling the algorithm that? What is actionable today? Do we start oncological treatment? According to what, attacking what? Do we just ask the patient to aggressively avoid the sun and use liberal amounts of sunscreen? Do we start screening monthly, bi-monthly, yearly, and for how long do we keep it up? Should we focus only on the part that shows high risk, or everywhere? Should we use the ML every single time? What is the most efficient and effective use of the tech? We know it’s accurate, but is it reliable?

        There are a lot of moving parts to a general medical practice. AI has to find a proper role there, which requires not just an abstract statistic from an ad-hoc study, but a systematic approach to healthcare. Right now it doesn’t have that, because the AI model can’t tell its handlers what it is seeing, what it means, and how it fits into the holistic view of human health. We can’t just blindly trust it when there are human lives on the line.

        As you can see, this seems to relegate AI to a research role for the time being, not a diagnostic one.
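
        To make the gap concrete, here’s a toy sketch (synthetic data, made-up numbers, scikit-learn purely for illustration): even when the risk score is “accurate”, nothing in the output says what to do about it.

        ```python
        # Toy sketch of the black-box gap -- synthetic data, illustrative only.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 20))          # 20 unnamed imaging/lab features
        y = (X[:, 3] + X[:, 7] > 1).astype(int)  # hidden rule the model has to learn

        model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

        patient = rng.normal(size=(1, 20))
        risk = model.predict_proba(patient)[0, 1]
        print(f"Predicted 2-year skin-cancer risk: {risk:.0%}")

        # Global feature importances exist, but they name column indices, not causes,
        # and say nothing about treatment, screening intervals, or sunscreen:
        print(model.feature_importances_.round(2))
        ```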

        • randon31415@lemmy.world · 1 year ago

          There is a very complex algorithm for determining your risk of skin cancer: Take your age … then add a percent symbol after it. That is the probability that you have skin cancer.

    • agent_flounder@lemmy.world · 1 year ago

      Like you say, “AI” isn’t just LLMs and making images. We have previously seen, for example, expert systems, speech recognition, natural language processing, computer vision, and machine learning, and now LLMs and generative art.

      The earlier technologies have gone through their own hype cycles and come out the other end to be used in certain useful ways. AI has no doubt already done remarkable things in various industries. I can only imagine that will be true for LLMs some day.

      I don’t think we are very close to AGI yet. Current AI like LLMs and machine vision requires a lot of manual training and tuning. As far as I know, few AI technologies can learn entirely on their own, and those that do are limited in scope. I’m not even sure AGI is really necessary to solve most problems. We may do AI “à la carte” for many years, and one day someone will stitch a bunch of things together, et voilà.

      • Boozilla@lemmy.world · 1 year ago

        Thanks.

        I’m glad you mentioned speech. Tortoise-TTS is an excellent text-to-speech AI tool that anyone can run on a GPU at home. I’ve been looking for a TTS tool that can generate a more natural-sounding voice for several years. Tortoise is somewhat labor-intensive to use for now, but to my ear it sounds much better than the more expensive cloud-based solutions. It can clone voices convincingly, too (which is potentially problematic).
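
        Rough idea of how it’s used, going from memory of the project’s README (double-check the repo, the API may have moved):

        ```python
        # Minimal Tortoise-TTS sketch, from memory of the README -- verify against the repo.
        import torchaudio
        from tortoise.api import TextToSpeech
        from tortoise.utils.audio import load_voice

        tts = TextToSpeech()  # downloads model weights on first run; realistically wants a GPU

        # 'tom' is one of the sample voices shipped with the repo; add your own
        # clips to a voice folder to clone a voice instead
        voice_samples, conditioning_latents = load_voice('tom')

        gen = tts.tts_with_preset(
            "Over half of all tech workers view AI as overrated.",
            voice_samples=voice_samples,
            conditioning_latents=conditioning_latents,
            preset='fast',  # speed/quality trade-off; 'high_quality' sounds best
        )
        torchaudio.save('generated.wav', gen.squeeze(0).cpu(), 24000)  # 24 kHz output
        ```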

        • agent_flounder@lemmy.world · 1 year ago

          Ooh thanks for the heads up. Last time I played with TTS was years ago using Festival, which was good for the time. Looking forward to trying Tortoise TTS.

      • thedeadwalking4242@lemmy.world · 1 year ago

        Honestly, I believe AGI is currently more a compute-resource problem than a software problem. A paper came out a while ago showing that individual neurons in the human brain display behavior like decently sized deep learning models. If this is true, the number of nodes required for artificial neural nets to even come close to human-like intelligence may be astronomically higher than predicted.
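
        Back-of-envelope, with totally made-up round numbers, just to show the scale problem:

        ```python
        # Back-of-envelope only -- both numbers are illustrative assumptions, not from the paper.
        neurons_in_brain = 86e9      # commonly cited human neuron count
        units_per_neuron = 1_000     # assume emulating ONE neuron takes a ~1k-unit deep net

        total_units = neurons_in_brain * units_per_neuron
        print(f"{total_units:.1e} artificial units")  # 8.6e+13, far beyond today's largest nets
        ```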

        • NightAuthor@lemmy.world · 1 year ago

          That’s my understanding as well: our brain is just an insane composition of incredibly simple mechanisms. It’s compositions of compositions of compositions, ad nauseam. We are manually simulating billions of years of evolution, using ourselves as a blueprint. We can get there… it’s hard to say when, but it’ll be interesting to watch.

          • thedeadwalking4242@lemmy.world · 1 year ago

            Exactly. Plus, human consciousness might not be the most effective way to do it; there might be easier, less resource-intensive ways.