The allergy, called alpha-gal syndrome, came to light a little over a decade ago.

  • over_clox@lemmy.world · 1 year ago

    I feel for you and anyone suffering with a meat allergy, but I dunno how much I’d trust AI for any serious purposes after seeing the garbage it can spit out.

    Seriously, I’ve managed to get AI to write me instructions on how to inflate a phone and how to shave alligator hair. Rather than saying “I’m sorry, that doesn’t make any sense, but here are some related topics,” it literally wrote out actual instructions for that nonsense LOL!

    So yeah, I have no reason to trust AI for anything serious. It’s an ignorant joke of a language model, and that’s all it really adds up to.

    • Empyreus@lemmy.world · 1 year ago

      That’s specifically true of an LLM, which would probably not be the best AI base for medical uses.

      • apemint@kbin.social · 1 year ago

        People still don’t understand that AI is an all-encompassing term like “tool,” not a single thing.

        Just like we use thousands of vastly different and specialized tools, in a decade we’ll be surrounded by medical AI, engineering AI, accounting AI, design AI, research AI, life coaching AI, etc.

        Right now we have a few LLMs and generative AIs, but that’s like having a pen and a spray gun.
        Of course you wouldn’t ask any of them for a medical diagnosis.

    • wahming@kbin.social · 1 year ago

      In a use case like this, AI would be less about a final diagnosis and more about getting the doctor or patient pointed in the right direction, especially with rare cases that few doctors are aware of. You no longer need to visit a hundred specialists in the hope of finding the one person who’s seen something similar to your case before.

      • The Pantser@lemmy.world · 1 year ago

        Agreed. In this case AI is just a WebMD symptom checker, but with the ability to take in far more data points, narrow the possibilities down with follow-up questions, and hopefully accept uploaded images for further diagnosis.
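The narrowing-by-questions idea described above amounts to iterative elimination: each confirmed symptom removes candidate conditions that don't fit. A minimal sketch (the condition list and symptom sets here are invented purely for illustration, not real diagnostic criteria):

```python
# Toy symptom-narrowing loop: each answered question eliminates
# conditions inconsistent with the confirmed symptoms.
conditions = {
    "alpha-gal syndrome": {"hives", "nausea", "delayed onset after meat"},
    "lactose intolerance": {"nausea", "bloating"},
    "IBS": {"bloating", "cramping"},
}

def narrow(candidates, confirmed):
    """Keep only conditions whose symptom set covers every confirmed symptom."""
    return {name: syms for name, syms in candidates.items()
            if confirmed <= syms}

# First answer: patient confirms nausea -> two candidates remain.
candidates = narrow(conditions, {"nausea"})
# Second answer: symptoms appear hours after eating meat -> one remains.
candidates = narrow(candidates, {"nausea", "delayed onset after meat"})
print(sorted(candidates))  # ['alpha-gal syndrome']
```

A real system would weight symptoms probabilistically rather than hard-filtering, but the narrowing loop is the same shape.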

    • Troy@lemmy.ca · 1 year ago

      Yeah, I’m not talking about a language-model AI, but rather something like the stuff insurance companies use to assess risk – they take a lot of data in and cluster it. Humans are sometimes really bad at recognizing patterns when they don’t have enough data. A pattern that goes “oh, all these people in this region with this specific digestive problem spatially map to this insect” is the sort of thing ML should be good at.

      But where it will be really good is in turning proteins into diagnoses: “if this protein is detected in the blood in a general scan, combined with these symptoms, then diagnose X” – right now you only get tested for the things the doctor orders. More promising yet: with enough data, the AI should figure out which proteins actually perform specific functions in the body, which will advance the research side (see, for example, AlphaFold).
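The spatial pattern in that comment – a symptom clustering in one region – can be sketched with a toy frequency count (all records below are invented for illustration; a real system would run proper clustering over far richer data):

```python
from collections import Counter

# Toy patient records as (region, symptom) pairs. Entirely made-up data.
records = [
    ("southeast", "red-meat allergy"),
    ("southeast", "red-meat allergy"),
    ("southeast", "hives"),
    ("northwest", "hives"),
    ("southeast", "red-meat allergy"),
    ("northwest", "fatigue"),
]

def regional_signal(records, symptom):
    """Share of cases of a symptom per region; a strong skew toward one
    region is the kind of spatial pattern worth flagging for follow-up."""
    counts = Counter(region for region, s in records if s == symptom)
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}

print(regional_signal(records, "red-meat allergy"))  # {'southeast': 1.0}
```

Here every red-meat-allergy case comes from the southeast – the sort of geographic skew that, at scale, pointed researchers toward the lone star tick.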

    • conciselyverbose@kbin.social · 1 year ago

      Pattern recognition is something modern techniques are very good at.

      ChatGPT isn’t that. It also isn’t intelligent and doesn’t know anything. It’s basically a jacked-up parrot blindly throwing words together.

    • DaSaw · 1 year ago

      In this application, AI would really just be a fancy search engine the doctor can use to look for things he doesn’t already know about.