• TheFriar@lemm.ee
      1 month ago

      Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information…well, seems like the blame falls exactly at the feet of the AI.

      • kate@lemmy.uhhoh.com
        1 month ago

        Should an LLM try to distinguish satire? Half of Lemmy users can’t even do that.

        • KevonLooney@lemm.ee
          1 month ago

          Do you just take what people say on here as fact? That’s the problem, people are taking LLM results as fact.

        • BakerBagel
          1 month ago

          It should if you are gonna feed it satire to learn from.

        • ancap shark@lemmy.today
          1 month ago

          If it’s being used to give the definitive answer to a search, then it should. If it can’t, then it shouldn’t be used for that.