• JoeyJoeJoeJr@lemmy.ml · 10 months ago

    You are falling into a common trap. LLMs do not have understanding: asking one to do things like convert dates and place them on a number line may yield correct results sometimes, but since the model does not understand what it’s doing, it may “hallucinate” dates that look plausible but don’t actually align with the source.

    • Byter@lemmy.one · 10 months ago

      Thank you for calling that out. I’m well aware, but I appreciate the caution.

      I’ve seen hallucinations from LLMs at home and at work (where I’ve literally had them transcribe dates like this). They’re still absolutely worth it for their ability to handle unstructured data and the speed of iteration you get – whether they “understand” the task or not.

      I know to check my (its) work when it matters, and I can add guard rails and selectively make parts of the process more robust later if need be.
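
      For concreteness, here’s roughly what I mean by guard rails, as a minimal stdlib-only Python sketch (the function name, date formats, and example strings are mine, just for illustration): re-check every date the model emits against the source before trusting it.

      ```python
      from datetime import datetime

      def verify_dates(source_text: str, extracted: list[str]) -> list[str]:
          """Keep only the ISO dates from the model that can be traced back
          to the source text; warn about anything it may have invented."""
          verified = []
          for iso in extracted:
              # Parse strictly: a malformed model output fails right here.
              try:
                  d = datetime.strptime(iso, "%Y-%m-%d")
              except ValueError:
                  print(f"unparseable model output: {iso!r}")
                  continue
              # Accept the date only if some common rendering of it actually
              # appears in the source. Naive, but it catches pure inventions.
              renderings = {
                  d.strftime("%Y-%m-%d"),                      # 2024-03-05
                  d.strftime("%d/%m/%Y"),                      # 05/03/2024
                  d.strftime("%B %d, %Y"),                     # March 05, 2024
                  d.strftime("%B %d, %Y").replace(" 0", " "),  # March 5, 2024
              }
              if any(r in source_text for r in renderings):
                  verified.append(iso)
              else:
                  print(f"possible hallucination, not found in source: {iso!r}")
          return verified

      source = "The contract was signed on March 5, 2024 and renewed on 2025-01-10."
      llm_output = ["2024-03-05", "2025-01-10", "2024-07-19"]  # last one invented
      print(verify_dates(source, llm_output))
      # warns about '2024-07-19', returns ['2024-03-05', '2025-01-10']
      ```

      It’s crude string matching, but that’s the point: the deterministic check is cheap, and anything it flags gets a human look.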