When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a court reporter, and the AI chatbot had falsely blamed him for crimes committed by defendants in the very trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations”: inaccurate or nonsensical responses to a user’s prompt, and they’re alarmingly common. Anyone using AI should proceed with great caution, because information from such systems needs human validation and verification before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • TheFriar@lemm.ee · 2 months ago

    So you don’t think these massive megacompanies should be held responsible for making disinformation machines? Why not?

    • futatorius@lemm.ee · 2 months ago

      Yeah, all these systems do is worsen the already bad signal/noise ratio in online discourse.

      • medgremlin · 2 months ago

        Which is why, in many cases, there should be liability assigned. If a self-driving car kills someone, the programming of the car is at least partially to blame, and the company that made it should be liable in the wrongful-death suit, and probably face criminal charges as well. Citizens United already determined that corporations are people…now we just need to put a corporation in prison for its crimes.

        • futatorius@lemm.ee · 2 months ago (edited)

          > If a self-driving car kills someone, the programming of the car is at least partially to blame

          No, it is not. It is the use to which the system has been put that is the point at which blame can be assigned. That use is what should be verified and validated. That’s where some person signs on the dotted line that the system is fit for that particular purpose.

          I can write a simplistic algorithm to guide a toy drone autonomously. So let’s say I GPL it. If an airplane manufacturer then drops that code into an airliner and fails to test it correctly in scenarios resembling real-life use of that plane, they’re the ones who fucked up, not me.
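
          To put a scale on “simplistic”: a hypothetical Python sketch like the one below is about all such toy-drone code might amount to (the function name, gain, and timestep are invented for illustration). Nothing about it is validated for safety-critical use, which is exactly the point.

          ```python
          # SPDX-License-Identifier: GPL-3.0-or-later
          # Hypothetical toy-drone guidance: steer toward a waypoint with naive
          # proportional control. Illustrative only -- nothing here is tested or
          # validated for safety-critical use.

          def guidance_step(pos, target, gain=0.5):
              """Return a (vx, vy, vz) velocity command proportional to the
              position error between pos and target (both (x, y, z) in meters)."""
              return tuple(gain * (t - p) for p, t in zip(pos, target))

          if __name__ == "__main__":
              pos = (0.0, 0.0, 0.0)       # where the drone starts
              target = (10.0, 5.0, 2.0)   # where we want it to go
              for _ in range(50):
                  cmd = guidance_step(pos, target)
                  # Naive integration with a fixed 0.1 s timestep: no sensor
                  # noise, no wind, no failure modes -- none of the scenario
                  # testing an airliner would demand.
                  pos = tuple(p + 0.1 * v for p, v in zip(pos, cmd))
              print("final position:", tuple(round(p, 2) for p in pos))
          ```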

      • futatorius@lemm.ee · 2 months ago

        No liability should apply while coding. When that code is deployed for use, there should be liability if it is unfit for its intended use. If your AI falsely denies my insurance claim, your ass should be on the line.