A judge in Washington state has blocked “AI-enhanced” video evidence from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

    • Whirling_Cloudburst@lemmy.world · 7 months ago

      Unfortunately it does need pointing out. Back when I was in college, professors would need to repeatedly tell their students that real-world forensics doesn’t work like it does on NCIS. I’m not sure how much things may or may not have changed since then, but with American literacy levels being what they are, I don’t suppose things have changed that much.

        • Whirling_Cloudburst@lemmy.world · 7 months ago

          It’s certainly similar, in that CSI played a role in forming unrealistic expectations in students’ minds. But rather than expecting more physical evidence to be needed for a prosecution, the students expected magic to happen on computers and in lab work (often faster than physically possible).

          AI enhancement is not uncovering hidden visual data; rather, it generates that information from previously existing training data and shoehorns it in. It certainly could be useful, but it is not real evidence.
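
A toy sketch of why that is (plain NumPy, illustrative only — the checkerboard patches are made up): two completely different high-res patterns collapse to the identical low-res pixel, so any “enhancer” asked to reverse the process can only guess which original produced it.

```python
import numpy as np

def downscale_2x(img):
    """Box-filter downscale: average each 2x2 block into one pixel."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Two completely different "high-res" patches...
checker_a = np.array([[0, 255], [255, 0]], dtype=float)
checker_b = np.array([[255, 0], [0, 255]], dtype=float)

# ...collapse to the same single low-res pixel. The mapping is
# many-to-one, so "enhancement" has to invent the lost detail.
print(downscale_2x(checker_a))  # [[127.5]]
print(downscale_2x(checker_b))  # [[127.5]]
```

Since the information is genuinely gone, whatever detail an AI upscaler paints back in comes from its training data, not from the scene.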

    • Stopthatgirl7@lemmy.world (OP) · 7 months ago

      Yes. When people were in full conspiracy mode on Twitter over Kate Middleton, someone took that grainy pic of her in a car and used AI to “enhance” it, then declared it wasn’t her because her mole was gone. It got so much traction that people thought the AI-fixed-up pic WAS her.

      • Mirshe@lemmy.world · 7 months ago

        Don’t forget people thinking that scanlines in a news broadcast over Obama’s suit meant that Obama was a HOLOGRAM and ACTUALLY A LIZARD PERSON.

    • lole@iusearchlinux.fyi · 7 months ago

      I met a student at university last week at lunch who told me he was stressed out about a homework assignment. He needed to write a report with a minimum number of words, so he pasted the text into ChatGPT and asked it how many words it contained.

      I told him that every common text editor has a word count built in, and that ChatGPT is probably not good at counting words (even though it pretends to be).

      Turns out that his report was already waaaaay above the minimum word count and even needed to be shortened.

      So much for the understanding of AI in the general population.

      I’m studying at a technical university.
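
For what it’s worth, the word count an editor reports is a trivial computation that needs no model at all — roughly a whitespace split (the sample sentence here is invented):

```python
# Counting words needs no AI: split on whitespace, which is
# essentially what a text editor's word counter does.
report = "Real forensics does not work like it does on NCIS."
word_count = len(report.split())
print(word_count)  # 10
```

An LLM, by contrast, sees the text as tokens rather than words and tends to produce a plausible-looking but unreliable count.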

    • Altima NEO@lemmy.zip · 7 months ago

      The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it’s almighty. That’s how you wind up with those lawyers who tried using ChatGPT to write up a legal brief that was full of bullshit, and who didn’t even bother to verify whether it was accurate.

      They don’t understand it; they only know that the results look good.

      • T156@lemmy.world · 7 months ago

        Especially since it gets conflated with pop culture. Someone who hears that an AI app can “enhance” an image might think it works like something out of CSI using technosmarts, rather than just making stuff up out of whole cloth.

    • douglasg14b@lemmy.world · 7 months ago

      Of course, not everyone is technologically literate enough to understand how it works.

      That should be the default assumption: things should be explained so that others understand them and can make better-informed decisions.

      • ItsMeSpez@lemmy.world · 7 months ago

        It’s not only that many people aren’t technologically literate enough to understand the limits of this technology — the AI companies are also actively over-inflating their capabilities in order to attract investors. When the most accessible information about the topic is designed to get non-technical investors on board with your company, of course the general public is going to get an overblown idea of what the technology can do.

      • dual_sport_dork 🐧🗡️@lemmy.world · 7 months ago

        And people who believe the Earth is flat, and that Bigfoot and the Loch Ness Monster exist, and that there are reptilians replacing the British royal family…

        People are very good at deluding themselves into all kinds of bullshit. In fact, I posit that they’re even better at it than at learning facts or comprehending empirical reality.

    • melpomenesclevage@lemm.ee · 7 months ago

      It’s not actually worse than eyewitness testimony.

      This is not an endorsement of AI, just pointing out that truth has no place in a courtroom, and refusing to lie will get you locked in a cafe.

      Too good, not fixing it.