• antidote101@lemmy.world · 7 months ago

    To think this is what companies are trying to get away with whilst the technology is still flawed enough to be caught. As it gets more accurate in what it can create, we’re going to have less of a realistic understanding of reality.

  • halcyoncmdr@lemmy.world · 7 months ago

    There’s no reason to even hide this. In the past, if there was no photo to use, they would have had an artist’s rendition; this is no different.

    A disclaimer that it’s AI-generated, just like one for an artist’s rendition, wouldn’t detract from the impact at all.

    • adam_y@lemmy.world · 7 months ago

      I think it is a question of representation.

      If they say what this is, then fine; if they don’t, then it’s a problem.

      The reason being that an artist’s rendition is almost always clearly an artist’s rendition, whereas AI imagery can look uncannily like an actual photograph, and therefore present itself as a primary document.

      The problem with misrepresenting primary documentation, whether deliberately or accidentally, is that this is supposed to be a documentary, one of the few types of show where fact and accuracy (should) matter.

  • AutoTL;DR@lemmings.world · 7 months ago

    This is the best summary I could come up with:


    Netflix has used what strongly appears to be AI-generated or -manipulated images in a recent documentary about a murder-for-hire plot involving a woman named Jennifer Pan that took place in Canada back in 2010.

    The streaming service used the photos to illustrate her “bubbly, happy, confident, and very genuine” personality, as high school friend Nam Nguyen described her.

    The images, which appear around the 28-minute mark of Netflix’s “What Jennifer Did,” have all the hallmarks of an AI-generated photo, down to mangled hands and fingers, misshapen facial features, morphed objects in the background, and a far-too-long front tooth.

    Needless to say, using generative AI to describe a real person in a true-crime documentary is bound to raise some eyebrows.

    But resorting to the tech to generate pictures of a real person, especially of somebody who’s still in jail and will only be eligible for parole around 2040, should raise some alarm bells.

    This isn’t inventing a fictional narrative for the sake of entertainment — this is tinkering with the fabric of reality itself to manipulate a true story that actually happened.


    The original article contains 221 words, the summary contains 181 words. Saved 18%. I’m a bot and I’m open source!

  • stevedidwhat_infosec@infosec.pub · 7 months ago

    I’ve said this since day one: we need a reliable way to identify AI-generated content.

    If we fail to separate the two, or to create safeguards like this, we’re in for a lot more trouble than the destruction of the job market. And that’s saying something.

    “Put it back in the box” isn’t a solution.

    Banning the technology isn’t a solution.

    We must face it for what it is, put our heads together, and create the solution.

    Like we always have.
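
    There is no such reliable identifier today; the closest practical thing is provenance metadata that some generators embed voluntarily. Below is a minimal sketch of checking for those marks (my own illustration, not something from this thread; the file name is hypothetical). The marks are trivially stripped, which is exactly why the replies below call this an arms race.

    ```python
    # Crude provenance check, not a reliable AI detector: it only finds
    # metadata that some generators embed voluntarily and anyone can strip.
    from PIL import Image

    def provenance_hints(path: str) -> list[str]:
        hints = []
        img = Image.open(path)

        # The Stable Diffusion web UI writes its prompt/settings into a PNG
        # "parameters" text chunk, which Pillow exposes via img.info.
        if "parameters" in img.info:
            hints.append("Stable Diffusion web UI parameters chunk present")

        # The EXIF Software tag (0x0131) sometimes names the generating tool.
        software = img.getexif().get(0x0131)
        if software:
            hints.append(f"EXIF Software tag: {software}")

        # Very rough byte-level check for an embedded C2PA provenance manifest.
        with open(path, "rb") as f:
            if b"c2pa" in f.read():
                hints.append("possible C2PA provenance manifest")

        return hints

    print(provenance_hints("suspect_image.png"))  # hypothetical file
    ```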

    • SomeGuy69@lemmy.world · 7 months ago

      If you ever create a reliable tool to identify AI images, you automatically provide training data for an AI to generate images that get past the detection.

    • GBU_28@lemm.ee · 7 months ago

      Unfortunately, an arms race has begun.

      Said tool could be used to train new AI to avoid it.

    • CommanderCloon@lemmy.ml · 7 months ago

      You don’t understand the tech; when making this kind of AI model, you code both a generator of whatever it is you want to make and a “detector” which tells you whether or not the result is convincing.

      Then you tweak the generator slightly based on the results of the “detector”.

      You do that a few million times and then you have a working AI model, the quality of which is dependent on both the amount of training and the “detector”.

      If someone comes up with a really strong “detector”, it will work as intended for a few days or weeks, and then AIs will come onto the market that are able to fool it.

      • stevedidwhat_infosec@infosec.pub · 7 months ago

        I’ve trained and written several different kinds of AI, including neural nets and LLMs.

        This isn’t even close to how LLMs work, let alone how AI works.

        You’re literally describing how to overfit model data, which is the exact opposite of what you want to do.

        Do everyone else a favor next time and don’t try to armchair.

        • CommanderCloon@lemmy.ml · 7 months ago

          I don’t know which kinds of AIs you’ve worked on, but my description (although it uses the incorrect terms) is certainly valid. I’ve described how GANs work; I’m not pulling this out of thin air 🤷‍♂️

          The generative network generates candidates while the discriminative network evaluates them. The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network’s training objective is to increase the error rate of the discriminative network (i.e., “fool” the discriminator network by producing novel candidates that the discriminator thinks are not synthesized (are part of the true data distribution)).

          Wikipedia

          So yes, whatever method you design that allows the product of an AI to be detected can be used by the discriminative network of a GAN, which defeats the purpose of designing the method to begin with.
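
          To make that concrete, here is a toy GAN training loop in PyTorch (my own minimal sketch on 2-D points, not code from anyone in this thread): the discriminator plays exactly the role a published “AI detector” would, and the generator is updated until the detector can no longer tell its output from real data.

          ```python
          # Toy GAN on 2-D points: a generator learns to fool a discriminator.
          import torch
          import torch.nn as nn

          latent_dim, data_dim, batch = 8, 2, 64

          generator = nn.Sequential(
              nn.Linear(latent_dim, 32), nn.ReLU(),
              nn.Linear(32, data_dim),
          )
          discriminator = nn.Sequential(      # the "detector"
              nn.Linear(data_dim, 32), nn.ReLU(),
              nn.Linear(32, 1),               # logit: real vs. generated
          )

          bce = nn.BCEWithLogitsLoss()
          opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
          opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

          def real_batch():
              # Stand-in for the "true data distribution": points around (2, 2).
              return torch.randn(batch, data_dim) * 0.5 + 2.0

          for step in range(2000):
              # 1. Train the detector to separate real from generated samples.
              fake = generator(torch.randn(batch, latent_dim)).detach()
              d_loss = (bce(discriminator(real_batch()), torch.ones(batch, 1))
                        + bce(discriminator(fake), torch.zeros(batch, 1)))
              opt_d.zero_grad(); d_loss.backward(); opt_d.step()

              # 2. Train the generator to make the detector answer "real".
              fake = generator(torch.randn(batch, latent_dim))
              g_loss = bce(discriminator(fake), torch.ones(batch, 1))
              opt_g.zero_grad(); g_loss.backward(); opt_g.step()
          ```

          Drop any external detection method in as (or alongside) the discriminator and the same loop turns it into a training signal for the next generator.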

          • stevedidwhat_infosec@infosec.pub · 7 months ago

            Apologies for the ignorant comment; while GANs have lost popularity in favor of diffusion models, they’re still used, more or less.

            Been having a really shit day and I took it out on you - that wasn’t fair

  • Renegade@infosec.pub · 7 months ago

    Pure speculation, but I wonder if this is a case of having some old, very low-quality photos and trying to enhance and upscale them for the show.
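
    For reference, classical upscaling only interpolates pixels that already exist; it cannot produce the extra fingers or overlong teeth seen in the stills. A minimal Pillow sketch of that non-generative kind of enlargement (file names are hypothetical):

    ```python
    # Minimal non-generative upscaling with Pillow; file names are hypothetical.
    # Lanczos resampling only smooths existing pixels, whereas AI "enhancement"
    # (GAN/diffusion super-resolution) invents detail - hence the odd hands/teeth.
    from PIL import Image

    src = Image.open("archival_photo.jpg")
    upscaled = src.resize(
        (src.width * 4, src.height * 4),      # 4x enlargement
        resample=Image.Resampling.LANCZOS,    # classical interpolation
    )
    upscaled.save("archival_photo_4x.png")
    ```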

    • Jimmycrackcrack@lemmy.ml · 7 months ago

      I’ve done that for broadcast before. Sadly it barely made any difference, but I felt it was at least a little better than nothing: it made it at least possible to sort of see what was supposed to be going on in the low-quality source images, and those were the only images that seemed to exist of the thing we were showing.