• kakes@sh.itjust.works · 11 months ago

    Kinda makes sense, right?

    The AI images represent what an AI thinks a human “should” look like, so when another AI (likely trained on a similar dataset) tries to classify them, the AI images will more closely fit what it expects a human to look like.
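
    To make that circularity concrete, here's a toy sketch (pure NumPy, entirely made up, nothing to do with the paper's actual setup): a scorer that learns a "typical human" prototype from training data will rate samples generated near that same prototype as more typical than real held-out samples.

    ```python
    # Toy model: "faces" are just vectors; "humanness" is closeness to a
    # learned prototype. All numbers here are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # "Real faces": noisy points around some true prototype in feature space.
    true_mean = rng.normal(size=64)
    real_train = true_mean + rng.normal(scale=1.0, size=(500, 64))
    real_test = true_mean + rng.normal(scale=1.0, size=(500, 64))

    # The "classifier" learns a prototype from the training set and scores
    # typicality as negative distance to that prototype.
    prototype = real_train.mean(axis=0)

    def humanness(x):
        return -np.linalg.norm(x - prototype, axis=-1)

    # The "generator" was fit to the same data, so it samples near the same
    # prototype, with less spread than real faces have.
    generated = prototype + rng.normal(scale=0.5, size=(500, 64))

    print("mean score, real faces:     ", humanness(real_test).mean())
    print("mean score, generated faces:", humanness(generated).mean())
    ```

    Because the "generator" sticks close to the prototype learned from the same data, its samples come out as hyper-typical: they score as *more* human than actual humans do.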

      • kakes@sh.itjust.works · 11 months ago

        Ahh, you’ll be unsurprised to hear I didn’t actually read the paper. Thanks for correcting me.

        That said, I still generally stand by my comment. While that makes this finding much more interesting, it does also make sense that the AI faces look like what our brains recognize as human.

        • Zeth0s@lemmy.world · 11 months ago

          It does make sense indeed. But it also means AI has become very good at matching our expectations. We have reached a level of very good AI.

    • RecallMadness@lemmy.nz · 11 months ago

      Exactly. The AI’s job is to generate humanness. The things that don’t look human get discarded; the things that have strong human indicators get kept. Oh look, the AI did its job. Shocked pikachu.
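
      To spell the selection effect out, here's a trivial hypothetical sketch (with "humanness" reduced to a single number instead of an image): if candidates are only kept when a discriminator scores them above a threshold, the survivors score high on that criterion by construction.

      ```python
      # Made-up illustration of filtering candidates on a "humanness" score.
      import random

      random.seed(1)

      def discriminator(sample):
          # Stand-in for a learned "does this look human?" score in [0, 1].
          return sample

      candidates = [random.random() for _ in range(10_000)]
      # Discard anything that doesn't score strongly human.
      kept = [s for s in candidates if discriminator(s) > 0.8]

      print(f"kept {len(kept)} of {len(candidates)} candidates")
      print(f"avg humanness of kept: {sum(kept) / len(kept):.3f}")
      ```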

      The white thing is probably just a case of biased training data, which is going to be a problem across all AIs. I wouldn’t be surprised if in 5-10 years (if the fad lasts longer than NFTs lmao) we find out the ‘AIs’ have all been fed biased data as yet another means of large corporations controlling the narrative of the population.
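
      As a toy illustration of how a skewed data mix propagates (invented numbers, not any real dataset): a model whose notion of "typical" is an average over a 90/10 mix of two groups will rate the majority group as more typical.

      ```python
      # Hypothetical sketch: a model fit to skewed data inherits the skew.
      import numpy as np

      rng = np.random.default_rng(2)

      # 90/10 split between two invented demographic groups in feature space.
      group_a = rng.normal(loc=0.0, size=(900, 16))  # over-represented
      group_b = rng.normal(loc=3.0, size=(100, 16))  # under-represented
      prototype = np.vstack([group_a, group_b]).mean(axis=0)

      def typicality(x):
          # Higher (less negative) = closer to the learned "average" face.
          return -np.linalg.norm(x - prototype, axis=-1).mean()

      print("avg typicality, group A:", typicality(group_a))
      print("avg typicality, group B:", typicality(group_b))
      ```

      The majority group looks more "typical" purely because of the data mix, which is the kind of bias being described here.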