Hi! I am a former VFX artist for a few [adultswim] shows who has been using generative imagery for about a year now. My take is nuanced, neither a cheering endorsement nor a condemnation of AI imagery. If anyone has questions or comments, please post them below. Thanks for watching.

  • Something_Complex@lemmy.world · 1 year ago

    Considering how badly such images can be misused, what kind of safeguards can we put in place to stop it from being abused?

    Preferably ones that aren’t invasive to our privacy. After all, the better the images get, the less we will be able to believe unless we see it with our own eyes.

      • Something_Complex@lemmy.world · 1 year ago

        Of course. Say that, before long, you cannot trust a video of someone committing homicide, simply because it could be fake.

        The opposite is also true: you can shatter the public’s trust in an individual without him actually saying or doing any of the things he’s accused of.

        • Pro75357@lemmy.world · 1 year ago

          OK, thanks - so you are asking about protections against misinformation: deepfakes and such.

          As the technology improves, it may become downright impossible to tell real from fake with our own eyes - at which point what counts as “proof” becomes blurry. It risks becoming a “this is why we can’t have nice things” situation, where innocents are at risk of harm (non-AI art getting rejected from competitions because it looks like AI art) and bad actors more often get away with shenanigans. Hopefully we’re smart enough to figure out ways to avoid that kind of future.

          However, I don’t think restricting the technology itself, through legislation or otherwise, would be practical or effective. Forgery and deception are age-old problems, and people aren’t going to stop trying to cheat, lie, and steal. Some people (VFX artists?) can probably already make a believable fake homicide. And just look at all the fake UFO footage out there: we don’t really need AI to deceive people; AI just makes deception more accessible - and perhaps now within reach for some lowlife who needs to cheat to be successful in life. Besides, most countries already have laws against fraud, forgery, and libel - things that hurt others. Regulating “misinformation” itself would be very difficult, though, because it overlaps with legitimate uses such as art and entertainment.

          Of course, it would be nice to have only “ethical” AI - and that is what you are starting to see in the commercial space - but it is pretty easy to bypass these restrictions (not endorsing this, just an example of a quick search result). Also, not all AI systems will even bother trying to be ethical, and once the technology is more accessible, bad actors could just build their own systems from scratch. I also think any attempt at restriction through legal means would significantly hinder legitimate research in the field and slow progress on what may be our best chance at overcoming humanity’s biggest challenges (climate change, etc.).

          I like to think of AI as an extension of the human intellectual tool set - so let’s not treat it like guns or drugs (physical things) but rather like libraries or the internet: regulated to a practical extent, yes, but not really restricted in what it can do. The fact that the internet was not highly regulated or tightly controlled during its inception is a major part of why it is the amazing global network we have today.
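One concrete direction for the “ways to avoid that kind of future” mentioned above is content provenance: instead of trying to detect fakes, cryptographically bind authentic footage to its source at capture time (the idea behind standards like C2PA / Content Credentials). A minimal sketch in Python - using a shared-secret HMAC purely for illustration, whereas real systems use public-key signatures and hardware key attestation; every name below is hypothetical:

```python
# Illustrative only: provenance-style signing, in the spirit of C2PA.
# A trusted capture device signs the raw image bytes; anyone holding the
# key can later check that the file was not altered after capture.
import hashlib
import hmac

DEVICE_KEY = b"hypothetical-device-signing-key"  # stand-in for a real key pair

def sign_image(image_bytes: bytes) -> str:
    """Return a hex signature binding the image to the signing device."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """True only if the bytes are exactly what the device signed."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

frame = b"\x89PNG...raw image data..."
sig = sign_image(frame)
assert verify_image(frame, sig)              # untouched footage verifies
assert not verify_image(frame + b"x", sig)   # any edit breaks the signature
```

A scheme like this is also privacy-preserving in the sense the question asked for: nothing about the viewer is tracked, and unsigned imagery simply carries no claim of authenticity.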

    • simple@lemmy.world · 1 year ago

      what kind of safeguards can we put in place to stop it from being abused

      Realistically, zero. Stable Diffusion is open source, for better or worse, and that means people can tailor it to their needs.