Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. listing slavery’s positives.

  • Caveman@lemmy.world

    To repeat something another guy on lemmy said.

    Making AI say slavery is good is the modern equivalent of writing BOOBS on a calculator.

  • scarabic@lemmy.world

    If it’s only as good as the data it’s trained on, garbage in / garbage out, then in my opinion it’s “machine learning,” not “artificial intelligence.”

    Intelligence has to include some critical, discriminating faculty. Not just pattern matching vomit.

    • samus12345@lemmy.world

      We don’t yet have the technology to create actual artificial intelligence. It’s an annoyingly pervasive misnomer.

    • profdc9@lemmy.world

      Unfortunately, people who grow up in racist groups also tend to be racist. Slavery used to be considered normal and justified for various reasons. For many, killing someone whose religion or beliefs differ from their own is OK. I am not advocating for moral relativism, just pointing out that a computer learns what is or is not moral in the same way that humans do: from other humans.

    • scarabic@lemmy.world

      I’ve worked with software engineers for 25 years and they come in all stripes. It’s not a blue state thing or red state thing. They are all over the world, many having immigrated somewhere. There’s absolutely no guarantee that a genius programmer is even a moderately decent human being. Those things just don’t correlate.

    • Dark_Lords_Servant@lemmynsfw.com

      Chances are about the same as for anything else. But I am not sure what that has to do with AI. It’s being fed things from the internet for a reason, and good luck changing any of that information to your whim.

  • Stoneykins [any]@mander.xyz

    There needs to be like an information campaign or something… The average person doesn’t realize these things say what they think you want to hear, and they are buying into hype and think these things are magic knowledge machines that can tell you secrets you never imagined.

    I mean, I get that the people working on the LLMs want them to be magic knowledge machines, but it is really putting the cart before the horse to let people assume they already are, and the little warnings at the bottom of the page that it may get some stuff wrong are inadequate.

    • fsmacolyte@lemmy.world

      I mean, on the ChatGPT site there’s literally a disclaimer along the bottom saying it’s able to say things that aren’t true…

      • Flambo@lemmy.world

        people assume they already are [magic knowledge machines], and the little warnings at the bottom of the page that it may get some stuff wrong are inadequate.

        You seem to have missed the bottom-line disclaimer of the person you’re replying to, which is an excellent case-in-point for how ineffective they are.

      • stopthatgirl7@kbin.socialOP

        Unfortunately, people are stupid and don’t pay attention to disclaimers.

        And, I might be wrong, but didn’t they only add those in recently after folks started complaining and it started making the news?

        • fsmacolyte@lemmy.world

          I feel like I remember them being there since January of this year, which is when I started playing with ChatGPT, but I could be mistaken.

    • TheRealKuni@lemmy.world

      I had a friend who read to me this beautiful thing ChatGPT wrote about an idyllic world. The prompt had been something like, “write about a world where all power structures are reversed.”

      And while some of the stuff in there made sense, not all of it did. Like, “in schools, students are in charge and give lessons to the teachers” or something like that.

      But she was acting like ChatGPT was this wise thing that had delivered a beautiful way for society to work.

      I had to explain that, no, ChatGPT simply gave the person who wrote that prompt what they asked for. It’s not a commentary on the value of that answer at all; it’s merely the answer. If you had asked ChatGPT to write about a world where all power structures were double what they are now, it would give you that.
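
      To make that concrete, here’s a minimal sketch, assuming the OpenAI Python client with an API key in the environment (the model name and prompts are only illustrative): either prompt comes back with an equally confident, fluent answer, because the model just completes whatever it was asked for.

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        prompts = [
            "Write about a world where all power structures are reversed.",
            "Write about a world where all power structures are double what they are now.",
        ]

        for prompt in prompts:
            # The model doesn't judge whether the premise is wise or even coherent;
            # it just produces the most plausible continuation of the request.
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name
                messages=[{"role": "user", "content": prompt}],
            )
            print(response.choices[0].message.content)

      Nothing in either response signals that one premise is any more meaningful than the other.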

  • lolcatnip@reddthat.com

    If you ask an LLM for bullshit, it will give you bullshit. Anyone who is at all surprised by this needs to quit acting like they know what “AI” is, because they clearly don’t.

    • Hamartiogonic@sopuli.xyz

      I always encourage people to play around with Bing or ChatGPT. That way they’ll get a very good idea of how and when an LLM fails. Once you have your own experiences, you’ll also have a more realistic and balanced opinion about it.

  • Dark_Lords_Servant@lemmynsfw.com

    So the AI provided factual information and they did not like that, because ‘slavery bad, therefore there was no benefit to it.’ There were benefits to slavery, mainly for the owners. The US had huge cotton exports at one point, with the fields being worked by slaves.

    But a very few slaves did benefit too, like being able to work a job that taught them very useful skills and let them earn money, which they could use to buy their own freedom. Of course not being a slave in the first place would be far better, but when you are one already, learning a skill that lets you earn your freedom and get a job afterwards is quite the blessing. Plus, a few individuals might’ve been living in such terrible conditions that being forced to work while getting fed might not have been so bad…

  • livus@kbin.social

    Obviously it doesn’t “think” any of these things. It’s just a machine repeating back a plausible mimicry.

    What does scare me, though, is what Google execs think.
    They will be tweaking it to remove obvious things like praise of Hitler, because PR, but what about all the other stuff?

    Like, most likely it will be saying things like what a great guy Masaji Kitano was for founding Green Cross and being such an experimental innovator, and no one will bat an eye because they haven’t heard of him.

    As we outsource more and more of our research and fact checking to machines, errors in knowledge are going to be reproduced and reinforced. Like how Cinderella now has “glass” slippers.

  • SqueezeMeMacaroni@thelemmy.club

    The basic problem with AI is that it can only learn from things it reads on the Internet, and the Internet is a dark place with a lot of racists.

  • webghost0101@sopuli.xyz

    A bit of a nitpick, but it was technically right on that one thing…

    Hitler was an “effective” leader… not a good or a moral one, but if he had not been as successful at carrying out genocide, then I doubt he’d be more than a small mention in history.

    Now, a better AI should have realized that giving him as an example was offensive in that context.

    In an educational setting this might be more appropriate, to teach that success does not equal morally good. Something I wish more people were aware of.

      • webghost0101@sopuli.xyz

        Shooting someone is an effective way to get to the town hall if the town hall building is also where the police department and jail are.

        Effective =/= net positive

        Hitler wanted to kill Jews and used his leadership position to make it happen; soldiers and citizens blindly followed his ideology, and millions died before he was finally stopped.

        Calling him not effective is an insult to the horrid damage caused by the Holocaust. But I recognize your sincerity and I see we are not enemies. So let us not fight.

        I don’t need to reform the image of Nazis and Hitler. Decent people know they are synonymous with evil and hatred, and they should be.