Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. touting slavery’s positives.
To repeat something another user on Lemmy said:
Making AI say slavery is good is the modern equivalent of writing
BOOBS
on a calculator.
If it’s only as good as the data it’s trained on (garbage in, garbage out), then in my opinion it’s “machine learning,” not “artificial intelligence.”
Intelligence has to include some critical, discriminating faculty, not just pattern-matching vomit.
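To make the garbage-in/garbage-out point concrete, here’s a toy bigram sketch (nothing like a real LLM in scale or architecture, but the same basic mechanic). The corpus and function names are made up for illustration: the “model” can only ever stitch together word pairs it has seen in its training text, with no faculty to judge them.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Learn which word follows which -- nothing more than counting pairs."""
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=5, seed=0):
    """Emit text by repeatedly sampling an observed successor word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(options))
    return " ".join(out)

# A corpus containing only one opinion can only ever echo that opinion.
biased_corpus = "slavery was profitable slavery was normal"
model = train_bigrams(biased_corpus)
print(generate(model, "slavery"))  # every continuation comes from the corpus
```

Feed it garbage and you get garbage back, fluently; there is no step where the model asks whether the pattern it learned is true or moral.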
Scathing and accurate when your point is made about people too.
We don’t yet have the technology to create actual artificial intelligence. It’s an annoyingly pervasive misnomer.
And the media isn’t helping. The title of the article is “Google’s Search AI Says Slavery Was Good, Actually.” It should be “Google’s Search LLM Says Slavery Was Good, Actually.”
Yup, “AI” is the current buzzword.
Hey, just like blockchain tech!
Unfortunately, people who grow up in racist groups also tend to be racist. Slavery used to be considered normal and justified for various reasons. For many, killing someone whose religion or beliefs differ from their own is OK. I am not advocating for moral relativism, just pointing out that a computer learns what is or is not moral in the same way that humans do: from other humans.
You make a good point. Though humans at least sometimes do some critical thinking between absorbing something and then acting it out.
Not enough. Not enough.
Whoa there… Slavery was great! For the enslaver.
John Brown would like to know your location
so it’s a little bit conservative big deal
deleted by creator
I’ve worked with software engineers for 25 years and they come in all stripes. It’s not a blue state thing or red state thing. They are all over the world, many having immigrated somewhere. There’s absolutely no guarantee that a genius programmer is even a moderately decent human being. Those things just don’t correlate.
There are a surprising number of furries in IT and dev positions.
Could be worse. If dogfuckers have to exist, then I’d rather have them working with cold, unfeeling machines.
Furries ≠ zoophiles.
The chances are about the same as for anything else. But I’m not sure what that has to do with AI. It’s being fed things from the internet for a reason, and good luck changing any of that information to your whim.
deleted by creator
just wait until those articles can be written by LLMs
There needs to be like an information campaign or something… The average person doesn’t realize these things say what they think you want to hear, and they are buying into hype and think these things are magic knowledge machines that can tell you secrets you never imagined.
I mean, I get that the people working on the LLMs want them to be magic knowledge machines, but it is really putting the cart before the horse to let people assume they already are, and the little warnings that some put at the bottom of the page are inadequate.
I mean, on the ChatGPT site there’s literally a disclaimer along the bottom saying it’s able to say things that aren’t true…
people assume they already are [magic knowledge machines], and the little warnings that some put at the bottom of the page are inadequate.
You seem to have missed the bottom-line disclaimer of the person you’re replying to, which is an excellent case-in-point for how ineffective they are.
Unfortunately, people are stupid and don’t pay attention to disclaimers.
And, I might be wrong, but didn’t they only add those in recently after folks started complaining and it started making the news?
I feel like I remember them being there since January of this year, which is when I started playing with ChatGPT, but I could be mistaken.
I had a friend who read to me this beautiful thing ChatGPT wrote about an idyllic world. The prompt had been something like, “write about a world where all power structures are reversed.”
And while some of the stuff in there made sense, not all of it did. Like, “in schools, students are in charge and give lessons to the teachers” or something like that.
But she was acting like ChatGPT was this wise thing that had delivered a beautiful way for society to work.
I had to explain that, no, ChatGPT gave the person who made the thing she shared what they asked for. It’s not a commentary on the value of that answer at all, it’s merely the answer. If you had asked ChatGPT to write about a world where all power structures were double what they are now, it would give you that.
deleted by creator
If you ask an LLM for bullshit, it will give you bullshit. Anyone who is at all surprised by this needs to quit acting like they know what “AI” is, because they clearly don’t.
I always encourage people to play around with Bing or ChatGPT. That way they’ll get a very good idea of how and when an LLM fails. Once you have your own experiences, you’ll also have a more realistic and balanced opinion about it.
So the AI provided factual information and they did not like that, because “slavery bad, therefore there was no benefit to it.” There were benefits to slavery, mainly for the owners. The US had a huge cotton export at one point, with the fields being worked by slaves.
But also a very few slaves did benefit, like being able to work a job that taught them very useful skills, which let them buy their own freedom, as they were able to earn money from it. Of course not being a slave in the first place would be far better, but when you are one already, learning a skill that earns you your freedom and a job afterwards is quite the blessing. Plus a few individuals might’ve been living in such terrible conditions that being forced to work while getting fed might not have been so bad…
Obviously it doesn’t “think” any of these things. It’s just a machine repeating back a plausible mimicry.
What does scare me though is what google execs think.
They will be tweaking it to remove obvious things like praise of Hitler, because PR, but what about all the other stuff? Like, most likely it will be saying things like what a great guy Masaji Kitano was for founding Green Cross and being such an experimental innovator, and no one will bat an eye because they haven’t heard of him.
As we outsource more and more of our research and fact checking to machines, errors in knowledge are going to be reproduced and reinforced. Like how Cinderella now has “glass” slippers.
Sounds like the bot has been training on Florida public education and Prager U content.
You need a slash now instead of “and”.
Must have gone to school in Florida.
The basic problem with AI is that it can only learn from things it reads on the Internet, and the Internet is a dark place with a lot of racists.
What if someone trained an LLM exclusively on racist forum posts? That would be hilarious. Or better yet, another LLM trained on conspiracy-BS conversations. Now that one would be spicy.
It turns out that Microsoft inadvertently tried this experiment. The racist forum in question happened to be Twitter.
LOL, that was absolutely epic. Even found this while digging around.
deleted by creator
Thanks. Great video. Had a lot of fun watching it again.
Here is an alternative Piped link(s): https://piped.video/efPrtcLdcdM?si=ZLQO4xcHx_6pWpcZ
Piped is a privacy-respecting open-source alternative frontend to YouTube.
Humanity without the veneer, the filter, of IRL consequences.
A bit of a nitpick, but it was technically right on that one thing…
Hitler was an “effective” leader… not a good or a moral one, but if he had not been as “successful” at carrying out genocide, I doubt he’d be more than a small mention in history.
Now, a better AI should have realized that giving him as an example was offensive in that context.
In an educational setting this might be more appropriate, to teach that success does not equal morally good. Something I wish more people were aware of.
Shooting someone is an effective way to get to the town hall if the town hall building is also where the police department and jail are.
Effective ≠ net positive
Hitler wanted to kill Jews and used his leadership position to make it happen; soldiers and citizens blindly followed his ideology, and millions died before he was finally stopped.
Calling him not effective is an insult to the horrid damage caused by the Holocaust. But I recognize your sincerity and I see we are not enemies, so let us not fight.
I don’t need to reform the image of Nazis or Hitler. Decent people know they are synonymous with evil and hatred, and they should be.