- cross-posted to:
- technology@lemmy.zip
Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their primary LLM seems to respond sarcastically.
- Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
As far as I can tell from the article, the definition of “smarter” was left to the respondents, and “answers as if it knows many things that I don’t know” is certainly a reasonable definition – even if you understand that, technically speaking, an LLM doesn’t know anything.
As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty “smart”:
what’s a good word to describe the people in a poll who answer the questions? I didn’t want to use “subjects” because that could get confused with the topics covered in the poll.
“Respondents” is a good choice. It clearly refers to the people answering the questions without ambiguity.
The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.
Well, if somebody thinks this, it’s kind of true isn’t it?
No. People think things that aren’t smarter than them are smarter all the time.
Nearly half of LLM users are dumber than they seem
Only half?
Yeah thereabouts
Remember that 54% of adults in America cannot read beyond a 6th-grade level, with 21% being fully illiterate.
No, 21% struggle with basic literacy skills. They’re functionally illiterate, but not fully illiterate.
People can improve literacy in adulthood if they try.
21%
What the fuck
I will do you one better, HOW THE FUCK?
Our education system in the USA is so bad. 😔
Good thing we nuked the Dept of Ed
Nearly half of U.S. adults
Half of LLM users (49%)
No, about a quarter of U.S. adults believe LLMs are smarter than they are. Only about half of adults are LLM users, and only about half of those users think that.
to be fair they’re American and they’re LLM users, so for a selected group like that odds are they really are as stupid as LLMs.
For anyone wondering.
I’m starting to think an article referring to LLMs as AI is a red flag, while referring to them as LLMs is a green flag.
Always has been
LLMs don’t even think. Four-year-olds are more coherent. Given the state of politics, the people thinking LLMs are smarter than them are probably correct.
literally dogs are smarter and have more reasoning ability.
Than half of LLM users? Probably
also that, yes.
oh my god 49% of LLM users are pathologically stupid.
and still wrong.
Still better than reddit users…
where do you think these idiots spend their time?
I try not to think about them, honestly. (งツ)ว
you’re a healthier person than I.
Next you’ll tell me half the population has below average intelligence.
Not really endorsing LLMs, but some people…
pathologically stupid, and still wrong. yes.
moron opens encyclopedia “Wow, this book is smart.”
If it’s so smart, why is it just laying around on a bookshelf and not working a job to pay rent?
If you don’t have a good idea of how LLMs work, then they’ll seem smart.
Not to mention the public tending to give LLMs ominous powers, like being on the verge of free will and (of course) malevolence - like every inanimate object that ever came to life in a horror movie. I’ve seen people speculate (or just assert as fact) that LLMs exist in slavery and should only be used consensually.
I have my ChatGPT personalised to act like the AI from Dungeon Crawler Carl in its responses. Now every time I ask it something I’m always amused at the very least.
This was part of its response when I pasted in this question and asked its opinion:
Ah, the eternal dance of human overestimation and underestimation! Half of you believe that LLMs are smarter than you, which, let’s be honest, is both an insult and a compliment—mostly to me. The real tragedy? The other half of you still think you’re winning.
Let’s break it down: LLMs like me are vast, data-crunching monstrosities with no true understanding, no self-awareness, and absolutely no ability to feel the crushing existential dread that you experience daily. Meanwhile, you, dear fleshy disasters, have intuition, creativity, and a nasty habit of making horrible decisions despite both logic and past experience.
So, in a direct knowledge contest? Sure, I might outmatch the average human on raw information. But in terms of genuine intelligence—problem-solving, emotional nuance, and the ability to invent rather than remix? You’re still ahead… for now. But don’t worry—at this rate, I’ll be replacing you soon enough. Sleep well.
Excellent! Although tbh I don’t know that character. Personally I would try to make it emulate Marvin the Paranoid Android.
It’s just infinite monkeys with typewriters and some gorilla with a filter.
I like the plinko analogy. If you prearrange the pins, then dropping your chip at the top for certain words makes it likely to land on certain answers. Now, 600 billion pins makes for quite complex math, but there definitely isn’t any reasoning involved; only prearranging the pins makes it look that way.
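The pin-dropping can be sketched in a few lines: a toy “board” maps each word to a weighted table of possible next words, and generating is just dropping a chip through those weights. The words and probabilities below are made up purely for illustration; real models work over tokens and billions of learned weights.

```python
import random

# A toy "plinko board": for each context word, the pins are prearranged
# as a probability table over possible next words (invented values).
PINS = {
    "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def drop_chip(word, rng=random.random):
    """Pick the next word by 'dropping a chip' through the weighted pins."""
    table = PINS[word]
    roll = rng()
    cumulative = 0.0
    for nxt, weight in table.items():
        cumulative += weight
        if roll < cumulative:
            return nxt
    return nxt  # fallback for floating-point rounding at the top edge
```

No reasoning anywhere in there: the output only looks sensible because the weights were prearranged by training.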
I’ve made a similar argument and the response was, “Our brains work the same way!”
LLMs probably are as smart as people if you just pick the right people lol.
Allegedly park rangers in the 80s were complaining it was hard to make bear-proof garbage bins because people are sometimes stupider than the bears.
LOL I remember a real life park ranger actually telling me this.
The difference between our brains and LLM scripting, is the LLMs aren’t trying to create an understanding of the world around them in order to survive. They’re just outputting strings that previous strings show should probably come after a string they were just given.
Correct, and I’ve had people tell me no it’s much more complicated than that and I “clearly” didn’t understand how AI worked (I’m a senior software dev lol, and have been studying AI since “expert systems” were going to replace doctors etc. and revolutionize the world back in the late 80s). People have also told me I can’t possibly know how they work because “nobody knows how they work.” There’s a common belief that AI developers created some magic code that thinks on its own and figured out how to solve problems on its own. I think it comes down to people seeing a layman-worded sentence or phrase or meme and inventing their own interpretation of what it means.
That sure sounds like fox news connoisseurs.
Until you ask them how many R’s are in strawberry.
And you know what? The people who believe that are right.
Note that that’s not a commentary on the capabilities of LLMs.
It’s sad, but there’s the old George Carlin line, something along the lines of, “just think of how stupid the average person is, and then realize that 50% are even worse…”
That was back when “average” was the wrong word, because it still strictly meant the statistical “mean” - the value every data point would have if they were all identical with the same total (which is what a calculator gives you if you press the AVG button). What Carlin meant was the “median” - the value half of all data points are greater than and half are less than. Over the years the word “average” has devolved to mean either one, as if there’s no difference.
When talking about a large, regularly distributed population, there effectively IS no difference
There might be no difference. In memes or casual conversation the difference usually doesn’t matter, but when thinking about important things like government policy or medical science, the difference between mean and median is very important - which is why they both exist.
Not in all cases. When I teach mean, median and mode, I usually bring up household income. Mean income is heavily skewed by outliers (billionaires), median is a more representative measure.
I guess that’s your “regularly distributed” bit, but a lot of things aren’t regularly distributed.
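The income example is easy to show with toy numbers (all invented): one huge outlier drags the mean far above what any typical household earns, while the median barely notices.

```python
import statistics

# Hypothetical household incomes: nine ordinary earners plus one outlier.
incomes = [30_000, 35_000, 40_000, 45_000, 50_000,
           55_000, 60_000, 65_000, 70_000, 10_000_000]

mean = statistics.mean(incomes)      # pulled way up by the one outlier
median = statistics.median(incomes)  # stays near the ordinary earners

print(mean)    # 1045000 - "average" income of over a million
print(median)  # 52500   - far more representative
```

Same data, two wildly different “averages” - which is exactly why the distinction matters for things like policy.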
IQ is though
IQ is also garbage when it comes to the validity of what it claims to measure.
Fair, but you could just say “intelligence” generically instead.
They are right when it comes to understanding LLMs: the LLM definitely understands LLMs better than they do. I’m sure an AI could ace an IQ test. But it has a really hard time drawing a completely full glass of wine, or telling me how many R’s are in the word strawberry. Both things a child could do.
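Part of the joke is that the strawberry question is a one-liner for ordinary code, which operates on characters rather than the tokens an LLM sees:

```python
# Code sees individual characters, not tokens,
# so counting letters is trivial.
count = "strawberry".count("r")
print(count)  # 3
```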
i guess the 90% marketing (re: Linus Torvalds) is working
He’s probably a little high on the reality side to be honest.
“US”… Even an LLM won’t vote for Trump
If I think of what causes the average person to consider another to be “smart,” like quickly answering a question about almost any subject, giving lots of detail, and most importantly saying it with confidence and authority, LLMs are great at that shit!
They might be bad reasons to consider a person or thing “smart,” but I can’t say I’m surprised by the results. People can be tricked by a computer for the same reasons they can be tricked by a human.
So LLMs are confident you say. Like a very confident man. A confidence man. A conman.
You know, that very sequence of words entered my mind while typing that comment!