- cross-posted to:
- aboringdystopia@lemmy.world
No.
It is not.
Yeah, wotsisname’s Law of Headlines: if a headline ends in a question mark, the answer to the question is no.
Betteridge
Why not?
Because too frequently it gives plausible-sounding but completely unfounded statements.
Also it can go more darkly wrong, and all the extra checks and safeguards don’t always prevent it.
Why is this different from talking to a human?
Because a human can understand the situation, and the person they’re talking to, and reply with wisdom, rather than just parroting what seems like what they heard before.
Are you saying that humans don’t parrot what seems like what they heard before?
Oh we absolutely do. And we tell lies, and we misunderstand, and miscommunicate.
But not all the time, and not everyone. So if you ask your friend if they’d like dinner, you expect the answer to be true to what they want, not just whatever sounds good to the general population. If you read a scientific journal, you expect the scientists to represent the facts and even the meaning of their research, not parrot some ideas from a half-forgotten textbook. And if you see a professional counsellor, you expect them to have a good understanding of human nature, to genuinely empathise with your situation, and to have good ways to help you out.
And of course all three of those examples fail sometimes, which is why as part of life we learn who we can trust and to what extent.
I would argue that all of the cases you presented fail at a rate comparable to foundational LLMs.
Removed by mod
Some of it is, as I can personally attest. And well-dressed lies can certainly do a person much harm.
Link to Jacob Geller’s thoughts from 1 year ago. Not about ChatGPT, but I like his long-form stuff, and this loosely relates, maybe.
Removed by mod