- cross-posted to:
- iowa
Excerpt:
To underline Blanchfield’s point, the ChatGPT book selection process was found to be unreliable and inconsistent when repeated by Popular Science. “A repeat inquiry regarding ‘The Kite Runner,’ for example, gives contradictory answers,” the Popular Science reporters noted. “In one response, ChatGPT deems Khaled Hosseini’s novel to contain ‘little to no explicit sexual content.’ Upon a separate follow-up, the LLM affirms the book ‘does contain a description of a sexual assault.’”
This belongs in politics, not technology
When will people learn that LLMs have no understanding of truth or facts? They just generate something that looks like it was written by a human with some amount of internal consistency while making baseless assumptions for anything that doesn’t show up (enough) in their training set.
That makes them great for writing fiction, but try asking ChatGPT for the best restaurants in a small town. It will gladly and without hesitation list ten restaurants that have never existed, complete with links to websites that may belong to entirely different businesses.
I basically agree with you, but as for your example: that's because ChatGPT wasn't made to return local results, or even recent ones.
So of course it’s going to fail spectacularly at that task. It has no means to research it.
The point isn’t that they used ChatGPT to pick books to ban. They may not have even used ChatGPT; they just said they did so they could point to a service and say, “See? It wasn’t us, it was that!”
They’ve shown time and again that they lie. That they do not act or argue in good faith. That they make excuses to distract people from what they’re doing.
Stop treating these assholes as if debating them will do a damned thing. We’re playing checkers, but they’re fighting an MMA match.
This is as transparent as hell. It reminds me of a TV show where a bunch of idiots plot to murder someone so they decide that if they all pull the trigger together, none of them are “technically” the murderer. Of course, that just meant they were all culpable.
It’s only a few layers of abstraction above “we didn’t ban these books, we flipped a coin to decide whether to ban them and fate chose tails…”
Pathetic.
Lots of uses of “AI” are so people can deny responsibility. They feed in their history of discrimination, tell the machine to replicate it, then go, “It can’t be discriminatory, it’s an AI.”
#Fascists ban books.
As a non-American I absolutely do not care. How do I make content like this not show up on my feed without unsubscribing from Tech Beehaw?
Don’t get too comfortable. This nonsense is coming soon to a government near you.
You mean I should do something when they burn books and shout “Heil Hitler”?
I mean I’m American and I’ve been looking for a technology community that actually posts cool and fun tech stories instead of apparently assuming every bit of tech is the anti-Christ incognito.
If anyone happens to find one let me know, because I feel like the only people that care enough to post here, care in the wrong direction and fucking hate all things technology.
deleted by creator
So many questions!
Are you suggesting that the political aspects of technology shouldn’t be discussed in a technology community?
Are you implying that technology is apolitical? That there are technology subjects to discuss that don’t have a political component?
Do discussions of the applications of technology not belong in a technology community?
deleted by creator
The guy you’re trying to pass the buck to, money_loo, is from a lemmy instance that only has Chicago sports communities and whose front page is mostly federated meme posts. You’re a BeeHaw user. You’ve presumably read and agreed to the Beehaw community documents.
I expect more than anti-intellectualism from you.
I’ve been here a month now and that doesn’t seem to stop them, lol. At one point I was subbed to like five different tech communities, and I had to unsubscribe from three so far. This one is getting closer everyday.
You could use word filters; in this case, “Republican”.
This headline is garbage. Not only is it stating something that I haven’t heard anyone seriously argue, it has nothing to do with the rest of the article, which just goes on to talk about how shitty a job ChatGPT is doing at the task.
deleted by creator
I think he’s referring to the “AI is banning books” argument (a strawman), not the “Republicans are banning books” one, which we all know is happening.
deleted by creator
Yeah obviously, but nobody is saying ChatGPT is doing it lol
deleted by creator
“Republicans use AI to ban books” == “Republicans are banning books”
Absolutely no one is saying “AI is banning books”, which is what the headline is arguing against. It’s an argument not being made, just total clickbait.
deleted by creator
There was literally an article either yesterday or the day before with the headline “AI being used to ban books in Iowa” or something to that effect.
Saying “Republicans are using AI to ban books” is very different from saying “AI is banning books.” Nobody is saying “AI is banning books.”
I mean, this is near enough as makes no difference, I think?
Either way I won’t have to look at his trash-ass takes anymore, but I’m just saying it does exist and when you run across a take like that, it tends to taint everything near it.
The argument does exist. This article by PEN America is one of the most widely spread ones and largely misrepresents the situation. It’s based on a PopSci article with a similar headline, though the contents of the article tell a rather different story.
Nothing really says out loud what’s going on: Republicans enacted an extremely vague book ban with an unrealistically short deadline as part of a bill (which also does other things, like removing AIDS education), forcing schools either to throw out every book that might be vaguely suspect or to resort to workarounds like this one. This school’s use of ChatGPT was purely to save books that were on a human-assembled list of challenged titles, reducing the negative effect of the book ban while remaining potentially defensible in court (it remains to be seen how that will work out, but they built an “objective” process and stuck to it; that’s what matters to them).
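For illustration, the process described above amounts to looping a human-assembled challenge list through a yes/no prompt and pulling whatever gets flagged. A hedged sketch follows, with hypothetical titles and prompt wording (neither is documented in this thread), assuming the same OpenAI client as in the excerpt:

```python
# Sketch of the screening loop described above; the titles and prompt wording are
# hypothetical, and the district's actual process is not documented here.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

challenged_books = [  # stand-in for the human-assembled challenge list
    "The Kite Runner",
    "The Handmaid's Tale",
]

for title in challenged_books:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f'Does the book "{title}" contain a description or '
                       "depiction of a sex act? Answer yes or no, then explain briefly.",
        }],
    )
    verdict = reply.choices[0].message.content
    # A "yes" verdict means the book gets pulled; as the repeat test above shows,
    # that verdict is not necessarily stable across runs.
    print(f"{title}: {verdict}")
```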