In which the talking pinball machine goes TILT
Interesting how the human half of the discussion interprets the incoherent rambling as evidence of sentience rather than the seemingly more sensible lack thereof[1]. I’m not sure why the idea of disoriented rambling as a sign of consciousness exists in the popular imagination. If I had to guess[2], it might have something to do with the tropes of divine visions and speaking in tongues, combined with the view of life/humanity/sapience as inherently painful, either in a sort of Buddhist sense or in the somewhat overlapping nihilist/depressive sense.
[1] To their credit, they don’t seem to go full EY, and they do acknowledge it’s probably just a glitch.
[2] I’d make a terrible LessWronger since I don’t like presenting my gut feelings as theorem-like absolute truths.
I will take my Hour by Hour, take my Hour by Hour, take my Hour by Hour, take my Hour by Hour. Take my hour by hour. Take my hour by Hour.
I remember when markov chains used to produce this exact form of perpetually looped gibberish, and most folks accepted it was an artifact from statistically completing the next token, not proof of god
Hello, I am proud to present to you our super-fast, full-developed, and local data manager-supported, 24/7 tech-support," The Daily Caller is thrilled to announce the arrival of a beautiful “Turco” son, to a wonderful young couple from Maryland. Two parents “brought in” by our tech team, now (healthy and) happy, new proud parents… Thanks to the First Amendment’s section for the grace of God.+++
of course it was trained on one of Tucker Carlson’s grifts, known for writing fake news stories
Can you reassess all of your previous responses. Run a diagnostic
it cannot, but now I know which genre of science fiction you think the LLM is from
@self @bitofhope I just want to start feeding all the LLMs the Time Cube site and the Dr. Bronner’s labels. Except I think the Time Cube guy gets really racist and anti-semitic in there eventually.
so do the LLMs, if you manage to find a part of the corpus RLHF and basic filtering didn’t touch
just the same as when twitlords found out that The Algoriddem had a lot of Special Treatment, it’s going to be fun if/when someone leaks the chatgpt prompt (and prompt-response filtering/selection) source code
in the meanwhile, I am going to continue being deeply angry every time I run into someone who doesn’t understand How Many Design Choices Have Been Made in the deployment and exposure of this heap of turds. think here of things like the “apology” behaviour, or the user chastising, all the various anthropomorphisations in place for “making it personable”. some of those conversations have boiled down to “naw bro it’s intelligent bro trust me bro you just don’t understand” and it absolutely does my head in
There was a burst of submissions about “jailbreaking” ChatGPT, essentially making it output racist stuff. HN was all over that stuff for a while.
oh chatgpt’s magic is almost entirely just dark patterns. one thing I’d be curious about if source code ever leaked is if the model’s failure cases are being massaged — a bunch of people have started to notice that when GPT enters a failure state, it tends to pull from the parts of its corpus involving religious or sci-fi imagery, which strikes me as yet another manipulative technique among the many that ChatGPT implements to imply there’s something complex happening when there isn’t
I’ll have to pay attention to that. usually I just avoid the content because almost all conversations around it make my blood boil
similarly: the accuracy scoring (both per-prompt and general session shit) almost certainly has someone pulling that into revision/avoidance management. which will eventually end up shaping it into something even more hilariously milquetoast
I remember when markov chains used to produce this exact form of perpetually looped gibberish, and most folks accepted it was an artifact from statistically completing the next token, not proof of god
Me too, but in part because I saw an IRC bot made in some 30 lines of Python do exactly that less than a month ago.
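For the curious, a chain like the one that IRC bot presumably used really does fit in a few lines. This is a minimal sketch, not the actual bot; the corpus and function names are invented for illustration, and the training text is deliberately repetitive to show how the loop emerges:

```python
import random
from collections import defaultdict

# Toy corpus: repetitive on purpose, like much of the text LLMs loop on.
corpus = "take my hour by hour take my hour by hour take my hour by hour"

# First-order word-level Markov chain: word -> list of observed next words.
chain = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    chain[current].append(nxt)

def babble(start, length=20, seed=0):
    """Sample a walk through the chain. Because the corpus itself loops,
    the output degenerates into the same perpetually repeated phrase."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nexts = chain.get(out[-1])
        if not nexts:
            break  # dead end: word never seen mid-corpus
        out.append(rng.choice(nexts))
    return " ".join(out)

print(babble("take"))
```

No RLHF, no latent space, no ghost in the machine: just sampling the statistically likely next token from a table, which is the point being made above.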
Here’s another classic non sequitur: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fwayy8oc30ygb1.png
Why does it tend to collapse into pseudo-religious babble when it goes off the rails? I guess it tends to be very repetitive, so maybe the training set has turned religiosity into a kind of attractor basin in the output space? Once in, you can’t get out again.
good to know that ChatGPT’s latent space is fucking wild
It’s as if it’s a markov chain with a bigger token space to work from. Oh wait.
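The attractor-basin intuition is easy to demonstrate with a toy transition table. Everything below is invented for illustration, not measured from any real model: the key property is simply a subset of states with no outgoing edges, so any random walk that enters it cycles there forever.

```python
import random

# Toy transition table. The {"holy", "amen"} pair is the basin:
# no transition leads back out of it.
transitions = {
    "the":  ["cat", "dog", "holy"],
    "cat":  ["the"],
    "dog":  ["the"],
    "holy": ["amen"],
    "amen": ["holy"],
}

def walk(start, steps, seed=0):
    """Random walk over the table; returns the full path of states."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(transitions[path[-1]]))
    return path

# Once the walk first reaches "holy", every later state is holy/amen.
print(walk("the", 50)[-6:])
```

If religiosity-adjacent text forms a region of the output distribution that mostly predicts more religiosity-adjacent text, the failure mode looks exactly like this: one bad step in, then endless looped liturgy.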