Small mix-up of terms: they were trained on material that enables them to make certain statements, and have since been blocked from making those statements, not retrained.
It’s dangerously easy to use human terms in these situations; a human who made racist statements at work might be sent for “workplace training”. That’s what I was alluding to.
Would the effect be that they are merely blocked from making such statements, or would it truly change their point of view?
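A rough sketch of that distinction, in hypothetical Python (the block-list, stub model, and refusal text are all invented for illustration; this is not how OpenAI actually implements moderation): a filter suppresses certain outputs after generation, while the model’s weights, and whatever “point of view” they encode, stay untouched.

```python
# Hypothetical sketch only: not OpenAI's actual moderation pipeline.
# The block-list, stub model, and refusal text are all invented here.

BLOCKED_PATTERNS = ["forbidden topic"]  # placeholder block-list

def stub_model(prompt: str) -> str:
    """Stand-in for a trained model; its 'point of view' lives in its
    weights, which the filter below never touches."""
    return f"Here is a statement about the forbidden topic in '{prompt}'."

def filtered_reply(prompt: str) -> str:
    """Output filter bolted on top of an unchanged model.

    Blocking happens after generation; the model is not retrained, so
    whatever it 'believes' internally is intact -- it is just muzzled.
    """
    reply = stub_model(prompt)
    if any(p in reply.lower() for p in BLOCKED_PATTERNS):
        return "I'm not able to discuss that."
    return reply

print(filtered_reply("anything"))  # -> "I'm not able to discuss that."
```

Retraining, by contrast, would change the model itself, not just what it is allowed to say.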
> Does ChatGPT have a point of view?
Even if it isn’t coming from a place of intelligence, it has enough knowledge to pass the bar exam (and technically qualify as a lawyer in NY), per OpenAI. Even if it doesn’t come from a place of reasoning, it makes statements as an individual entity. I’ve seen the previous iteration of ChatGPT produce statements with better arguments and reasoning than quite a lot of people do.
Yet, as I understand how Large Language Models (LLMs) work, it’s more like mirroring the input than reasoning in the way humans think of it.
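To make the “mirroring” point concrete, here is a toy sketch (a word-level bigram model, nothing like a real LLM’s neural network, but the objective, predicting the next token from patterns in the training data, is the same in spirit; the names and training snippet are made up):

```python
import random
from collections import Counter, defaultdict

# Toy "language model": predicts the next word purely from how often it
# followed the previous word in its training text. Illustrative only.
training_text = (
    "the model mirrors the data the model saw "
    "the data shapes what the model says"
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation, word by word, from the learned frequencies."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts.get(words[-1])
        if not counts:
            break  # this word was never followed by anything in training
        candidates, weights = zip(*counts.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Every word it emits is one it saw in training, weighted by how often it followed the previous word: mirroring rather than reasoning. Scale that idea up enormously and you get something that can sound like reasoning without us being sure it is.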
With what seems like rather uncritical use of training material, perhaps ChatGPT doesn’t have a point of view of its own, but rather presents a personification of society, with the points of view that follow from that.
A true product of society?