By June, “for reasons that are not clear,” ChatGPT stopped showing its step-by-step reasoning.
My personal pet theory is that a lot of people were doing work that involved getting multiple LLMs to talk to each other. When those conversations were later fed back into the RL loop, we started seeing degradation similar to what's recently been in the news regarding image generation models.
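To make the mechanism concrete, here's a toy sketch of that kind of self-consuming training loop, assuming a simple unigram distribution stands in for the model (this is an illustration of the general idea, not anyone's actual training setup): each generation is fit only on samples from the previous generation, so the distribution's support can never grow, and with finite samples the rare tokens tend to vanish over time.

```python
import random
from collections import Counter

def fit(tokens):
    """Empirical unigram distribution: token -> probability."""
    counts = Counter(tokens)
    total = len(tokens)
    return {tok: c / total for tok, c in counts.items()}

def sample(dist, n, rng):
    """Draw n tokens from the fitted distribution."""
    toks = list(dist)
    weights = [dist[t] for t in toks]
    return rng.choices(toks, weights=weights, k=n)

rng = random.Random(0)
data = list("aaaaabbbccd")  # "real" data with a small tail of rare tokens
dist = fit(data)

for gen in range(20):
    # Each generation trains only on the previous model's own output.
    data = sample(dist, len(data), rng)
    dist = fit(data)
    print(f"gen {gen:2d}: support={sorted(dist)}")
```

Run it a few times with different seeds: the support typically shrinks toward the most common token, which is the same flavor of diversity loss the image-model papers describe.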
Can you link an example of what you mean by the problems in image generation models?
I believe this is the paper that got everybody talking about it recently: https://arxiv.org/pdf/2307.01850.pdf
It’s going to get worse once people stop creating content for the machine.