In the context of AI, people tend to use “grok” (or “grokking”) for what can sometimes happen if you overtrain the living shit out of a model: it goes from being trained appropriately and generalizing well -> overfitted and only working well on the training data, with shit performance everywhere else -> somehow working again and generalizing, even better than before. Example in a paper: https://arxiv.org/abs/2201.02177
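If you want to see it for yourself, here’s a minimal sketch of the kind of setup where it shows up. Everything here (the tiny MLP, the hyperparameters, the 30% training split) is a hypothetical toy configuration loosely modeled on the modular-arithmetic experiments in that paper, not the paper’s actual code; the usual ingredients are a small training fraction, weight decay, and way more steps than you’d normally run.

```python
# Toy grokking sketch (hypothetical hyperparameters, loosely based on the
# modular-addition setup in https://arxiv.org/abs/2201.02177).
# Train a tiny MLP on (a + b) mod p with heavy weight decay and watch
# train accuracy saturate long before validation accuracy finally jumps.
import torch
import torch.nn as nn

p = 97
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))  # all (a, b)
labels = (pairs[:, 0] + pairs[:, 1]) % p

# Using only a fraction of the data is what makes memorize-then-grok visible.
perm = torch.randperm(len(pairs))
n_train = int(0.3 * len(pairs))
train_idx, val_idx = perm[:n_train], perm[n_train:]

class TinyNet(nn.Module):
    def __init__(self, p, dim=128):
        super().__init__()
        self.embed = nn.Embedding(p, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, p))

    def forward(self, x):
        e = self.embed(x)              # (batch, 2, dim)
        return self.mlp(e.flatten(1))  # (batch, p)

model = TinyNet(p)
# Strong weight decay is the usual ingredient reported to trigger grokking.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        preds = model(pairs[idx]).argmax(dim=-1)
        return (preds == labels[idx]).float().mean().item()

for step in range(1, 50001):  # deliberately "overtrained"
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        # Typical pattern: train accuracy hits ~1.0 early, val accuracy sits
        # near chance for a long stretch, then climbs much later.
        print(f"step {step}: train={accuracy(train_idx):.2f} val={accuracy(val_idx):.2f}")
```

Whether (and when) the late jump in validation accuracy happens is sensitive to the training fraction, the weight decay, and the seed, so treat this as a starting point to poke at rather than a guaranteed demo.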
OpenAI really wants a monopoly and is trying to present itself as a “safe” AI company while also lobbying for regulation of “unsafe” AI companies (everyone else, and especially open-source development). So pretty much half of all man-hours spent on developing models at OpenAI seem to go toward stopping them from generating anything that will get the company the wrong kind of press. Sometimes they’re moderately successful at this, but someone always eventually finds a way to get something on the level of “gender reveal 9/11” out of their models.
Elon co-founded OpenAI and bankrolled it early on, but he walked away in 2018 because, as we all know, he makes a lot of extremely poor financial decisions.
trained appropriately and generalizing well -> overfitted and only working well on the training data, with shit performance everywhere else -> somehow working again and generalizing, even better than before
That’s fascinating, I’ve never heard of that before.