I kind of wonder if this whole movement of rationalists believing they can "just" do things better than the people already in the field comes from a creeping sense that being rich and having an expensive educational background may matter less in the future than hands-on experience and situational context, two things they loathe.
Somehow, the same guy will lecture you for an hour about Chesterton’s Fence when it comes to “traditional values.”
I think a big part of the AI religion is a rejection of expertise.
Like, YCombinator was saying they wanted AI weather models. Currently, weather models are physics-based. They're programmed by people who have spent their lives studying entropy and fluid dynamics. But YCombinator figures they could do a better job by feeding a bunch of weather data into an LLM.
Weather models are limited by the fact that they're modeling a chaotic system, so a forecast is only as good as the initial-state data. There's pretty good evidence that current models are asymptotically approaching the best that's possible given the available initial-state information. YCombinator's pitch is that the real goal of an LLM-based approach isn't exactly to beat the physics-based models but to be more efficient, since it's "just" pattern matching. But weather models run on a handful of government-owned supercomputers around the world, and the most important outputs are distributed for free. What's more, computational efficiency is not something LLMs are known for.
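The chaos point is the crux, so here's a minimal sketch of what "only as good as the initial-state data" means (a plain NumPy toy of the Lorenz '63 convection model, nothing to do with any real forecasting code): perturb the initial state by one part in a billion and the two runs diverge exponentially until they're no more alike than two random points on the attractor.

```python
import numpy as np

# Lorenz '63: the classic toy model of atmospheric convection, and the
# textbook example of sensitive dependence on initial conditions.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])  # crude Euler step, fine for a demo

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # a one-part-in-a-billion "measurement error"

for step in range(1, 5001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 1000 == 0:
        # Separation grows exponentially, then saturates at the size of the
        # attractor: the two "forecasts" are now completely decorrelated.
        print(f"t = {step * 0.01:4.0f}   separation = {np.linalg.norm(a - b):.3e}")
```

That divergence is a property of the equations, not of whoever is solving them. It's why better forecasts come from better initial-state observations, not from swapping in a different kind of model.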
It really seems like the whole thing is about owning the physicists. Being able to say, “I replaced you with a computer program.”