These aren’t simulations estimating results; they’re language models extrapolating from a vast amount of human knowledge embedded as artifacts in text. That won’t necessarily lead them to the best long-term solution.
I want to be careful about how the word “reasoning” is used, because with AI there’s a lot of nuance. LLMs can recall text that contains reasoning as an artifact of the human knowledge stored in that text, which isn’t the same as reasoning themselves. It’s a subtle but important distinction for how we deploy LLMs.