- cross-posted to:
- books@lemmy.ml
My title might be a bit hyperbolic, but stuff like this worries me. I love to read and I love reading on a kindle. This has been going on for a while, but it has now reached absurd levels.
As in the actual world: providing context about the physics of things, logical association and evaluation, and so on. It's basically something that's supposed to help an LLM get closer to understanding the "world" rather than just spewing out whatever the training dataset gives it. It has a direct implication for technical writing, too: with a stronger understanding of the things you want to write about, an LLM with a world model could basically auto-fill that.
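A rough way to picture the idea: instead of predicting the next token directly, a world model predicts how the state of the world evolves. A minimal toy sketch, purely illustrative (all names here are made up, and real world models are learned, not hand-coded physics):

```python
# Toy "world model": predicts the next state of a falling-ball world,
# then rolls that prediction forward to "imagine" a trajectory.
# Hypothetical illustration only -- not any actual research system.

def step(state, dt=0.1, gravity=-9.8):
    """Advance (position, velocity) by one time step of simple physics."""
    pos, vel = state
    return (pos + vel * dt, vel + gravity * dt)

def rollout(state, n_steps):
    """Simulate n_steps into the future -- the 'imagination' a world model provides."""
    trajectory = [state]
    for _ in range(n_steps):
        state = step(state)
        trajectory.append(state)
    return trajectory

# Ball dropped from height 10, at rest:
traj = rollout((10.0, 0.0), 5)
```

The point is that the model carries forward a consistent internal state rather than pattern-matching on surface text, which is what would let it fill in technically accurate details.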
This is something researchers are pretty much all hands on deck working to create.
One example of the research involving this:
Thanks! I’ll take the time to go through this paper when I’m more awake, I really appreciate the link.