Am I the only one getting agitated by the word AI (Artificial Intelligence)? Real AI does not exist yet; at the moment we only have LLMs (Large Language Models), which do not think on their own but can pass Turing tests (i.e., fool humans into thinking that they can think). Imo, AI is just a marketing buzzword created by rich capitalistic a-holes who have already invested in LLM stocks and are now looking for a profit.
Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn’t carry over to a variety of fields. Your self-driving car can’t help with your homework. An artificial general intelligence, however, could. Humans possess general intelligence: we can do math, speak different languages, navigate social situations, throw a ball, interpret sights and sounds, etc.
With a real AGI you don’t need to develop different versions of it for different purposes. It’s generally intelligent, so it can do it all. That also includes writing its own code, which is where the worry about an intelligence explosion comes from. Once it’s even slightly better than humans at writing its own code, it’ll make a more competent version of itself, which will then create an even more competent version, and so on. It’s a chain reaction we might not be able to stop; after all, it’s by definition smarter than us and, being a computer, also a million times faster.
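To make the chain-reaction point concrete, here’s a toy sketch in Python. The starting capability and the 5% gain per rewrite are made-up numbers purely for illustration, not a claim about any real system; the only point is that self-improvement compounds instead of adding a fixed amount each step.

```python
# Toy illustration of the recursive self-improvement argument above.
# All numbers are invented for the sake of the example.

capability = 1.0          # hypothetical "human level" at writing its own code
gain_per_rewrite = 0.05   # assume each rewrite improves capability by 5% of its current level

for generation in range(1, 21):
    # A more capable version produces an even more capable successor,
    # so the gains compound rather than staying constant.
    capability += capability * gain_per_rewrite
    print(f"after rewrite {generation}: {capability:.2f}x the starting capability")
```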
Edit: Another feature that an AGI would most likely, though not necessarily, possess is consciousness. There’s a possibility that it feels like something to be generally intelligent.
I’ve heard experts say that GPT-4 displays signs of general intelligence, so while I still wouldn’t call it an AGI, I’m in no way claiming an LLM couldn’t ever become generally intelligent. In fact, if I were to bet money on it, I think there’s a good chance that this is where our first true AGI systems will originate. We’re just not there yet.
It isn’t. It doesn’t understand things in the way we mean when we talk about intelligence. It generates output that fits a recognized input; if it doesn’t recognize the input in some form, it generates garbage. It doesn’t understand context, and it doesn’t try to generalize knowledge so it can be applied to different things.
For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.