I think you’re kind of underselling how good current LLMs are at mimicking human speech. I can foresee them being fairly hard to detect in the near future.
That wasn't my intention with the wonky autocorrect sentence. The point of that was to point out that LLMs and my autocorrect equally have no idea what words mean.
What does it mean to “have an idea what words mean”?
LLMs clearly have some associations between words - they are able to use synonyms, they are able to explain words, they are able to use words correctly. How do you determine from the outside whether they “understand” something?
We understand a tree to be a growing, living thing; an LLM understands a tree as a collection of symbols. When they create output they don't decide that one synonym is more appropriate than another; it's chosen by which collection of symbols is more statistically likely.
Take, for example, attempting to correct GPT: it will often admit fault yet not "learn" from it. Why not? If it understands words it should be able to, at least in that context, no longer output the incorrect information, yet it still does. It doesn't learn from it because it can't. It doesn't know what words mean. It knows that when it sees the symbols representing "You got {thing} wrong", the most likely symbols to follow represent "You are right, I apologize".
That’s all LLMs like GPT do currently. They analyze a collection of symbols (not actual text) and then output what they determine to be most likely to follow. That causes very interesting behavior, you can talk to it and it will respond as if you are having a conversation.
We understand a tree to be a growing, living thing; an LLM understands a tree as a collection of symbols.
No, LLMs understand a tree to be a complex relationship of many, many individual numbers. Can you clearly define how our understanding is based on something different?
When they create output they don't decide that one synonym is more appropriate than another; it's chosen by which collection of symbols is more statistically likely.
What is the difference between “appropriate” and “likely”? I know people who use words to sound smart without understanding them - do they decide which words are appropriate, or which ones are likely? Where is the border?
Take, for example, attempting to correct GPT: it will often admit fault yet not "learn" from it. Why not? If it understands words it should be able to, at least in that context, no longer output the incorrect information, yet it still does. It doesn't learn from it because it can't.
This is wrong. If you ask it something, it replies, and you correct it, it will absolutely "learn" from the correction for this session. That's due to the architecture, but it refutes your point.
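A minimal sketch of why that works: chat interfaces resend the entire conversation on every turn, so a correction becomes part of the input the next reply is predicted from. The treaty, the dates and the message format below are invented for illustration and are not any particular API.

```python
# Why a correction "sticks" within a session: the whole running
# conversation, correction included, is fed back in as context each turn.
history = [
    {"role": "user",      "content": "When was the Foo Treaty signed?"},
    {"role": "assistant", "content": "The Foo Treaty was signed in 1912."},   # wrong
    {"role": "user",      "content": "That's wrong, it was signed in 1921."}, # correction
    {"role": "user",      "content": "So, when was it signed?"},
]

def build_context(messages):
    """Flatten the running conversation into one block of text. The next
    reply is predicted from ALL of this, correction included, which is why
    behaviour can change within a session without any weights changing."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

print(build_context(history))
```

Nothing in the weights changes; close the session and that in-context "learning" is gone.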
It doesn't know what words mean. It knows that when it sees the symbols representing "You got {thing} wrong", the most likely symbols to follow represent "You are right, I apologize".
So why can it often output correct information after it has been corrected? This should be impossible according to you.
That’s all LLMs like GPT do currently. They analyze a collection of symbols (not actual text) and then output what they determine to be most likely to follow. That causes very interesting behavior, you can talk to it and it will respond as if you are having a conversation.
Aaah, the old “stochastic parrot” argument. Can you clearly show that humans don’t analyse inputs and then output what they determine to be most likely to follow?
If you’d like, we can move away from the purely philosophical questions and go to a simple practical one: given some system (LLMs, animals, humans) how do I figure out whether the system understands? Can you give me concrete steps I can take to figure out if it’s “true understanding” or “LLM level understanding”? Your earlier approach (tell it when it’s incorrect) was wrong. Do you have an alternative? If not, how is this not a “god of the gaps” argument?
So why can it often output correct information after it has been corrected? This should be impossible according to you.
It generally doesn't. It apologizes, then will output exactly, or very nearly, the same thing as before, or something else that's wrong in a brand new way. Have you used GPT before? This is a common problem; it's part of why you cannot trust anything it outputs unless you already know enough about the topic to determine its accuracy.
No, LLMs understand a tree to be a complex relationship of many, many individual numbers. Can you clearly define how our understanding is based on something different?
And did you really just go "nuh huh, it's actually in binary"? I used the collection-of-symbols explanation as that's how OpenAI describes it, so I thought it was safe to just skip all the detail. Since it's apparently needed and you're unlikely to listen to me, there's a good explanation in video form created by Kyle Hill. I'm sure many other people have gone and explained it much better than I can, so instead of trying to prove me wrong, which we can keep doing all day, go learn about them. LLMs are super interesting and yet ultimately extremely primitive.
It generally doesn't. It apologizes, then will output exactly, or very nearly, the same thing as before, or something else that's wrong in a brand new way. Have you used GPT before? This is a common problem; it's part of why you cannot trust anything it outputs unless you already know enough about the topic to determine its accuracy.
Hallucinations are different from in-context learning. I've seen a number of impressive examples of this, enough that you should provide evidence that it generally doesn't work. There are a bunch of papers on this topic; surely at least one would support your thesis?
And did you really just go "nuh huh, it's actually in binary"?
No, that is literally how knowledge is stored inside of neural networks. Plenty of papers have shown that the learning process is actually mostly about compression, since you distill the patterns of the training data into a much smaller amount of data. This means that LLMs actually have concepts of things (which again has been shown independently, e.g. with Othello). These concepts are themselves stored as relationships between large amounts of numbers - that's how NNs work.
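As a toy picture of "relationships between large amounts of numbers": give each word a vector and relatedness falls out as geometry. The tiny hand-made vectors below are purely illustrative; real models learn hundreds or thousands of dimensions per token.

```python
import math

# Hand-made 4-dimensional "embeddings", purely illustrative. Real models
# learn vectors with hundreds or thousands of dimensions per token.
VECS = {
    "tree":  [0.9, 0.8, 0.1, 0.0],
    "oak":   [0.8, 0.9, 0.2, 0.1],
    "plant": [0.9, 0.7, 0.2, 0.0],
    "spoon": [0.1, 0.0, 0.9, 0.8],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(round(cosine(VECS["tree"], VECS["oak"]), 3))    # high: related concepts sit close together
print(round(cosine(VECS["tree"], VECS["spoon"]), 3))  # low: unrelated concepts sit far apart
```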
I also fully understand how the tokenization process works and what the mentioned "symbols" are. Please explain what this has to do with anything. The model sees text in specific chunks as an optimisation; what does this change?
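If you want to see the "chunks" for yourself, something along these lines shows the mapping from text to token IDs and back. It assumes the tiktoken package is installed, and the exact IDs depend on the chosen encoding, so treat the output as illustrative.

```python
# Rough look at what the "chunks" are: text becomes integer token IDs
# before the model ever sees it. Requires the tiktoken package.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("A tree is a growing living thing.")
print(ids)                              # a short list of integers
print([enc.decode([i]) for i in ids])   # the text chunk each integer maps back to
print(enc.decode(ids))                  # round-trips to the original sentence
```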
I’m a big boy who has already implemented his own LLMs from the group up, so feel free to skip any simplifications and tell me exactly, in detail, what you mean.
I think you’re kind of underselling how good current LLMs are at mimicking human speech. I can foresee them being fairly hard to detect in the near future.
That wasn't my intention with the wonky autocorrect sentence. The point of that was to point out that LLMs and my autocorrect equally have no idea what words mean.
Yes, and my point is that it doesn't matter if they know what they mean, just that they have the appearance of knowing what they mean.