The author claims that GPT has a world model, based on a few prompts:
Making predictions and dealing with counterfactuals
Demonstrating reasonable predictions about scenarios it could not have encountered in its training data, thus providing evidence that it's not 'just looking things up' or regurgitating correlations. (Or at least that the line between regurgitating correlations and intelligence is unclear!)
My answer:
He takes the idea of "regurgitating training data" too literally, which makes it something of a straw man argument.
The output of transformer-based LLMs can be novel, and is designed to be. But they can't "think" in any way other than combining patterns learned from the training data.
The claim of "not having a world model" should be understood as lacking the deep structure; LLMs clearly do have the surface structure.
This is clearly demonstrated by the fact that LLMs struggle with reasoning: arithmetic, finding the shortest path in a graph, or simply counting characters in the prompt.
LLMs have the knowledge of how to do all these things; if asked, they will explain how. But they have no real-world, sensory model of a number or a graph, so they have problems actually performing these tasks.
Humans have non-linguistic, deep models of numbers. Combined with knowledge of the concepts and algorithms (which LLMs also have), this lets them perform such tasks very well.
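For contrast, here is the kind of procedure an LLM can recite verbatim yet often mis-executes when asked to trace it step by step in-context: a minimal breadth-first-search shortest-path sketch in Python, over a hypothetical toy graph.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: the textbook procedure an LLM can describe in words,
    but frequently fails to carry out correctly inside a prompt."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Hypothetical toy graph, for illustration only.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(shortest_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```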
As for the several prompts the author was lucky to get reasonable answers for: that is not how conclusions about LLM abilities can be drawn. You need a dataset with many questions or scenarios and a quantitative measure of LLM performance.
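As a rough illustration of what such a quantitative check could look like, here is a minimal sketch in Python: it generates character-counting questions with known ground-truth answers and measures accuracy. The `ask_llm` function is a hypothetical placeholder, not any real API.

```python
import random

def ask_llm(prompt):
    """Hypothetical placeholder: swap in a call to whatever model client you use."""
    raise NotImplementedError

def make_counting_item():
    """One question with a known ground-truth answer."""
    word = "".join(random.choice("abcde") for _ in range(random.randint(8, 20)))
    target = random.choice("abcde")
    prompt = (f"How many times does the letter '{target}' appear in '{word}'? "
              "Answer with a number only.")
    return prompt, str(word.count(target))

def evaluate(n_items=200):
    """Accuracy over a generated dataset, rather than a verdict from a few lucky prompts."""
    items = [make_counting_item() for _ in range(n_items)]
    correct = sum(ask_llm(prompt).strip() == answer for prompt, answer in items)
    return correct / n_items
```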
#llm #ml #general_semantics #nlp
Comments (1)
I think the next generation will have deep structure. Something like: generate, verify, create a checklist for external source/execution units (web, calculator, memory units, etc.), then regenerate with the new results. Maybe with several iterations.
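One possible reading of that loop, as a hedged Python sketch: `generate` and `run_tools` are hypothetical placeholders for the model call and the external units (web, calculator, memory). The loop produces a draft, checks it against the tools, and regenerates with the tool results for a few iterations.

```python
def generate(prompt, context=""):
    """Hypothetical call to the base LLM."""
    raise NotImplementedError

def run_tools(draft):
    """Hypothetical dispatch to external units (web search, calculator, memory,
    code execution). Returns corrections or lookups to feed back into the model,
    or an empty string if nothing needs fixing."""
    raise NotImplementedError

def verified_answer(prompt, max_iterations=3):
    """Generate -> verify against external tools -> regenerate with the results."""
    context = ""
    draft = generate(prompt)
    for _ in range(max_iterations):
        tool_results = run_tools(draft)    # e.g. recompute arithmetic, look up facts
        if not tool_results:               # nothing to correct: accept the draft
            return draft
        context += tool_results
        draft = generate(prompt, context)  # regenerate grounded in the tool output
    return draft
```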