kek. lel even.
GPT is just a machine for generating plausible, vacuous text. Rather than representing an embryonic intelligence, GPT highlights the contrast between true intelligence and sophistry.
Sure, it might seem like the dynamic
>feed the model a lot of text => model's ability to simulate logical reasoning increases
implies that GPT will eventually achieve genuine logical reasoning. But that isn't really the case. You need strategies/mechanisms beyond plain NLP ML.
>just feed it more material then!
The corpus GPT-3 was trained on is already ultra-representative: it's simply a large fraction of all text humans have generated over the last two decades. The abstract syntactic and even semantic value of a Wikipedia article from 2010 describing a city is about identical to that of one from 2023 describing an anatomical feature. The same applies to news articles, blog posts, etc.
If you plot "training material fed in" against "logical processing power / reasoning ability", we are already near the asymptote. The function is not linear/divergent, it's convergent (i.e. diminishing returns). You can feed GPT a trillion more texts, but for this particular task -- logical processing depth/reasoning -- the boost will be minuscule.
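To make the shape concrete, here's a minimal sketch assuming (purely for illustration) a saturating curve of the form y = L(1 - e^(-kx)); the constants L and k are invented, the point is only that the marginal gain per unit of data collapses:

```python
import math

# Hypothetical saturating curve: "reasoning ability" as a function of
# corpus size. L is the asymptotic ceiling, k the saturation rate; both
# constants are invented for illustration, not measured from any model.
L, k = 100.0, 0.5

def ability(corpus_units):
    """Toy convergent function: approaches L, never exceeds it."""
    return L * (1.0 - math.exp(-k * corpus_units))

# The marginal gain from each additional unit of training data shrinks fast.
for x in range(1, 11):
    gain = ability(x) - ability(x - 1)
    print(f"corpus units: {x:2d}  ability: {ability(x):6.2f}  marginal gain: {gain:5.2f}")
```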
The reason GPT won't get smarter is that its current logical ability derives heavily from the inherent logic of 1) syntax 2) semantics 3) knowledge. Note that the last point sounds vaguely anthropomorphic. By "knowledge" I just mean the abstraction layer above semantics. There is an inherent logic to sentences of the form e.g.
>water extinguishes fire
that GPT can track. But this "well of inherent logic" runs dry quickly; it only covers simple statements.
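To make "inherent logic it can track" concrete, here's a deliberately dumb toy: a bigram model over a three-sentence corpus (all of it made up for illustration -- this is not GPT's architecture). It reproduces the surface pattern of a simple statement it has seen, but it has no mechanism for chaining two facts into a new conclusion:

```python
from collections import defaultdict
import random

# Tiny corpus of simple factual statements.
corpus = [
    "water extinguishes fire",
    "fire produces smoke",
    "water quenches thirst",
]

# "Train" a bigram model: for each word, record which words follow it.
successors = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        successors[a].append(b)

def complete(prompt, max_words=3):
    """Greedily extend a prompt one word at a time using bigram statistics."""
    words = prompt.split()
    while len(words) < max_words and successors[words[-1]]:
        words.append(random.choice(successors[words[-1]]))
    return " ".join(words)

print(complete("water extinguishes"))  # -> "water extinguishes fire": pattern recall
# But nothing in the statistics can *derive* a new conclusion by chaining
# "water extinguishes fire" with "fire produces smoke".
```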
Also note how genuinely interesting and intelligent statements (in humans, if that needed clarifying) rely not so much on knowledge as on depth of reasoning.
Pic related, it's GPT-5 trying to learn by training on ever more samey data