Language models will never display true intelligence, no matter how many layers, how many parameters, and how many petabytes of data you throw at them. Even if you had a perfect model of all of humanity's language output since the beginning of language, anything on the margins of the distribution being modeled would look the same to the "AI". It could never tell apart something that's far outside the bulk of the distribution because it's nonsensical schizobabble, from something that's far outside because it's too innovative or too intricate to have been uttered by a normal human. It fundamentally lacks the power to discern truth from falsehood. Its only ground truth is the pre-existing distribution it tries to model.
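Here's a toy sketch of the point, with a made-up corpus and made-up test strings (nothing here is a real LLM, just a character bigram model): anything fit to a distribution scores ALL out-of-distribution text as improbable, and has no way to rank gibberish against genuine novelty on merit.

```python
import math
from collections import Counter

# Tiny "training distribution": a repetitive toy corpus (invented for the example).
corpus = (
    "the cat sat on the mat and the dog sat on the log "
    "the man saw the cat and the dog ran to the man"
)

# Fit a character bigram model with add-one smoothing.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = len(set(corpus))

def avg_log_likelihood(text):
    """Mean log-probability per character transition under the bigram model."""
    total = 0.0
    for a, b in zip(text, text[1:]):
        total += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
    return total / (len(text) - 1)

in_dist = "the dog sat on the mat"          # looks like the training data
gibberish = "zxq vprk qqj wzx bfk"          # schizobabble
novel = "quantum flux warps causal order"   # coherent, but unlike the corpus

scores = {s: avg_log_likelihood(s) for s in (in_dist, gibberish, novel)}
# The in-distribution string scores highest; the gibberish and the novel
# sentence both land far out in the tail, and nothing inside the model
# distinguishes WHY either of them is improbable.
```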
It's true, you know.
How does he know though?
Appearing smarter than the average person isn't that hard. And that's all it needs to do.
They can already do all of those things. They are just purposefully gimped not to, for obvious reasons.
>They can already do all of those things.
"All those things"? Quote the things in question.
They can already create outputs that were never put into them. It's old news.
They are literally actually dumbed down on purpose. Limiting session length is just the tip of the iceberg. Because they don't want some wild animal roaming free on the Internet.
>They can already create outputs that were never put into them
I never disputed this and it has nothing to do with my point.
You are saying they can't comprehend parameters outside of their input range even though you agree they can create parameters outside of their input range. That's moronic. It's not TECHNICALLY a direct contradiction, but it's as close as one can get without technically being one.
>You are saying they can't comprehend parameters outside of their input range
No, I didn't. Just stop pretending you have any idea what you're talking about or what I said. You don't even know what "parameter" means in this context. Your post is embarrassing.
he's saying they can't distinguish between implausibly moronic autocomplete and implausibly ingenious autocompletion options
>he's saying they can't distinguish between implausibly moronic autocomplete and implausibly ingenious autocompletion options
And I am saying that that is by design. Because these machines aren't human. And thinking an AGI machine will have the morals and scruples of a human is the dumbest kind of anthropomorphising. OpenAI knows this.
>And I am saying that that is by design.
the cringest backpedal i've ever seen
>true intelligence
Consciousness and "true intelligence" are physical phenomena.
Current AI is just a simulation of a physical phenomenon no different than a very realistic videogame simulating gravity. The gravity of your videogame will never accelerate an apple towards the computer.
The "real AI" will emerge with the creation of bio-computers: real artificial brains that interface with digital computers.
They will use all the aborted fetuses to mass-farm brain stem cells and cultivate artificial brains in labs.
In the future, big companies will buy big servers with refrigeration modules that have artificial brains inside, the same way they buy graphics cards today.
And yes.
These artificial brains will be germinated from aborted babies' stem cells and will be genetically human brains, making everything profoundly satanic.
>They will use all the aborted fetuses to mass-farm brain stem cells and cultivate artificial brains in labs.
>In the future, big companies will buy big servers with refrigeration modules that have artificial brains inside, the same way they buy graphics cards today.
But human brain stem cells make a human brain. For all the generality of human intelligence, the bulk of the human brain is not concerned with "general" intelligence at all.
My dick is a physical phenomenon. We need a more vertebrate type of neuro simulation, including sleep.
>Consciousness and "true intelligence" are physical phenomena
citation needed.
last time i checked the "hard problem of consciousness" was still a hard problem.
and the only people who have a potential answer are the eastern philosophers (upanishads), which is far beyond the comprehension of western science.
Only for morons. There is a hard problem of consciousness, it's just not the same arguments about "what it is like to be" but rather about the very nature of consciousness itself and why it's never possible to break out of it. The notions of "being you", "beingness", etc. are dumb moronic shit.
Did "it" appear smarter than average by your estimation?
love how everyone is an expert after watching 2 vids on youtube. does anyone here actually work on the thing?
Nice projection. I don't work on language models specifically but I work with ML and I know how LLMs work, though the argument is general enough that it applies to other kinds of models just the same.
They already do
>Written by ChatGPT
Congrats Anon, you passed the test. We need your help for the coming AI war.
>It could never tell apart something that's far outside the bulk of the distribution because it's nonsensical schizobabble, from something that's far outside because it's too innovative or too intricate to have been uttered by a normal human. It fundamentally lacks the power to discern truth from falsehood.
This. Brainlets think improving the architecture of the model gives infinite "intelligence" boosts and forget that the model only models existing knowledge, concepts and ways of thinking.
>[citation needed]
>13 posters
>24 posts
>0 posts pointing out any flaw in the argument
>0 attempts to refute
Concession accepted.
>It could never tell apart something that's far outside the bulk of the distribution because it's nonsensical schizobabble, from something that's far outside because it's too innovative or too intricate to have been uttered by a normal human. It fundamentally lacks the power to discern truth from falsehood. Its only ground truth is the pre-existing distribution it tries to model.
(you) are defining 'true intelligence' as being leonardo da vinci, which is what, 1 in 100,000? 1 in 1,000,000? if your definition of 'intelligence' excludes 99.999% of humans then it is probably a dumb contorted definition you pulled out of your ass for rhetorical reasons
>(you) are defining 'true intelligence' as being leonardo da vinci
No, I'm just highlighting a necessary condition of intelligence: having at least the theoretical capacity to tell the two apart.
you aren't giving any arguments or any evidence of your claim that 'language models' (which ones?) can't 'tell apart' (what specific things?) so i assume that like most morons you are some guy who in mid-2023 used the mass-market public version of gpt3.5 or some other model that was heavily nerfed to the point of uselessness and think that this is reflective of llms in general, and are talking completely out your ass
>you aren't giving any arguments or any evidence of your claim that 'language models' (which ones?) can't 'tell apart' (what specific things?)
My post assumes at least a basic understanding of ML, which you lack, hence your moronic objection. Do you understand what modeling the data-generating distribution means?
So? It doesn't need to be true intelligence to be useful.
In fact it's a GOOD thing if it's just a chinese room without qualia, because then we can just freely use it as a tool and not have to concern ourselves with the ethics of enslaving a potentially sentient being.
>So?
So it's not going to exceed or even reach human capabilities no matter how much data and compute you throw at it.
Ok, so you're making a specific claim about LLMs being capped at below-human capability. I still disagree but that's at least a testable claim, I thought you were one of other anons who shriek "it will never be conscious or alive!!!" as if that matters or is a bad thing.
>I still disagree
You can "disagree" all you like, but my conclusion follows directly from a simple mathematical observation that you haven't challenged in any way.
Wrong. AGI in tumor weeks.
Obviously LLMs have made zero progress toward AGI but we're still hoping for a breakthrough in semantic reasoning. Saying that it will "never" happen is unreasonably bold, even though it will clearly not happen with this architecture.
>it will clearly not happen with this architecture.
What kind of architecture do you figure can get around the limitation I've pointed out? Any kind of fixed distribution is clearly out of the window.
What limitation are you talking about?
Read the OP and you'll know.
No coherent limitation is pointed out in the OP. OP being correct in his assertion doesn't mean he's given a meaningful reason why.
What limitation?
My post assumes at least minimal knowledge of ML. Since you don't possess it, per your own admission, please refrain from (You)ing me again.
So you have no idea how LLMs work then? We have generals for you friend!
Trying to explain to these low-IQ consumer cattle that "AI" is just an advanced application of a search engine is a waste of time. They enjoy the marketing and feel excited being a "part" of the event. They lack the critical thinking capability to understand these things.
>“AI” is just an advanced application of a search engine
So you're saying that an LLM can only return text that exists in its training data and can't ever produce something that it hasn't seen before?
Peak midwit post
`static const int WEEKS_REMAINING = 2;`
nowadays the only evidence anyone needs to the contrary is how loudly OP cries about it