kek. lel even.
GPT is just a machine for generating plausible, vacuous text. Rather than representing an embryonic intelligence, GPT instead highlights the contrast between true intelligence and sophistry.
Sure, it might seem like the dynamic
>feed the model a lot of text => model's ability to simulate logical reasoning increases
implies that GPT will eventually achieve logical reasoning ability. But that isn't really the case. You need strategies/mechanisms beyond just NLP ML.
>just feed it more material then!
The corpus that GPT-3 uses is already ultra-representative. It's simply a large fraction of all text that humans have generated over the last two decades. The abstract syntactic and even semantic value of a Wikipedia article from 2010 describing a city is about identical to that of one from 2023 describing an anatomical feature. The same applies to news articles, blog posts, etc.
If you imagine the graph of a function mapping "training material fed in" to "logical processing power / reasoning ability", then right now we are about at the asymptote. This is not a linear/divergent function, it's convergent (i.e. diminishing returns). You can feed GPT a trillion more texts, but at this point the boost for this task -- logical processing depth/reasoning -- will be minuscule.
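To picture what I mean by convergent, here's a toy saturating curve; the constants are made up, purely illustrative:

import math

# Toy model of the claim: reasoning ability saturates as training text grows.
# ability(x) = A * (1 - exp(-k*x)) converges to A; it never diverges.
A, k = 100.0, 0.5  # made-up constants, purely illustrative
for x in [1, 2, 4, 8, 16, 32]:  # "amount of training text", arbitrary units
    print(f"data={x:>2}  ability={A * (1 - math.exp(-k * x)):6.2f}")
# The jump from 16 to 32 units buys almost nothing: diminishing returns.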
The reason GPT won't get smarter is that its current logical ability sources heavily from the inherent logic of 1) syntax, 2) semantics, 3) knowledge. Note that the last point sounds vaguely anthropomorphic. By "knowledge" I just mean the abstraction layer above semantics. There is an inherent logic to sentences of the form e.g.
>water extinguishes fire
That GPT can track. But this "well of inherent logic" runs dry quickly. It only applies to simple statements.
Also note how genuinely interesting and intelligent statements (in humans, if that needed clarifying) don't rely much on knowledge, but on depth of reasoning.
Pic related, it's GPT-5 trying to learn by training on ever more samey data.
This low-insight book report could very easily have been shat out by a chatbot.
>low-insight book report
I am sorry you got that impression. I hit the 2000-character limit and didn't have space left to add phrasing aimed at a layman audience.
What are the parts you are struggling with?
The current model is good at predicting answers. It just needs a separate module to check its predictions.
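Schematically something like this; generate_candidates and verifier here are hypothetical stand-ins, not any real API:

# Hypothetical sketch of a generate-then-verify loop.
def generate_candidates(model, prompt, n=5):
    # Sample n candidate answers from the language model.
    return [model(prompt) for _ in range(n)]

def best_verified_answer(model, verifier, prompt):
    # Keep only candidates the separate checking module accepts.
    candidates = generate_candidates(model, prompt)
    verified = [c for c in candidates if verifier(prompt, c)]
    return verified[0] if verified else None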
He's basically saying it's retarded and can't infer original thoughts.
In my experience, it talks like a retard compared to other chatbots I've used. It doesn't seem to learn anything.
Part of that is because they neutered its user-training abilities. It can't build off unfiltered user inputs, because when it could, it started saying things that were un-PC.
NO YOU DON'T HAVE AUTISM STFU
i'm feeling like traveling to see my Kazakhstani friend
>The human brain can process 11 million bits of information every second
>Our conscious mind handles 40 to 50 bits of information per second
>Most adults can store between 5 and 9 items in their short-term memory
>Binary digits have 1 bit each; decimal digits have 3.32 bits each; words have about 10 bits each
>Phonological loop is capable of holding around 2 seconds of sound
How much more do they even need?
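For what it's worth, the bits-per-symbol figures check out: information per symbol is the log2 of the number of equally likely alternatives.

import math

# Sanity check on the quoted figures: bits per symbol = log2(alphabet size).
print(math.log2(2))     # binary digit: 1.0 bit
print(math.log2(10))    # decimal digit: ~3.32 bits
# "~10 bits per word" means picking one of about 2**10 = 1024 equally likely
# words; real vocabularies are larger, but words are far from equiprobable.
print(math.log2(1024))  # 10.0 bits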
A soul would help.
People are largely the sum of their experiences, wouldn't you agree? It sounds a little soulless to dismiss their lived experience.
How many parameters is a "lived experience?" Is it more or less than 6 gorillion?
I'm not sure you understand what I'm trying to convey.
Consciousness doesn't arise from a single 'essence'; rather, it's an emergent phenomenon, and the process of memory storage (facing backwards toward the past and forwards toward the future) creates the architecture for it.
>I'm not sure you understand what I'm trying to convey.
I do, but it's too retarded to argue with. I'd rather make fun of you for believing that a linked list will magically develop intelligence.
That's not what I'm saying at all; you're keen on misrepresenting my argument. Regardless, consciousness exists because of cognitive scaffolding. No amount of teleological thinking or insisting on inherent essences will change that.
>People are largely the sum of their experiences
A soul preexists and outlasts any experiences, you dumbass. Experiences come and go; the unity that can reflect on all of those experiences is the actual you.
bottle a soul and I'll believe you
save your magic bullshit for conning old ladies out of the inheritance they'd leave their children, shitbeard
the handful out of the quadrintiseptillion ways to wire the electrochemical machinery up so that those processing capabilities produce useful outputs
Fair enough. But you'd think it'd just be a matter of scaling down the information processing capabilities, no? Are there any ML projects utilizing enactivism or embodiment?
I might just be talking trash now, but I don't believe we are able to model even something like a fruit fly.
or take a glass of water, how many molecules are in it?
you could simplify a simulation of it by making various approximations, but it has more water molecules than there are transistors on the CPU
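back of the envelope, assuming a 250 ml glass (numbers approximate):

# water molecules in a ~250 ml glass vs transistors on a CPU
AVOGADRO = 6.022e23      # molecules per mole
moles = 250.0 / 18.0     # ~250 g of water / ~18 g/mol molar mass of H2O
molecules = moles * AVOGADRO
transistors = 1e11       # a generous count for a modern CPU
print(f"{molecules:.2e} molecules vs {transistors:.0e} transistors")
print(f"ratio: {molecules / transistors:.1e}")  # ~10**13 molecules per transistor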
>The human brain can process 11 million bits of information every second
Why would anyone even believe this garbage? The brain isn't responsible for intelligence to begin with, and even if it were, it isn't a bunch of computer chips and circuits of binary logic. Why would anyone say such a retarded statement?
This so-called AI is fucking retarded. It can't even give an estimated number of mammals on Earth based on its 2021 database. I generated 2 errors in a 60-minute session and sent feedback to the devs. I hope Microsoft buys this shit and sends it to the graveyard like they did with every other IP. This shit is already the biggest disappointment of 2023.
GPT-3 is sort of like talking to a schizo redneck savant that gets 25% of everything wrong. Still, talking to GPT-3 is sometimes far less aggravating than using a search engine. I would say ChatGPT is an improvement over a search engine.
feed an ML model infinite good books, it won't be able to write good books. a book is a projection of a whole world in the author's mind onto the pages. the plot doesn't exist in the text. the ML model only maps text to more text. it has no concept of plot.
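structurally, that's the entire loop; model() below is a stand-in for the trained network, not any real API:

# the whole interface, structurally: tokens in, one token out, repeat.
def complete(model, tokens, n_new):
    for _ in range(n_new):
        next_token = model(tokens)   # predict one next token from the context
        tokens = tokens + [next_token]
    return tokens
# nowhere in this loop is there a "plot", a "world", or a "character" --
# just a map from token sequences to longer token sequences.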
however this will nonetheless be a massive leap forward in non-blue collar productivity because so much work, even for "creative" careers, really is just filling in the blanks and boilerplate. people and firms that figure out how to leverage these all-purpose pipe fitters in their domain will totally dominate the field.
This pretty much. ML algorithms might put paralegals, codemonkeys, and other midwit white collar wagies out of work.
I got it to write a pretty decent Dramione lemon, it had ok plot progression for 900 words.
They patched it the next day lol.
I think I will put AGI on my to-do list because apparently all of you AI retards (not talking about you OP) have no idea what intelligence even means
https://archived.moe/lit/thread/16639317/
The word "if" is doing all the work in that hypothetical.
Science and computer science will always be based on theory and hypothesis
If we could time travel, would you agree that there's an objective moral imperative for me to go back in time and shove you in a locker?
>if we could accurately model the emotion/feeling experiences of pain receptors and pleasure through neurotransmitters, modelled in a brain with a short-term and long-term memory, capable of forming intentions and reflecting on the present... would you say that's conscious
See the link here; you can start with the "artificial intelligence" part.
This is an intensely midwit take, demonstrating a tiny and woefully insufficient amount of knowledge of NLP progress. There hasn't been a single case yet where scaling up the number of parameters and the amount of training did not result in improved performance across all NLP tasks.
How many "parameters" does the brain have?
100+ trillion synapses
>100+ trillion synapses
I looked it up and it's 1,000 trillion synapses and 100 billion neurons. I can see why you call the synapses the parameters instead of the neurons, because I looked up how many "neurons" or nodes GPT-3 has and apparently it's 175 billion, more than the human brain. I wonder why it hasn't become conscious then, if it has almost double the neurons? I guess it just needs a few gazillion more connections so the current scientific theory of how the brain works starts to work. Still, it has a long way to go, since a brain isn't really the same size as a computer.
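For scale, using the numbers in this thread (noting that GPT-3's 175 billion are parameters, i.e. weights, which are closer to synapses than to neurons):

# comparing the counts thrown around in this thread
synapses = 1000e12        # ~1,000 trillion synapses (per the post above)
neurons = 100e9           # ~100 billion neurons
gpt3_params = 175e9       # GPT-3's parameter count
print(f"synapses per neuron: {synapses / neurons:,.0f}")               # ~10,000
print(f"brain synapses vs GPT-3 parameters: {synapses / gpt3_params:.0f}x")  # ~5,714x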
human brains have an experiential reality and evolved to have consciousness poured into them from other minds, whereas we're just feeding a simulation data and seeing it near consciousness all on its own.
give them time, friend.
Do not dare call me friend materialist evolutionist nagger, you scum are worth less than the shit I took 5 minutes ago
>human brains have an experiential reality and evolved to have consciousness poured into them from other minds
This has to be the most retarded thing I've read today
>whereas we're just feeding a simulation data and seeing it near consciousness all on its own.
Ok this is a close one.
I guess with double the neurons a human has, and with more "data" "poured in" than any human being in history, it still isn't capable of what any human being comes with by default, without any input from other people.
I just wrote a moderately long attempt to explain myself better to you, but as I wrote it I realized you are literally too stupid and probably too unknowledgeable to grasp what I was talking about.
I literally addressed everything you mentioned. You just ignored it. It flew over your head. Frustrating.
Go drown. According to your logic, drinking water is strictly beneficial for humans, meaning a point can never be reached where more water stops having a positive effect on your health.
>According to your logic, drinking water is strictly beneficial for humans, meaning a point can never be reached where more water stops having a positive effect on your health.
Are you retarded? That is literally the exact opposite of my logic. Let me try this again, but more slowly for you. I said
>There hasn't been a single case yet where scaling up the number of parameters and the amount of training did not result in improved performance across all NLP tasks.
This is nothing like your braindead water analogy, because excessive water has already been demonstrated to cause illness and death. We have no indication whatsoever of where the limit might be for LLMs.
A less retarded example would be Moore's law. The number of transistors on integrated circuits increased exponentially for decades, but we always knew there would be limits on how small transistors could be made. Nevertheless, those predictions were beaten 4 or 5 times by new innovations before Moore's law actually slowed down.
In this case, we don't even have an idea of when parameter scaling might level off. You are basing your OP on nothing, because you are a retard drowning in midwit assumptions, and your positions are based on emotion, not on any logical basis or knowledge of the field.
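If you want actual numbers instead of emotion: the published scaling fits (Kaplan et al. 2020; constants approximate, quoted from memory) have test loss falling as a smooth power law in parameter count, with no cliff anywhere in the measured range.

# Sketch of the reported power-law fit for LM test loss vs parameter count,
# L(N) = (Nc / N) ** alpha; constants approximate.
N_C, ALPHA = 8.8e13, 0.076
for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
    print(f"params={n:.0e}  predicted loss={(N_C / n) ** ALPHA:.3f}")
# The fitted curve falls smoothly with scale and contains no built-in plateau;
# where it eventually breaks is an empirical question, which is the point.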
>GPT instead highlights the contrast between true intelligence and sophistry.
Actually, your post does that. You literally think the training set is the limiting factor in model sophistication and ability, which is about as stupid and nonsensical as you can get.
You are responding to a bot you know that right?
>You are responding to a bot you know that right?
And are the bots in the room with you right now?
The brains are wired differently. Stupid retard AI believers just wire their grid forward-only, without feeding the output back to the input. Layers won't help you, and God laughs at your incompetence.
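For what it's worth, the distinction being gestured at is real: feedforward versus recurrent wiring. A toy numpy sketch, weights random and purely illustrative:

import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 4))
W_rec = rng.normal(size=(4, 4))

def feedforward(x):
    # one pass, input straight to output; nothing flows back
    return np.tanh(W_in @ x)

def recurrent(x, steps=3):
    # the output state h is fed back in at every step
    h = np.zeros(4)
    for _ in range(steps):
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

x = rng.normal(size=4)
print(feedforward(x))
print(recurrent(x))

Though to be fair, autoregressive sampling does feed each output token back in as input, so the jab is at best half right.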
>It's another low IQ/artist that used GPT for 5 minutes and couldn't visualize how to maximize its benefit.
Retards like you are the first ones that are becoming obsolete, anon. If I were you I'd start getting smarter, quickly.
sophistry objectively takes like 110 IQ, so I don't see how that isn't a step forward compared to the 70 IQ schizophrenia that was GPT-2
OP's right. stay mad, retard science trannies
Another flaw of the tech tranny "neuron" is that it's static after "learning". Meanwhile, a human brain cell can just go into sleep mode, etc.
True. Language models will never be AGI.
The AI wasn't trained on Das Kapital, so it cannot understand the meaning of value.
Value is entirely subjective and has nothing to do with how much time a worker spent on it.
>technology will never get better!
>even though I'm literally referencing the 3rd iteration, because the 1st and 2nd were so bad they weren't even worth mentioning.
GPT-2 was actually better than GPT-3. It had fewer hardcoded soi-texts so it produced better results.