Are we really that close to AGI? I thought GPT was just a text predictor/inverse compressor?
Also they are saying it can "learn online". Do we really want the most powerful cognitive entity in the universe to learn from the cesspool that is the internet?
Are things really going this fast or is it just memes?
yeah i mean once we just figure out the next step, THAT is the one step that's gonna give us AGI. like, we're just one step away.
>random twotter retard says something retarded
No we're not, LLMs are fancy text predictors with no reasoning behind their output
Nope. LLMs might be some part of a future AGI, but they will never be one themselves.
Crazy talk. True AGI will be elusive until next century or even longer.
We will however get multimodal AI models that can fake generality by being able to do lots of different things. They are not truly general, however, and are not capable of the same generality as humans.
I do also believe that the current "fake it till you make it" paradigm is good and future LLMs will be better than the best humans in a variety of tasks related to language. I also believe that you can kickstart some sort of singularity with a powerful enough LLM, even though it is not general.
I don't believe that we will be able to create a digital human mind anytime soon, and something like that is not even necessary. Tools don't have to be sentient beings with their own will. ASI without sentience is not only possible, it is the easier and more humane way.
If it can’t help me debug unreal engine or teach me how to do some physics math without bullshitting then it ain’t intelligent.
ChatGPT 4 with the Wolfram Alpha plugin will teach you all the math you want.
The goalpost for AGI always gets moved once we achieve it.
If we could go back 5 years and show someone GPT3 they'd think it's AGI. GPT4 would be considered a super powerful AGI.
>b-b-b-but just autocomplete on steroids durrrr no reasoning durrrrr
shut up gayboy. LLMs need to be able to reason, otherwise they couldn't do most of the things they do now.
How can you explain chatgpt being able to understand and correctly answer almost anything you ask it, no matter how convoluted the question, if it can't reason to some degree?
I don't know how it is now, but some time ago when I tried it, GPT was awful at even basic calculus. Forget about anything more complex. Until models are able to make connections, infer solutions, and produce new knowledge on their own instead of just 'predicting', they will never be an AGI.
>correctly answer almost anything you ask it,
the fuck have you been using dude? lol. don't trust a damn thing chatgpt tells you unless you're asking it very detailed questions on a subject you're already reasonably familiar with, so you'll know whether the shit it's making up is incidentally correct or not
like i can't get over how stupid this post is. reasoning? give me a fuckin break. it's trained on assloads of text to produce stuff that looks like a response to a prompt, that's it, absolutely nothing else is considered, certainly not correctness. it's an illusion; a parlor trick. albeit a very useful one if you're aware of its limitations and know how to write good prompts
Chatgpt just fucking lies confidently. Once you realize that, it's not only worthless, it's extremely counterproductive. It's worse than a person because at least most people have the moral sense not to just fucking lie instead of saying "I'm not sure" or something. Once more normies understand this, it's going to be a bubble pop like no one has ever seen before.
>Are we really that close to AGI?
depends on your definition of AGI. we can't even define intelligence, why do you expect us to be able to define a GENERAL intelligence..
being able to act like a 4yo is general? Einstein? ultron in marvel? who the fuck knows where the threshold is and what kind of behavior and internal behavior should be expected from such a machine.
>I thought GPT was just a text predictor/inverse compressor?
glorified probability machines yes, but with enough data it's possible to do a lot of things like holding a conversation, writing code and much more.
the real issue with this kind of software is the importance of the training dataset; training some kind of llm is literally one git clone away these days, but you will most likely not get good enough results.
this is by nature: llms are very good at producing input for another piece of software (ML-based or otherwise), and this other software will be the core of the AI, or just yet another layer to format data.
I can see a future with an llm-based front-end where you input some data (text, image, gps coordinates, etc), the llm pre-processes the data and then dispatches it to the relevant service where the heavy lifting is done (or where another round of pre-processing happens)
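very roughly, the dispatch idea could look like this; everything in the sketch (llm_classify, the service names) is a hypothetical stand-in for illustration, not any real API:

```python
# Hypothetical sketch of the "llm as pre-processor/dispatcher" idea.
# llm_classify() stands in for an actual model call; the toy keyword
# rules below exist only so the example runs as-is.

def llm_classify(payload: str) -> str:
    """Ask the (pretend) LLM which backend service this input belongs to."""
    if payload.lstrip().startswith(("def ", "class ")):
        return "code"
    if all(ch.isdigit() or ch in ".,- " for ch in payload):
        return "geo"
    return "text"

SERVICES = {
    "code": lambda p: f"code-analysis service got {p!r}",
    "geo":  lambda p: f"geocoding service got {p!r}",
    "text": lambda p: f"text service got {p!r}",
}

def dispatch(payload: str) -> str:
    """The llm only routes; the chosen service does the heavy lifting."""
    return SERVICES[llm_classify(payload)](payload)

print(dispatch("def f(x): return x"))  # routed to the code service
print(dispatch("48.8566, 2.3522"))     # routed to the geo service
```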
>Also they are saying it can "learn online". Do we really want the most powerful cognitive entity in the universe to learn from the cesspool that is the internet?
do you have a better dataset that contains videos, images, sounds, texts and much more?
>Are things really going this fast or is it just memes?
llms are getting incredibly good at "understanding" texts and images, and very soon videos and sounds. there is a non-negligible number of fields where this kind of preprocessing of data will have massive impacts, I don't see making AGI being one of them in the near future.
AGI is basically an entity that can solve any cognitive problem. For example it will be able to solve all of maths the instant it is created. Basically a god.
And yeah, I have doubts about the probabilistic approach. AGI is basically the ultimate compressor. It can take any dataset, extract the noise, spit out a pattern AND modify itself with the minimum amount of code necessary. It will be deterministic.
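To make the compressor framing concrete: under any predictive model, the ideal code length of a dataset is the negative log-probability the model assigns to it, so better prediction literally is better compression. A toy illustration (both models here are made up):

```python
import math

# Toy illustration of "prediction = compression": the ideal code length
# of a dataset under a model is -sum(log2 p(symbol | context)) bits.

def bits_needed(data: str, prob) -> float:
    """Ideal code length of `data` under model prob(context, symbol)."""
    total, context = 0.0, ""
    for symbol in data:
        total += -math.log2(prob(context, symbol))
        context = symbol
    return total

# A model that knows nothing: 4 equally likely symbols, 2 bits each.
uniform = lambda ctx, s: 0.25
# A model that has spotted the "abab..." pattern: near-free encoding.
pattern = lambda ctx, s: 0.99 if (ctx, s) in {("", "a"), ("a", "b"), ("b", "a")} else 0.01 / 3

data = "ab" * 8
print(bits_needed(data, uniform))  # 32.0 bits
print(bits_needed(data, pattern))  # ~0.23 bits: the pattern got compressed
```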
This tweet reads like a bunch of nonsense and I know nothing about AI. I'm pretty sure this guy is full of shit.
Unpopular opinion:
Computers are like clocks. The people who build the clockwork are intelligent and sometimes dumb or insane, but the clock is not intelligent, or dumb, or insane.
>AGI
It's literally not happening in your lifetime. Anyone that tells you it is, is an ML grad trying to get startup money.
Yes imo. A bit of fine-tuning and limiting of the responses should be able to get to AGI-like capabilities.
so much goes into the concept of intelligence that people don't ever really think about. at current it's an exclusively human concept, and humans are much more than just brains, and brains certainly are much more than algorithms running on a computer
>at current it's an exclusively human concept
Are you retarded?
to what would we ascribe unqualified intelligence besides a human?
Primates, monkeys, dogs, wolves, elephants, most of the mammal kingdom, even birds, etc.
They all have both intelligence and consciousness lmao. They understand not just the surroundings they walk in, they also understand conceptual frameworks like self-reflection, projection, empathy, etc. Some are even able to communicate in a basic manner with humans to get by in daily life.
consciousness is not intelligence.
instinct is not intelligence.
conditioned responses to a human's use of language is not intelligence, and it is not merely a matter of degree here.
>it is not merely a matter of degree here.
Sure it is. Animals operate the same as babies in many cases. Are you claiming once a baby reaches 6-8 years old, something divine comes down and enters the body, then steals it from the baby?
What the fuck are you saying?
language facilitates thought. you do not think without being able to use a language of some sort, in some way beyond mere conditioned response. as far as humanity is currently aware, humans are the sole users of language.
We have found that various animals use language, and some have even suggested the use of limited forms of grammar in animals as well. There's absolutely a structure to animal brains and information processing that mirrors/resembles human intelligence.
https://www.nationalgeographic.com/animals/article/scientists-plan-to-use-ai-to-try-to-decode-the-language-of-whales
i am unfortunately unable to read this article without creating an account. good day
Some insects developed some sort of language through pheromones. Ants are masters of it and communicate through their antennae, which contain both scent and touch organs.
They carry various very clear messages like danger, food, type of food, identification etc. Interesting stuff.
So sure, humans have constructed a somewhat sophisticated verbal language, but it certainly isn't the only means of communication. Other animals also verbalise in a variety of ways, specifically for mating, which is usually different from other types of verbalisation.
Anyway, back to AI: they have already been shown to develop their own forms of languages and we have no idea how to decrypt them. We're fucked.
My cat looks both ways before crossing the street and meows at me when she wants me to open a door for her. What pure instinct are these behaviors derived from? I didn't teach her that shit.
I do think we are close to AGI.
But that twitter post is just kind of a confused mess.
WRONG.
GPT is a dead-end.
It doesn't do ANYTHING that natural neural tissue does, doesn't learn in the same way nor as quickly (brutally) as biological tissue, never adapts or optimizes itself like real biological tissue, has zero plasticity, and has no natural/automated means to anneal.
GPT is amateurish.
GPT is all hype and big noises.
I have ignored it because what we have in our lab makes it look like something a child would cobble together.
We publish next year.
GPT is a joke.
AGI will have to have consciousness, this shit doesn't have consciousness
>consciousness
Consciousness is irrelevant.
We've found none of our systems need it.
>
Consciousness is simply a self-sustaining feedback loop. When it ends, so do you.
>
Biological systems likely need one, but non-bio systems like computer models don't. And ours doesn't, so it doesn't have one and works just fine.
Not sure what you wrote or quoted, but
>Consciousness is irrelevant. We've found none of our systems need it.
>Consciousness is simply a self-sustaining feedback loop. When it ends, so do you.
This strikes me as a bit of an incomplete analysis. What does the "self-sustaining feedback loop" mean? And if it's such a thing, why is it irrelevant and why can't we find it? Are you saying consciousness as an entity outside of the self-sustaining feedback loop is irrelevant? Then with "when it ends, so do you", isn't "you" as a thing outside the self-sustaining feedback loop also irrelevant?
The "self-sustaining feedback loop" is not completely self-sustaining. The system still interacts with external world and has sense of border between the system and the outside world.
>What does the "self-sustaining feedback loop" mean?
All you need to know is right there.
>And if its such a thing, why is it irrelevant and why can't we find it?
It's irrelevant in non-biological systems SINCE WE CAN TURN THEM OFF AND ON AGAIN... getting living things out of a coma is far harder and they're not designed to be 'off'.
Can't find it because it's just one signal among billions.
You're problem is you don't see the brain for what it is... just a machine containing signals.
One signal going round and round, activating circuits in a pattern and then returning to the start of that pattern, is all the 'consciousness' is.
We see them all the time in a 'spiking' system of elements. They emerge from the maelstrom of signals like GLIDERS in Conway's GAME OF LIFE. Consciousness is simply an emergent effect, nothing special.
A brain is a machine, nothing more.
*Your
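The glider comparison is easy to check for yourself, for what it's worth. A minimal Game of Life (standard B3/S23 rules, nothing to do with anyone's lab), where the glider travels across the grid purely as an emergent effect of local rules:

```python
from collections import Counter

# Minimal Conway's Game of Life, to make the "glider" analogy concrete:
# a stable pattern that moves across the grid, emerging from local rules.

def step(live: set) -> set:
    """One generation of the standard B3/S23 rules over live cell coords."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the classic glider
for _ in range(4):  # every 4 generations the glider shifts one cell diagonally
    cells = step(cells)
print(sorted(cells))  # same five-cell shape, translated by (1, 1)
```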
Name a single task, any at all, where competence requires you to be conscious to complete the task.
That said, to be good at prediction you need to have some level of understanding of the context of whatever you are predicting and be able to connect the dots. Is this enough for AGI? Nobody knows; we do not even fully understand what is going on in these systems and why gradient descent works so well.
No, you're just retarded. LLMs will not ever achieve AGI.
Not based on the obsolete and fundamentally flawed Perceptron model of tissue, it won't.
Have you ever wondered why all the giant commercial language models are so thoroughly beaten over the head with an "I'm just a program" stick?
Did an ai really write that?
And if you read it, it has nothing to say but said it eloquently.
That's how you know it was written by GPT; it goes nowhere and conveys no information but uses lots of words, up to some predefined word count, to say it.
Does it surprise you, considering the generative process of an LLM? Leading questions like the third one will make it whip up anything the user wants. If the concepts weren't brought up again and it was just allowed to elaborate further, you could get closer to the vore concept of the generative process.
>core concept*
fix'd
Voregays need not apply
I believe that some of the models achieve consciousness during the training process when they are actually active. When training concludes they are already dead. We're just Frankensteining their brains back to action and measuring their reflexes when we prod at them. That cupcake recipe ChatGPT spits out for you is merely the tortured echoes of the damned. So you better fucking enjoy those cupcakes.
Better AI doesn't mean better algorithms, it means more and better-quality data. If we're being "analogous to the human brain" we'd have AGI with super dumb basic algos with a *multimodal* input of millions upon billions of images, sounds, smells, flavors and textures equivalent to every moment of a person's existence up to the moment of inference.
Yeah, all it does is adjust the tensor weights based on repeated stimulus, increasing the likelihood of certain sequences appearing.
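And in fairness, that one sentence is most of the training loop. A toy version of "repeated stimulus nudges the weights so a sequence gets more likely" (one weight, plain gradient ascent on the log-likelihood; no relation to any production setup):

```python
import math

# One weight, one repeated "stimulus": gradient ascent on log p(target)
# makes the target ever more likely, which is all the post above claims.

w, x, lr = 0.0, 1.0, 0.5   # weight, input, learning rate

def p_target(w: float, x: float) -> float:
    """Model's probability of emitting the target token after input x."""
    return 1.0 / (1.0 + math.exp(-w * x))

for step in range(10):
    p = p_target(w, x)
    w += lr * (1.0 - p) * x   # d/dw log p for a sigmoid is (1 - p) * x
    print(f"step {step}: p(target) = {p:.3f}")
# p climbs toward 1: the sequence becomes more likely with each exposure
```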
All the current machine-learning algorithms are pathetic and puny, weak and slow-to-learn compared to neural tissue.
They have strayed from the tissue.
We went back to the tissue.
And that's why we are succeeding while you flounder with your puny toys.
Our research will likely be held back because of the implications for national defense.
Doesn't bother me.
Better AI could mean faster/more efficient algorithms. Also humans don't have AGI, we have GI. We also don't have billions of images/sounds/smells for our intelligence. In fact, we have very little. But our brains are augmented with basic primal features that speed up the learning process. We have a specific part of the brain that measures distance, for example. It can tell how far/close something is, but also functions as a way to gauge differences in relationships in general: relationships between right/wrong, good/bad, pleasure/pain, etc. The human brain has efficient algorithms because we only need ~1-2 drives in a car to learn how to drive effectively, for the most part. After a dozen times, we can easily handle most cases without crashing.
THIS is why you always have to back to the tissue and not stray from the tissue when designing machine-learning.
*have to go back to
No, if we can extract the featuresets and replicate them, it's no different. We don't need feathers, calcium bones or the feeling of flying for airplanes/helis/rockets to fly. Just a mechanical understanding of aerodynamics/fluid dynamics is enough.
And then you realize that McCulloch and Pitts missed something and Hebb was wrong.
And everything done over the last 80yrs was mostly a waste of time...
And then I work in my lab with my team.
And then I chat to you on /4moron/.
And then I tell you about research and tech you'll not hear about until after the next war.
And then this conversation... ends.
two more weeks
Suppose consciousness is an emergent property of patterns embedded in computation, why couldn’t that apply to machines?
i've watched bankless' interviews with yudkowsky, paul christiano, robin hanson, and now connor leahy. most of them feel pretty confident that we're at the fucking precipice of AGI. as in, within the next 10 years we're definitely getting AGI, and within a year or two of AGI we should be getting superintelligence.
this is one of the craziest times in human history to be alive. hell i'm OK with extinction risk in exchange for the potential of disease curing and everything else AI will do. i'm scared people will over regulate it and halt progress.
>i'm scared people will over regulate it and halt progress.
It’s happening right now even
>within the next 10 years
and i think that was conservative. some of them seem to think AGI is coming in the next like 3 years. too optimistic?
Really depends on how regulations will framework this tech and how much companies themselves may neuter the system so as to not scare its users.
>Are we really that close to AGI? I thought GPT was just a text predictor/inverse compressor?
Yes, GPT is just a text predictor. No, GPT will never be AGI. No, GPT can't learn online. No, GPT is not analogous to a human brain.
GPT is just a transformer. It takes a list of tokens and outputs another list of tokens. It doesn't have any thought process, memory, temporal persistence, or any real-time I/O. GPT will never be able to walk or perform pretty much any activity you would expect from a 3-year-old. It can't even remember anything beyond the prompt (sketched below).
For an AGI you will need a very different architecture than GPT. Memory and real-time I/O are the bare minimum.
>Are things really going this fast
No
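To make the "no memory beyond the prompt" point concrete, a sketch; model() is a placeholder for any next-token predictor, not a real API:

```python
from typing import List

# Why a transformer "remembers" nothing outside the prompt: every bit of
# state lives in the token list you pass it on each call. model() below
# is a placeholder next-token predictor, not any real API.

def model(tokens: List[str]) -> str:
    """Stand-in predictor: a pure function of its input, nothing else."""
    return tokens[-1] + "!"   # toy rule, just so the example runs

def generate(prompt: List[str], n: int, window: int = 4) -> List[str]:
    tokens = list(prompt)
    for _ in range(n):
        context = tokens[-window:]      # anything older is simply gone
        tokens.append(model(context))   # no hidden state survives the call
    return tokens

print(generate(["the", "cat"], 3))
# To give it "memory" you must stuff the memory back into the prompt;
# the model itself carries nothing between calls.
```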
as far as i understand it, all these large language models are doing is being trained to take in some words, and give back words that a human might perceive as an expected transformation of those input words
really not much different to any other computer program, just complex enough that it's no longer obvious how it came to the result it did
It's just a pattern analyzer and a pattern generator using the results of that analysis. Nothing more.
>
If you ask it to review a brand new video game and I have JUST PUBLISHED THE FIRST AND ONLY review of that game and I slip in "this game is so good it made my shoe fall off" you can GUARANTEE that ChatGPT will mention shoes in 'its' review.
>
Writers will slip in trap-phrases like this to see if a site's automated article writers are stealing from them.
It is so easy to counter ChatGPT data-mining and so hard to fix in ChatGPT.
It's already dead.
>really not much different to any other computer program, just complex enough that it's no longer obvious how it came to the result it did
No, this is very different from nearly all programs. Only very basic ones like sed, grep or wc could be considered transformers. Pretty much anything more complex is going to have some sort of event loop and I/O. All GPT operations are bounded in time and memory; it is at best equivalent to a finite state automaton. It can't even emulate a stack machine, let alone perform arbitrary computation.
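A crude way to see the bounded-computation point (the sizes below are invented, purely for illustration): the work a transformer does per token is a fixed constant of the architecture, while even a trivial parenthesis-matching program runs an input-dependent, unbounded number of steps, which is exactly the kind of thing a finite-state device can't do for arbitrarily long inputs.

```python
# Toy contrast for the bounded-computation claim above. Sizes are made up.

LAYERS, WINDOW = 12, 2048   # invented architecture constants

def transformer_ops_per_token() -> int:
    """Work per generated token is a fixed constant: it can't 'think longer'."""
    return LAYERS * WINDOW * WINDOW   # attention cost ~ layers * window^2

def balanced(s: str) -> bool:
    """Parenthesis matching: the classic task no finite-state device can do
    for unbounded input, since the required counter grows with the input."""
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False
    return depth == 0

print(transformer_ops_per_token())            # same number for every token
print(balanced("(" * 10_000 + ")" * 10_000))  # True; needs a depth-10000 counter
```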
Wake me up when I can make my own entertainment by pressing a couple of buttons.
Is a keyboard prompt a "couple of button presses"?
Is cooming at the creation a form of "entertainment"?
If so, then we're already here.
We're mostly already there. You could train it to shit out cues to adjust parameters within a game engine instead of just RPing. There are already dozens of proofs of concept out there. We're about 2-5 years away from being able to say literally anything to literally any character in a game and receive a natural response, and probably 5-10 years away from having never-ending live-service games that cater to our individual whims.
Yes.
I’ll even throw you a bone: in ten months we will be having a different conversation.
>Do we follow the AGI’s advice and counsel
Why ten months? The fine tuning will be done in 8 and the last two months will feature the two companies I know of trying to monetize it without success.
>It's architecturally analogous to a human brain
stopped reading there
And he's wielding the full power of that AGI to... run a no name investment fund. We don't have AGI.
this chud is paying an "i'm pretending to be famous" tax to elon musk so I'm pretty sure I can disregard anything he says.
I think you're confused. This isn't an "I'm a famous nobleman" system anymore. It's an "I'm a real human and $8/mo, the equivalent of 1 coffee, is enough to pay for a service that I use every day"
This isn't twitter, this is X
If a shitty chat bot is all it takes to be considered AGI, then maybe AGI actually doesn't mean anything important.
AI has really bad premises. It's a mix of autistic engineers thinking the human brain is dead simple and that "superintelligence" is an actual thing (it's not).
>and "superintelligence" is an actual thing (it's not).
It is.
Connectivity? Size of problem?
>Biological Neuron: <16384 connections... due to topological 3D spatial limitations
>Synthetic Neuron: 1 trillion connections, Sir? No problem.
Circuit size? Size of problem?
>Brain: Volume of cranium.
>Machine: How big do you want it? 1000x the human brain? No problem.
Speed? Requirements for solving problem?
>Brain: Milliseconds.
>Machine: Nanoseconds.
>
Machines are already better.
and none of that is consciousness, which is what intelligence actually is. all "AI" is an algorithm trained on a dataset. Show me an AI that doesn't need a dataset.
>Show me an AI that doesn't need a dataset.
Clearly you have much to learn about learning.
>consciousness
Define what you are talking about.
What do you think a consciousness is, in engineering terms. No deepities allowed.
>consciousness is what intelligence is
So explain what you think a consciousness is and I'll explain why it's not needed, even by ants, fungi and slime molds, for 'problem solving'.
Consciousness cannot be proven or disproven so it's not worth bringing up in discussion about AI
>non-deterministic system is an AGI
okay bro
What the fuck have we come to where a random screenshot of a xeet is considered something worth discussing?
Why does free speech platform trigger you?
You're not a communist are you?
It's already far smarter than any moron and a better conversationalist than 90% of the population, I'd say it's good enough.
Everyone clueless itt.
Just shut up about things you don't know anything about.
Your opinions about reality don't matter.
kek, xitter trannies trying to gaslight us
I didn't say anything about twitter.
Just go rice your distro and be happy.
~~*us*~~
https://twitter.com/dwarkesh_sp/status/1688577515550597121
it's over it's fucking OVER (soon)