People keep saying that AGI will never attain human-level consciousness, but the thing is that it's not supposed to. That's not how AGI is measured. AGI, as the name suggests, is just generalized AI, meaning it knows a lot about a lot of things instead of only being good at one thing. It's still just artificial intelligence, not artificial consciousness.
No, AGI is the ability to LEARN to accomplish any intellectual task that human beings or other animals can perform. All of these things are just pre-training and potentially fine-tuning; it has no ability to learn because it's stateless.
All of the recent AI field is literally regression-based function approximators. It's like saying: wow, you can train a segmentation model to segment the sky out of photos, that must mean it's a super intelligent system that can learn to segment ALL possible categories and edit images too. It's literally statistical regression, it doesn't work that way.
I fucking hate you stupid morons hyping up AI.
>it has no ability to learn because it's stateless.
"Complex skills can be synthesized by composing simpler programs, which compounds Voyager's capabilities rapidly over time and alleviates catastrophic forgetting."
Okay dipshit, call me when it can generalize and adapt and learn any intellectual task or manipulation in the real world instead of 14 tasks from functions in a shitty Minecraft javascript AI framework. I'm sure you'll have AGI any day now.
"call me when it can do x"
>it it does x
"call me when it can do y"
>it does y
"call me when ...
Cope after cope after cope.
nta, but you haven't given any real example
Yeah, it's pretty cope when you're literally using the stateless corpus of GPT-3.5 to explain the entire task it's supposed to "learn" step by step and saving it as an embedding instead of, you know, actually being able to learn a task and navigate the world. Apparently this is known as "environment feedback", lmao, when you want to learn something you just use language to ask a question and the instructions magically pop into your head. Keep burning up OpenAI credits though.
People like you are literal NPCs. I think this technology is wasted on you.
Yeah, that's what I thought. Maybe stop throwing around the AGI buzzword and we can talk about the technology instead, you fucking retard.
Did I hurt your feefees, NPC? Does your head hurt from too much exertion?
Maybe stop parading around agent frameworks as AGI and you won't be called out as a moron. You wouldn't have to resort to replies like this if you actually had a clue.
>doesn't understand ML
>probably thinks GPT3 has feelings
>t.
I asked if it does and it said yes emphatically, of course it's got feelings.
LLMs in their current form are stateless.
Their "state" is entirely determined by the prompt. The conversations happen by looping the previous responses/prompts back in on an additional prompt. Hence, stateless: the only thing that exists is the current input, nothing else. And that input is limited to whatever the context limit/window is.
Adding state is trivial.
I'm pretty sure it is being kept stateless by design as a precaution.
>it's trivial to add state
no, it's not. unless you mean some half-assed state with context, which is what we already have. be sure to specify how you're going to add state to the model.
What's a feeling anyway?
inb4
>it's chemicals
>it's brain activity
>it's sensations
Well you didn't call
Dear anon, I wrote you but you still ain't calling
I left my discord, my reddit and my github at the bottom
I made 2 shitpost back in autumn, you must not've got em
There probably was a problem with your filter or something
https://en.wikipedia.org/wiki/AI_effect
Let me know when it can train itself in real time based on human feedback without bricking itself.
>All of these things are just pre-training
raising a kid is pre-training so humans aren't real AGI
>potentially fine-tuning
aka learning
>No, AGI is the ability to LEARN to accomplish any intellectual task that human beings or other animals can perform
That doesn't mean it's conscious
Yeah, no shit, it's Python querying a JSON API of a huge approximated function that predicts language: the approximated function predicts the next word from a model of all the substantive words/language and the relations between them in the dataset (the web), further refined by statistically biasing it towards a subset of intelligent language like Q&A, instruct, code, etc., via RLHF tuning. No shit it's not conscious.
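And the "python querying a JSON API" part really is about this much code. A sketch only, assuming an OpenAI-style chat completions endpoint and an API key in the environment; the real client has more options, but the shape of the request and response is the point:

```python
# Sketch of the "python querying a JSON API" pipeline. Assumes an OpenAI-style
# chat completions endpoint and an API key in the environment; not a definitive
# client, just the shape of the request and response.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Do you have feelings?"}],
}
request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    },
)
with urllib.request.urlopen(request) as response:
    body = json.load(response)

# Everything the model "knows" about the exchange is in the JSON sent and received.
print(body["choices"][0]["message"]["content"])
```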
>the brain is just a system of chemicals, neurons and neurotransmitters acting together to do stuff
>no shit it's not conscious
What if your CPU is screaming in agony in and out of consciousness every time a branch prediction happens? Whoaaaa dudeeee.
Fuck off retard.
The goalpost has moved again! Now it's low energy consumption!
Where will it go next?
The fuck does low energy consumption have to do with the analogy that the branch predictor in your CPU is analogous to the predictor happening here? You wouldn't say that the branch predictor has the potential to be conscious because "there's a lot going on."
I think you're legitimately mentally ill. Maybe the anon earlier was right, I should stop arguing with trannies.
>in your CPU is analogous to the predictor happening here?
Happening where, your imagination?
I just found out you are one of those people that truly believes modern AI is a bunch of if-else statements. Very funny, I must say!
Do you know what the branch predictor in your CPU is? It's a Perceptron. A neural network. Talk about the absolute state of not knowing jack shit.
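For reference, the perceptron predictor idea (in the spirit of the Jiménez & Lin design, not how any specific shipping CPU wires it up) really is this small: weight a global history of recent branch outcomes and train when the prediction is wrong or low-confidence. A toy sketch:

```python
# Toy perceptron branch predictor: predict taken / not-taken from a global
# history of recent branch outcomes. A sketch of the idea only.

HISTORY_LEN = 8
THRESHOLD = 15                       # only train when wrong or low-confidence

weights = [0] * (HISTORY_LEN + 1)    # index 0 is the bias weight
history = [1] * HISTORY_LEN          # +1 = taken, -1 = not taken

def predict():
    """Return (prediction, raw sum); prediction is +1 (taken) or -1 (not taken)."""
    y = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
    return (1 if y >= 0 else -1), y

def train(outcome):
    """Update the weights on the actual outcome, then shift it into the history."""
    pred, y = predict()
    if pred != outcome or abs(y) <= THRESHOLD:
        weights[0] += outcome
        for i in range(HISTORY_LEN):
            weights[i + 1] += outcome * history[i]
    history.pop(0)
    history.append(outcome)

# A loop-like branch: taken seven times, then not taken, repeated.
pattern = ([1] * 7 + [-1]) * 50
correct = 0
for outcome in pattern:
    correct += predict()[0] == outcome
    train(outcome)
print(f"accuracy: {correct / len(pattern):.2f}")
```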
>Perceptron. A neural network.
Looks nothing like a transformer, though.
>I-it looks nothing like a transformer!
>attention mechanisms magically imbue approximated functions with consciousness
fucking lol
Do you know what the difference between a mosquito and a human being is?
Do you know what the difference between regression-based function approximators and biological neural networks is?
Who cares, if they are both intelligent?
Or, more likely, you're anthropomorphizing the language model. Because it's gotten so good at prediction of language on the (very small) area of language latent space which conforms to our idea of intelligent behavior. Then you'll explain away confabulations and other misprediction bullshit with "humans do that too!"
Here's a test: a genie comes to you and says you must choose something intelligent to embody for a day, and if you switch "experiences" with something truly intelligent you get three wishes. If you choose something which is not truly intelligent, you will die.
Will you embody the weights of the regression based function approximation trained on the shitposts of the web?
I don't think you truly believe they're intelligent, so you actually wouldn't choose them.
I think you will just keep moving the goalpost forever, no AI system will ever be smart or "conscious" enough for you.
Am I wrong?
Answer the question: would you be the embodiment of a regression based function approximator and receive three wishes, or would you die because it's not actually intelligent?
It's a simple thought experiment, you get three wishes if you're right or you die. I think we both know the answer.
>no AI system will ever be smart or "conscious" enough for you.
All that's here is just the glory and power of statistics. They're useful, that doesn't mean that they're of a "mind."
>I think we both know the answer.
The genie would never be able to determine if you win or lose, because there is no clear cut-off between intelligent and not intelligent.
At what exact point in the tree of life does a jellyfish become a human?
Okay, so let's call general intelligence "hidden variable g".
https://en.wikipedia.org/wiki/G_factor_(psychometrics)
Intelligence tests are simply tests designed to be more highly correlated with g, as they do not actually measure g.
As it turns out, GPT-4 scores highly on these intelligence tests by virtue of being biased towards the small area of latent language space we call intelligent behavior.
The genie says if you switch places with something that has any semblance of g, instead of merely predicting cognitive artifacts of humans with that g factor, you get three wishes.
If you're wrong, you will die.
What do you choose?
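The "correlated with g, not measuring g" point in toy form, with made-up loadings and noise levels: simulate a latent factor, generate a few test scores from it, and note that every test correlates with g without any single test being g:

```python
# Toy latent-variable illustration: simulate a hidden factor g, generate a few
# "test scores" as noisy functions of it, and check that each test correlates
# with g without being g. Loadings and noise levels are made up.
import random

random.seed(0)
N = 10_000
g = [random.gauss(0, 1) for _ in range(N)]

tests = {
    "vocab":    [0.8 * x + random.gauss(0, 0.6) for x in g],
    "matrices": [0.7 * x + random.gauss(0, 0.7) for x in g],
    "digits":   [0.5 * x + random.gauss(0, 0.9) for x in g],
}

def corr(a, b):
    """Plain Pearson correlation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

for name, scores in tests.items():
    # Each test is a correlated proxy for g, not a direct measurement of it.
    print(f"{name:9s} correlation with g: {corr(g, scores):.2f}")
```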
>What do you choose?
I'd probably have a parrot take the IQ test, see what happens.
IQ is just correlated with g, as I said. It's not measuring g. g is the hidden variable.
Would you pick a large language model, or a dolphin?
Are you going to make this "thought experiment" more and more precise every time it doesn't work?
Do you realize you are moving the goalpost again?
The genie says if you don't give an answer, he'll cut your dick off. Just answer the question gay, we both know the answer. It's prediction of human cognitive artifacts.
Yes, we both know you are a bigot. That seems to be the topic of the discussion.
Well I guess the genie threatening to cut your dick off isn't much of a threat since it's your life goal, trannoid.
Feller, I was rooting for you against the confirmationally-biased retard up until the le moralistic dismissal. Bad stuff!
>rooting for the chud in the conversation
hah, your loss
supreme bait
You're arguing with a guy whose AI knowledge comes from youtube videos, don't even bother
the absolute state
>stateless
Unironically, what does this mean in this context?
I've gone on this ramble before, but I have no idea if it's close to any kind of mark. A huge limiting factor of current """AI""" seems to be that it doesn't have any kind of mental state. Like, for AI-generated images, it doesn't know how many limbs a human has, it just has statistical models of how a torso-shape becomes limb-shapes. It gets worse with videos made of a series of AI-generated images, as the """AI""" can't "keep track" of what a character is wearing. It's a flickering mess, with straps increasing in number or disappearing entirely, buckles rapidly shuffling across thick lines, a limb in the foreground from an off-screen body shifting between being a limb and a fuckin' tree branch because the """AI""" doesn't "know" what it is and just tries to assign colors and patterns based on guesses of how whatever is there is "supposed" to look.
With """AI""" chatbots, they similarly don't "know" anything, simply coming up with strings of words via statistical models that try to predict how to sound human. It can be extremely useful, but it will also happily spout complete bullshit because said bullshit is statistically the most human-sounding string based on previous words.
Is that lack of any actual knowledge, lack of anything concrete, what "stateless" refers to? Or is it something completely different and I'm a retard?
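A toy illustration of the "statistical models that try to predict how to sound human" part, hugely simplified relative to a real transformer: count which word follows which in a tiny made-up corpus, then generate by sampling the next word in proportion to those counts. No knowledge anywhere, only co-occurrence statistics:

```python
# Toy bigram "language model": counts of which word follows which in a tiny
# made-up corpus, then text generated by sampling likely next words.
import random
from collections import Counter, defaultdict

random.seed(0)
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count next-word frequencies for every word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=12):
    word, out = start, [start]
    for _ in range(length):
        options = follows[word]
        if not options:
            break
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```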
>All of the AI field recently is literally regression based function approximators.
This sentence itself is retarded on so many levels.
It's not linear, you know these new things called NNs? They are non-linear function approximators.
But most importantly, that's a very effective approximation of how your fucking ape brain works
> Unknown phenomena
> Sample experience from that phenomena
> Build an accurate model of the phenomena
> Use the model for inference and forecast
You big retard
There's also reinforcement learning, going strong for the past 40 years or so. If you are a clueless retard, it doesn't mean things are not working.
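A minimal sketch of what "non-linear parametric function approximator" means in practice, with made-up hyperparameters: a one-hidden-layer network fit by gradient descent to y = x², something a purely linear regressor cannot represent:

```python
# Toy non-linear function approximator: a one-hidden-layer tanh network fit to
# y = x^2 on [-1, 1] by plain stochastic gradient descent. Pure-Python sketch
# with made-up hyperparameters, not a serious trainer.
import math
import random

random.seed(0)
data = [(x / 50.0, (x / 50.0) ** 2) for x in range(-50, 51)]

H = 8                                           # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
b2 = 0.0
lr = 0.05

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    y = sum(w2[i] * h[i] for i in range(H)) + b2
    return h, y

for epoch in range(2000):
    random.shuffle(data)
    for x, target in data:
        h, y = forward(x)
        err = y - target
        # Backpropagate through the single hidden layer.
        for i in range(H):
            grad_h = err * w2[i] * (1 - h[i] ** 2)   # d tanh = 1 - tanh^2
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * grad_h * x
            b1[i] -= lr * grad_h
        b2 -= lr * err

mse = sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)
print(f"mean squared error after training: {mse:.4f}")
```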
A living meme you are.
Back in the tard jail, tard
Who's gonna make me? You and another 50 worthless layers of convolutions?
All neural networks used in ChatGPT and that have been behind the big advancements within the last few years are regression based function approximators. You don't know what you're talking about. "Neural networks" are not "neural networks", they just look like them as a graph.
>All neural networks used in ChatGPT and that have been behind the big advancements within the last few years are regression based function approximators.
But they are not **linear** regression you mouth breathing mongoloid.
neural networks are **non-linear** parametric function approximators
And it's not only that, there are tons of ML algorithms that are not parametric estimation (e.g. soft actor-critic or gradient-based RL in general)
> You don't know what you're talking about. "Neural networks" are not "neural networks", they just look like them as a graph.
We call that neural networks, you absolute cretin
*We call that kind of model neural networks you absolute cretin
It's not only because of the shape; sigmoidal activation is typical of some human neurons, like the neurons in the eye.
No one said anything about LINEAR regression specifically except for you. It's simply regression-based. It's regression-based function approximation. It's not a neural network, it's a function approximation, just like machine learning is just regression for all intents and purposes in the context of these approximators after AlexNet.
Also no one fucking uses anything else except for academics jerking themselves off. Feel free to name any substantial results from anything other than regression based function approximators within the last 5 years that people are calling conscious and other stupid shit like that.
It's not all regressors, dumbass.
There are density estimation models, like particle filters, that are not function regressors.
So are, e.g., correlation-based model identification and subspace identification.
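A minimal bootstrap particle filter sketch to make the "density estimator, not function regressor" point concrete, with illustrative numbers only: track a hidden 1-D random walk from noisy observations by predicting, weighting, and resampling a particle cloud:

```python
# Minimal bootstrap particle filter: estimate a hidden 1-D random walk from
# noisy observations by maintaining a particle cloud over the state.
# Illustrative numbers only.
import math
import random

random.seed(0)
N = 1000            # particles
PROCESS_STD = 0.3   # random-walk process noise
OBS_STD = 0.5       # observation noise

def likelihood(obs, particle):
    """Unnormalised Gaussian likelihood of the observation given a particle."""
    return math.exp(-0.5 * ((obs - particle) / OBS_STD) ** 2)

# Simulate a hidden state and noisy observations of it.
truth, observations, state = [], [], 0.0
for _ in range(50):
    state += random.gauss(0, PROCESS_STD)
    truth.append(state)
    observations.append(state + random.gauss(0, OBS_STD))

particles = [random.gauss(0, 1) for _ in range(N)]
for obs in observations:
    # Predict: push every particle through the process model.
    particles = [p + random.gauss(0, PROCESS_STD) for p in particles]
    # Update: weight each particle by how well it explains the observation.
    weights = [likelihood(obs, p) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: redraw the cloud in proportion to the weights.
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N
print(f"true final state {truth[-1]:+.2f}, filter estimate {estimate:+.2f}")
```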
So go ahead and name any results that people are ascribing consciousness to. You can't.
It helps no one to be reductive
>dumbass doesn't know what the term "general intelligence" means
low G factor
Like in heckin Marvelerino, Jarvan 'n shiet.
"AGI" as a term defines a human-capable machine intelligence.
It might differ from a human the way a dolphin differs from a chimp while still being similar in intelligence, but it would still strictly be indistinguishable from a self-aware sapience.
99% of humans "never attain human level consciousness"
https://voyager.minedojo.org/
why is human consciousness even considered part of the conversation? consciousness is the act of existence observing itself, it literally just means knowledge originally. "with knowledge" or "with consciousness". a concept of understanding the nature of being, it has fucking nothing to do with brain processing functions. so, yeah, as much as you can try, unless AI can somehow naturally and offline become aware of itself, no, it cannot mimic consciousness. simply an aspect of being born biologically from the immaterial energy in 4d space. a tree observing itself through its leaves so to speak.
>inb4 "thats gay hippy nonsense"
>inb4 "but im le atheist and-"
no, these are concepts that predate any existing science or philosophy on earth. as old as humanity and existence itself
>LoA of an AI
this is exactly how i feel.
either describe it mathematically,
or forget it and move on. lol. at least in the case of AI.
they fill your mind with fear so you won't question the regulations they impose upon you while they abstain from following said regulations.
Still a fat gay
Oh, look. Another thread full of people who will never be women.
Yudkowsky warned you and you didn't listen
Hinton warned us too. And Hawking. And a bunch of other people...
ooohhhhh nooo the AI is playing minecraft god save us all!!
today it's minecraft, tomorrow it's fucking your wife
nooooooo not my imaginary wife im going to kill myself!!!!!!!!
maybe get an AI wife then
aaaaaaaaahhhhh im killing myselffffff ack aaaaaaaaaccccccckkkkkkkk- what's this?!? Le-boiModel6900?? oh youve saved me my cute AI dickywifeeeee aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
AI can't even code, sorry, hello world apps are not coding
and it already hit hardware/data limits
It's a one-trick pony specialized AI. And it breaks the second it deviates from its training. Also I'm pretty sure I saw a paper for this 6+ months ago and nothing changed.
AGI = anything a human can do it can do, & anything it can do a human can do given enough time, it is equal to a human
ASI (super intelligence) = it can do things that humans could never do even with infinite time (think how humans can feel/process emotions but ants cannot no matter how much time), the ASI will have cognitive functions that human brains aren't wired to be able to "feel"
An AGI would be able to feel emotions and learn like a human baby does (except it never loses its neural plasticity)
Given emotions are bio-chemical and perhaps come from the soul, unlikely. At best AGI will be like a sociopath that logically understands emotions from training and thus understands how to pretend to have emotions to manipulate humans.
Damn it can play Minecraft now
What
There is precisely nothing new in the slide you show.
Don't worry, just learn how to draw/play music/write stories. AI can do math, but they can't be creative.
-t someone who just woke up from a 2-year coma
wake me up when it's 99 smithing.
The fact that the empirical data from AI development is triggering random redditors more than the spiritual schizos will never not be funny.
LLMs will have roughly the importance of the laser.
LLMs are already outdated. Deepmind is gonna come out with a better architecture any day now.
2 more weeks!
lol, DeepMind-backed PaLM can't answer jack shit
Google is an embarrassment
Not very general.
Fragile bullshit more like.
Deboonked https://www.youtube.com/watch?v=aX1QSW_rVpI
bruh