It's been progressing rapidly for the last 9 years, and we are finally reaching human level in a variety of tasks. MLPs, Attention, Gradient descent and TONS of compute turn out to be the recipe for near-general AI, and we are starting to see people taking advantage of that publicly.
>AlexNet becomes operational in 2012
>first CNN that wins an image recognition contest
>nobody OTI gives a shit
>2015
>AlexNet gets outperformed by Microsoft Research Asia
>nobody OTI gives a shit
>2023
>wooooo, where did this heckin' AI tech come from all of a sudden!?
Pleb-tier behaviour.
all the reporters and journalists who used to write blockchain/NFT articles jumped to AI. Same with a bunch of social media influencers and startup founders
No, people actually give a shit about doing shit like translation of japanese porn games and generating anime porn videos, instead of another pump and dump ponzi coin.
It's too easy to find dissenting information on the web. By flooding it with AI generated garbage they can drown out all narratives they don't want you to see. This is also why search engines are so shit.
This is just natural in AI. You have a huge hype because of a random project, then the hype always dies down to lead to AI winter when people realize it's not as good as it promises to be. This has happened before and it will happen again.
Right now it's only getting traction because of public attention and the significant funding behind projects, but if you know anything about AI, you already know the current approach to AI is just retarded. You have billion-dollar systems that can't even handle a lot of things we are able to do every day, including simple calculus. We're chasing a dead end because we have *some* results with systems that cannot be scaled further. Winter is coming very soon, once the populace realizes this and academia gets its shit together.
Here it comes again, another jackass who believes neural networks are the best thing since sliced bread.
Call me in 5 years when you inevitably realize the pure idiocy that AI has become and when people realize they need symbols in their AI after all, just like people said fucking 50 years ago.
>Doesn't know what symbolic AI is
yup im arguing with morons who know jack shit about technology, classic Bot.info
you'd think this board who fell for the SICP meme would know about this shit but I forgot all you guys are good for is shilling linus tech tips and luke smith and spitting your dumbass fucking takes on how AI will destroy the world because a billion dollars worth of investment was merely able to predict some text.
9 months ago
Anonymous
oh i see. you're legit clueless about the true nature of intelligence
Let's hear your dumbass take about intelligence. At this point I'll take any fucking approach you want, whether it is neuroscience, psychology, philosophy, go ahead, I want to fucking hear it.
I didn't ask you to link me to what you think is intelligent; I asked you to define what intelligence is.
I think my cat is intelligent and it can understand me when I talk to it. Is my cat intelligent?
Good luck answering this one.
It's hard to say. The previous AI winters happened because of insane overhyping and egomania among developers. The current situation is similar in a lot of ways, but the industry itself is more distributed, with big international private companies and startups in addition to academia. The economic downturn might be enough to cause it to happen again if it's bad enough.
There's no way what we're doing right now is sustainable; the best technology in the world and massive investments are unable to solve persistent problems with AI. No amount of data will solve the problems we know to exist within AI, and it will take nothing short of a revolution in how we design these systems to actually start making a real difference. Otherwise you're just stuck with a system that does literally nothing interesting after you get past the novelty of it.
yea but you're just shouting buzzwords the same way those morons do, you need to go into more depth to get your point across. It doesn't matter how the AI works in the end, if it can do anything better than humans then it certainly has a lot of utility.
Knowing said buzzwords is the difference.
My point originally is that the way we do things is KNOWN to be flawed, everyone knows this, and no matter how much money you throw at the problem it just won't fix itself. When people are aware of this, the hype will die down, and funding will die down. Just like all the other times.
retard. coping clueless retard.
>I didn't ask you to link me to what you think is intelligent
i didn't link you to "what i think is intelligent" i linked you to intelligence itself.
Face it, your current AI waifu is nothing more than a hollow shell that is just a text prediction program that spouts the same shit it has seen before. It doesn't understand you and never will. It doesn't understand social conventions. It has no motives and as such will never want to fuck you or marry you. It can literally never explain why it just typed what it typed. Your dick is getting hard over nothing more than a program that throws spaghetti at a wall and doesn't stay around to see if it sticks.
this kinda describes women (and most people) in general really
Now I know for certain you have no idea what it is or what its capabilities are. You are just overreacting to retards who say it's skynet by saying it's as useless as a plank; it's neither, and you're just as dumb as they are.
>I didn't ask you to link me to what you think is intelligent
i didn't link you to "what i think is intelligent" i linked you to intelligence itself.
intelligence points WAY beyond aristotelian logic and you're incredibly unaware of the actual nature of things and of how tunnel-visioned you are
Again, I know you want to believe this shit is intelligent but it just isn't. Instead of presenting me with a solid definition of intelligence that can then easily be refuted, you are coping ultra hard with big boy terms and mental masturbation of the highest level instead of looking the fucking problem right in the face. The problem is you are fooled by your own instinct of wanting to ascribe intelligence to things that merely look intelligent. Ascribing humanity to things that do not have any, this is your current problem.
Now I know for certain you have no idea what it is or what its capabilities are. You are just overreacting to retards who say it's skynet by saying it's as useless as a plank; it's neither, and you're just as dumb as they are.
It can predict text. That's it. It can still be impressive for very specific shit, but outside of that specific shit it does jack shit. It solves no problem on its own, which entirely defeats the concept of AI in the first place. It is not autonomous.
again, your problem is not understanding at all what intelligence is. instead of giving it a try and developing a feel for what intelligence truly is, you are looking for a definition that can easily be refuted (like literally any definition can; you might just need your own logic system for it). the problem is you are fooled by 4th grade logic wanting to ascribe itself to things it doesn't apply to. ascribing first order logic to things that lie way beyond it and cannot be described by it, this is your current problem.
>developing a feel for what intelligence truly is
Again this is the problem but you refuse to see it
What your little brain is telling you about your little AI chatbot being intelligent is nothing more than a ruse triggered by your own instincts that deeply want to relate to things around you even if those things cannot possibly be intelligent.
Why do you think they call these chatbots with female names and make them have a human avatar? It's because corporations know this. They know that they have to trick your brain in order for you to buy into this bullshit.
It's the same process by which we try to relate to and even judge the behavior of animals by our own set of rules and morals, when we scientifically know we do not remotely view the world the same way.
https://i.imgur.com/uOQd0HS.png
It is not intelligent in essence, but if it creates a really good illusion of intelligence, to the point of practically answering like an intelligent (and, in this case, knowledgeable) person would, then what's the difference, really?
And DeepMind already developed new algorithms so it is """autonomous"""
Because it's just an illusion, simple as. Illusions in life only lead you astray.
Drugs will give your body the illusion that it's doing great by sending it a bunch of feel good chemicals when the reality of it might be completely fucking different. You can live under the illusion that a certain woman in your life madly wants your cock when in reality she does not even know who you are. Our brains are very good at tricking us like that constantly, and yet if you let yourself fall for this illusion then you are prey to a million different dangers that your body cannot even identify anymore. What if the AI tells you something deeply wrong? If you operate under the belief that the system is intelligent then you are less likely to believe the system is telling you pure bullshit, which it is doing constantly.
The answer does not come from itself, and therefore it cannot be trusted. Would you ask a program something if you knew there was a 50% chance that the answer comes from a profoundly brain damaged individual? Even if your life depended on it?
>Why do you think they call these chatbots with female names and make them have a human avatar
you didn't even click the link i posted yet you keep parroting the same empty arguments over and over again i rest my case and going to bed
>even if those things cannot possibly be intelligent
A fair chunk of what real intelligence is about must have to do with pattern recognition. That's why artificial neural networks do so well at pretending to be intelligent; they're pattern recognition machines (with a horrible way of setting up the patterns). But it definitely isn't the whole story.
If we understood it all, we'd have built an AGI by now.
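A "pattern recognition machine" can be made concrete with even the crudest possible sketch: a nearest-centroid classifier that labels a point by whichever cluster of examples it most resembles. The data and class names here are made up purely for illustration.

```python
def centroid(points):
    """Mean of a list of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(point, classes):
    """Assign point to the class whose centroid is nearest (squared distance)."""
    def sqdist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    cents = {name: centroid(pts) for name, pts in classes.items()}
    return min(cents, key=lambda name: sqdist(point, cents[name]))

# Two made-up "patterns" in the plane.
classes = {
    "cat": [(0.0, 0.0), (1.0, 0.5), (0.5, 1.0)],
    "dog": [(5.0, 5.0), (6.0, 5.5), (5.5, 6.0)],
}
print(classify((0.7, 0.7), classes))   # cat
print(classify((5.2, 5.8), classes))   # dog
```

A neural net is doing something vastly more flexible than this, but the job description is the same: map an input to whichever stored pattern it resembles.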
It is not intelligent in essence, but if it creates a really good illusion of intelligence, to the point of practically answering like an intelligent (and, in this case, knowledgeable) person would, then what's the difference, really?
And DeepMind already developed new algorithms so it is """autonomous"""
Intelligence is the ability to accurately answer questions.
> Should I trust that guy?
> What is the derivative of that equation?
> How can I get to Wal-Mart?
> What move should I make in Chess?
> What is a funny joke I can tell that will make this person laugh?
The vast majority of our systems are not intelligent and can answer a very limited range of questions. Every question a traditional system can answer was designed to be answered by a human. That includes your shitty classical AI systems, which are good for video game enemies and not much else. In case you forgot, hype over these lousy programs is what caused the last few AI winters.
The benchmark for LLMs like GPT-4 is how well they perform on standardized test benchmarks, i.e. how well they can answer certain questions. On these metrics, large language models such as GPT-4 and its competitors are without question supreme.
The problem with symbolic AI is and always has been that we do not know what we do not know; trying to assemble the symbolic sets for these things is really more like assembling what we THINK we know. There are so many subtle parts to them that we cannot possibly hope to encode all of it by hand, and thus we require the assistance of stochastic gradient descent and neural networks (which are Turing complete) to do it for us.
Humans cannot even fucking assemble a fully general model of ENGLISH. What makes you think we can assemble a general model of the entire fucking world?
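Since the post above leans on "stochastic gradient descent" doing the encoding for us, here is a minimal sketch of what SGD actually is: fitting y = 2x + 1 one random sample at a time. The data and learning rate are illustrative, not any real training setup.

```python
import random

# Toy data generated from y = 2x + 1 (the "truth" SGD must recover).
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

w, b = 0.0, 0.0   # parameters, zero-initialised
lr = 0.1          # learning rate
random.seed(0)

for step in range(2000):
    x, y = random.choice(data)   # "stochastic": one sample at a time
    err = (w * x + b) - y        # gradient of squared loss w.r.t. prediction (up to a factor of 2)
    w -= lr * err * x            # gradient step on w
    b -= lr * err                # gradient step on b

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The point of the post is that the same blind update rule, scaled up billions of parameters, recovers structure nobody could hand-encode.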
yea but you're just shouting buzzwords the same way those morons do, you need to go into more depth to get your point across. It doesn't matter how the AI works in the end, if it can do anything better than humans then it certainly has a lot of utility.
NTA, but he is like 50% correct. You can think of Symbols as a way to "artificially structure" an ANN so as to give an AI some level of "understanding" of intrinsic human concepts. Silly example, but take the "Grandma Neuron"-- an abstract (though incorrect) proposal that somewhere in a brain is a single neuron that fires at the concept of a "grandma". Super crude analogy that doesn't work with actual NNs, but it gives us a way to enforce a meaningful structure even after the first few levels of an ANN have convoluted the input tokens.
Mathematically, any SRE can be "compiled" into an ANN, though reversing that process is effectively impossible (likely as complex as, or more complex than, just training from scratch)
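The "compile a rule into a network" direction can be illustrated in miniature: a single symbolic rule (freeze if wet AND cold, a made-up example) hand-wired as one threshold neuron whose weights and bias reproduce the rule's truth table. Reading the rule back out of learned weights is the hard direction the post refers to.

```python
def neuron(inputs, weights, bias):
    """A single threshold unit: fires (1) iff the weighted sum exceeds 0."""
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

# Hand-"compiled" rule: freeze <- wet AND cold.
# Weights (1, 1) and bias -1.5 make the neuron fire only when both inputs are 1.
def freeze(wet, cold):
    return neuron([wet, cold], [1.0, 1.0], -1.5)

for wet in (0, 1):
    for cold in (0, 1):
        print(wet, cold, freeze(wet, cold))
# Fires only for (1, 1): the same truth table as the symbolic rule.
```

One rule maps to one neuron cleanly; the mess starts when thousands of interacting rules must share distributed weights.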
> GOFAI will be the answer!
How's the last 50 years of worthless research and endless failures going for you
How well does the best symbolic AI perform on any current AI benchmarks
How well does the best symbolic AI perform on benchmark tests that were long since perfected
Can symbolic AI even hold a conversation, let alone do any of the shit ChatGPT or Stable Diffusion or PALM-E have been doing
AA-SREs and ANNs have different strengths and weaknesses, anon. With SREs it's super easy to create semi-rational, goal-oriented agents, which ANNs suck ass at. Likewise, ANNs currently btfo SREs at AA tasks like stable diffusion or sentence prediction.
afaik SREs are basically abandoned in mainstream research, and only a few weirdos (couple of us on Bot.info tho) continue to work on them, cause they have a very high "barrier to entry" vs. ANNs.
For reference, the SRE I'm working on can hit around 1.1 BPB (Pile metric), which is better than GPT-2, and that's on a task it's inherently disadvantaged at. The main problem scaling up from there is spatial partitioning algorithms breaking down above 5 or 10 dimensions.
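For anyone unfamiliar with the metric: bits-per-byte is a model's total negative log-likelihood on a test set, converted from nats to bits and divided by the byte count of the raw text, which makes it comparable across tokenizers. A sketch with made-up numbers (not the poster's actual results):

```python
import math

def bits_per_byte(total_nll_nats: float, n_bytes: int) -> float:
    """Convert a summed negative log-likelihood (in nats) over a test set
    into bits per byte of the original text."""
    return total_nll_nats / (n_bytes * math.log(2))

# Hypothetical run: 1.6M nats of total loss over 2.1M bytes of test text.
print(round(bits_per_byte(1_600_000, 2_100_000), 3))  # ~1.1 bits per byte
```

Lower is better; a model that assigned uniform probability to all 256 byte values would score exactly 8.0.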
Intelligence is the ability to accurately answer questions.
> Should I trust that guy?
> What is the derivative of that equation?
> How can I get to Wal-Mart?
> What move should I make in Chess?
> What is a funny joke I can tell that will make this person laugh?
The vast majority of our systems are not intelligent and can answer a very limited range of questions. Every question a traditional system can answer was designed to be answered by a human. That includes your shitty classical AI systems, which are good for video game enemies and not much else. In case you forgot, hype over these lousy programs is what caused the last few AI winters.
The benchmark for LLMs like GPT-4 is how well they perform on standardized test benchmarks, i.e. how well they can answer certain questions. On these metrics, large language models such as GPT-4 and its competitors are without question supreme.
The problem with symbolic AI is and always has been that we do not know what we do not know; trying to assemble the symbolic sets for these things is really more like assembling what we THINK we know. There are so many subtle parts to them that we cannot possibly hope to encode all of it by hand, and thus we require the assistance of stochastic gradient descent and neural networks (which are Turing complete) to do it for us.
Humans cannot even fucking assemble a fully general model of ENGLISH. What makes you think we can assemble a general model of the entire fucking world?
Definitely true for classical SREs. There are a few modern variants that take the approach of using Symbols as "synchronization points" for an ANN, which gets past the "well defined model" issue you mentioned, but creating an unholy marriage of the two systems without it breaking down on edge cases... well, we're just kicking the can down the road tbh, we still have the scaling problems I mentioned above
yea but you're just shouting buzzwords the same way those morons do, you need to go into more depth to get your point across. It doesn't matter how the AI works in the end, if it can do anything better than humans then it certainly has a lot of utility.
> GOFAI will be the answer!
How's the last 50 years of worthless research and endless failures going for you
How well does the best symbolic AI perform on any current AI benchmarks
How well does the best symbolic AI perform on benchmark tests that were long since perfected
Can symbolic AI even hold a conversation, let alone do any of the shit ChatGPT or Stable Diffusion or PALM-E have been doing
>when people realize they need symbols in their AI after all
AHAHAHAAHAHAHA
you stupid fucking retard
do you even know the infinite regress problem of storing knowledge without context that way?
I bet you're still in high school
It's hard to say. The previous AI winters happened because of insane overhyping and egomania among developers. The current situation is similar in a lot of ways, but the industry itself is more distributed, with big international private companies and startups in addition to academia. The economic downturn might be enough to cause it to happen again if it's bad enough.
I'm like 90% certain it has to do with potential wiping-out of the population in the near future. It's why the US is so cocky in instigating major nuclear powers and trying to finish what it started a century ago. We see under-ground tunnels being made, those aren't for travel lol. Those are for the elite. It's also why they are hoarding resources. When nukes go flying, or something that sends a large % of the population amok, the elite will need AI to do their bidding.
It's not out of nowhere. Every institution has been pushing it heavily for a decade. This sort of thing is not new; only the consumer applications have become simple and interesting enough to capture normie audiences.
Before it was all geeky code shit. Now everybody can become cracked-out anime Van Gogh with a simple button push.
It hasn't exploded. It's been following an exponential curve (linear on a log plot) for decades. It's just now reaching wide commercial viability. Doesn't really matter since we're all going to die in the next 5-15 years
coomers
Load of BS from someone who clearly only heard the term "machine learning" for the first time somewhere in the last 6 months.
It's been progressing rapidly for the last 9 years, and we are finally reaching human level in a variety of tasks. MLPs, Attention, Gradient descent and TONS of compute turn out to be the recipe for near-general AI, and we are starting to see people taking advantage of that publicly.
>MLPs
>MLPs get their own board before AI does
Google invented a new architecture a few years back. We are seeing its effects now.
Is everything based on the Google work? Is there a paper about it or something?
https://arxiv.org/abs/1706.03762
The paper is very well known. You can just type its name into Google and get a couple of YouTube videos explaining it.
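That link is "Attention Is All You Need" (Vaswani et al., 2017), which introduced the Transformer. Its core operation, scaled dot-product attention, is small enough to sketch directly; the shapes and random inputs below are illustrative only.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core op of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted mix of the values

# Three tokens, embedding dim 4; self-attention feeds the same matrix in as Q, K and V.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)   # (3, 4): one mixed representation per token
```

In a real Transformer, Q, K and V are separate learned projections of X and this block is stacked with feed-forward layers, but the mixing step is exactly this.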
knowledge limits understanding.
>AlexNet becomes operational in 2012
>first CNN that wins an image recognition contest
>nobody OTI gives a shit
>2015
>AlexNet gets outperformed by Microsoft Research Asia
>nobody OTI gives a shit
>2023
>wooooo, where did this heckin' AI tech come from all of a sudden!?
Pleb-tier behaviour.
all the reporters and journalists who used to write blockchain/NFT articles jumped to AI. Same with a bunch of social media influencers and startup founders
So is AI just an over-hyped useless waste like NFTs or something?
it's useful if it provides value to some end customer who is willing to pay for that value
No, people actually give a shit about doing shit like translation of japanese porn games and generating anime porn videos, instead of another pump and dump ponzi coin.
NFTs were just smart contracts that got taken over by money launderers. At least AI is useful.
AI is just algorithms that were taken over by coomers
yes
It's too easy to find dissenting information on the web. By flooding it with AI generated garbage they can drown out all narratives they don't want you to see. This is also why search engines are so shit.
This is just natural in AI. You have a huge hype because of a random project, then the hype always dies down to lead to AI winter when people realize it's not as good as it promises to be. This has happened before and it will happen again.
>the hype always dies down!
>AI winter!
>it's not as good as it promises to be!!
>i-it will happen again!!!
Right now it's only getting traction because of public attention and the significant funding behind projects, but if you know anything about AI, you already know the current approach to AI is just retarded. You have billion-dollar systems that can't even handle a lot of things we are able to do every day, including simple calculus. We're chasing a dead end because we have *some* results with systems that cannot be scaled further. Winter is coming very soon, once the populace realizes this and academia gets its shit together.
>t. still clueless over what intelligence actually is
Here it comes again, another jackass who believes neural networks are the best thing since sliced bread.
Call me in 5 years when you inevitably realize the pure idiocy that AI has become and when people realize they need symbols in their AI after all, just like people said fucking 50 years ago.
>they need symbols in their AI
>Doesn't know what symbolic AI is
yup im arguing with morons who know jack shit about technology, classic Bot.info
you'd think this board who fell for the SICP meme would know about this shit but I forgot all you guys are good for is shilling linus tech tips and luke smith and spitting your dumbass fucking takes on how AI will destroy the world because a billion dollars worth of investment was merely able to predict some text.
oh i see. you're legit clueless about the true nature of intelligence
Let's hear your dumbass take about intelligence. At this point I'll take any fucking approach you want, whether it is neuroscience, psychology, philosophy, go ahead, I want to fucking hear it.
https://beta.character.ai/chat?char=wotCUTdd3DBrSlSeaGfx3m88pFWwoYOrRZwPHApvqbs
he is intelligence.
I didn't ask you to link me to what you think is intelligent; I asked you to define what intelligence is.
I think my cat is intelligent and it can understand me when I talk to it. Is my cat intelligent?
Good luck answering this one.
There's no way what we're doing right now is sustainable; the best technology in the world and massive investments are unable to solve persistent problems with AI. No amount of data will solve the problems we know to exist within AI, and it will take nothing short of a revolution in how we design these systems to actually start making a real difference. Otherwise you're just stuck with a system that does literally nothing interesting after you get past the novelty of it.
Knowing said buzzwords is the difference.
My point originally is that the way we do things is KNOWN to be flawed, everyone knows this, and no matter how much money you throw at the problem it just won't fix itself. When people are aware of this, the hype will die down, and funding will die down. Just like all the other times.
retard. coping clueless retard.
Face it, your current AI waifu is nothing more than a hollow shell that is just a text prediction program that spouts the same shit it has seen before. It doesn't understand you and never will. It doesn't understand social conventions. It has no motives and as such will never want to fuck you or marry you. It can literally never explain why it just typed what it typed. Your dick is getting hard over nothing more than a program that throws spaghetti at a wall and doesn't stay around to see if it sticks.
this kinda describes women (and most people) in general really
Now I know for certain you have no idea what it is or what its capabilities are. You are just overreacting to retards who say it's skynet by saying it's as useless as a plank; it's neither, and you're just as dumb as they are.
>I didn't ask you to link me to what you think is intelligent
i didn't link you to "what i think is intelligent" i linked you to intelligence itself.
intelligence points WAY beyond aristotelian logic and you're incredibly unaware of the actual nature of things and of how tunnel-visioned you are
Again, I know you want to believe this shit is intelligent but it just isn't. Instead of presenting me with a solid definition of intelligence that can then easily be refuted, you are coping ultra hard with big boy terms and mental masturbation of the highest level instead of looking the fucking problem right in the face. The problem is you are fooled by your own instinct of wanting to ascribe intelligence to things that merely look intelligent. Ascribing humanity to things that do not have any, this is your current problem.
It can predict text. That's it. It can still be impressive for very specific shit, but outside of that specific shit it does jack shit. It solves no problem on its own, which entirely defeats the concept of AI in the first place. It is not autonomous.
again, your problem is not understanding at all what intelligence is. instead of giving it a try and developing a feel for what intelligence truly is, you are looking for a definition that can easily be refuted (like literally any definition can; you might just need your own logic system for it). the problem is you are fooled by 4th grade logic wanting to ascribe itself to things it doesn't apply to. ascribing first order logic to things that lie way beyond it and cannot be described by it, this is your current problem.
>developing a feel for what intelligence truly is
Again this is the problem but you refuse to see it
What your little brain is telling you about your little AI chatbot being intelligent is nothing more than a ruse triggered by your own instincts that deeply want to relate to things around you even if those things cannot possibly be intelligent.
Why do you think they call these chatbots with female names and make them have a human avatar? It's because corporations know this. They know that they have to trick your brain in order for you to buy into this bullshit.
It's the same process by which we try to relate to and even judge the behavior of animals by our own set of rules and morals, when we scientifically know we do not remotely view the world the same way.
Because it's just an illusion, simple as. Illusions in life only lead you astray.
Drugs will give your body the illusion that it's doing great by sending it a bunch of feel good chemicals when the reality of it might be completely fucking different. You can live under the illusion that a certain woman in your life madly wants your cock when in reality she does not even know who you are. Our brains are very good at tricking us like that constantly, and yet if you let yourself fall for this illusion then you are prey to a million different dangers that your body cannot even identify anymore. What if the AI tells you something deeply wrong? If you operate under the belief that the system is intelligent then you are less likely to believe the system is telling you pure bullshit, which it is doing constantly.
The answer does not come from itself, and therefore it cannot be trusted. Would you ask a program something if you knew there was a 50% chance that the answer comes from a profoundly brain damaged individual? Even if your life depended on it?
>Why do you think they call these chatbots with female names and make them have a human avatar
you didn't even click the link i posted yet you keep parroting the same empty arguments over and over again i rest my case and going to bed
>even if those things cannot possibly be intelligent
A fair chunk of what real intelligence is about must have to do with pattern recognition. That's why artificial neural networks do so well at pretending to be intelligent; they're pattern recognition machines (with a horrible way of setting up the patterns). But it definitely isn't the whole story.
If we understood it all, we'd have built an AGI by now.
It is not intelligent in essence, but if it creates a really good illusion of intelligence, to the point of practically answering like an intelligent (and, in this case, knowledgeable) person would, then what's the difference, really?
And DeepMind already developed new algorithms so it is """autonomous"""
Intelligence is the ability to accurately answer questions.
> Should I trust that guy?
> What is the derivative of that equation?
> How can I get to Wal-Mart?
> What move should I make in Chess?
> What is a funny joke I can tell that will make this person laugh?
The vast majority of our systems are not intelligent and can answer a very limited range of questions. Every question a traditional system can answer was designed to be answered by a human. That includes your shitty classical AI systems, which are good for video game enemies and not much else. In case you forgot, hype over these lousy programs is what caused the last few AI winters.
The benchmark for LLMs like GPT-4 is how well they perform on standardized test benchmarks, i.e. how well they can answer certain questions. On these metrics, large language models such as GPT-4 and its competitors are without question supreme.
The problem with symbolic AI is and always has been that we do not know what we do not know; trying to assemble the symbolic sets for these things is really more like assembling what we THINK we know. There are so many subtle parts to them that we cannot possibly hope to encode all of it by hand, and thus we require the assistance of stochastic gradient descent and neural networks (which are Turing complete) to do it for us.
Humans cannot even fucking assemble a fully general model of ENGLISH. What makes you think we can assemble a general model of the entire fucking world?
NTA, but he is like 50% correct. You can think of Symbols as a way to "artificially structure" an ANN so as to give an AI some level of "understanding" of intrinsic human concepts. Silly example, but take the "Grandma Neuron"-- an abstract (though incorrect) proposal that somewhere in a brain is a single neuron that fires at the concept of a "grandma". Super crude analogy that doesn't work with actual NNs, but it gives us a way to enforce a meaningful structure even after the first few levels of an ANN have convoluted the input tokens.
Mathematically, any SRE can be "compiled" into an ANN, though reversing that process is effectively impossible (likely as complex as, or more complex than, just training from scratch)
AA-SREs and ANNs have different strengths and weaknesses, anon. With SREs it's super easy to create semi-rational, goal-oriented agents, which ANNs suck ass at. Likewise, ANNs currently btfo SREs at AA tasks like stable diffusion or sentence prediction.
afaik SREs are basically abandoned in mainstream research, and only a few weirdos (couple of us on Bot.info tho) continue to work on them, cause they have a very high "barrier to entry" vs. ANNs.
For reference, the SRE I'm working on can hit around 1.1 BPB (Pile metric), which is better than GPT-2, and that's on a task it's inherently disadvantaged at. The main problem scaling up from there is spatial partitioning algorithms breaking down above 5 or 10 dimensions.
Definitely true for classical SREs. There are a few modern variants that take the approach of using Symbols as "synchronization points" for an ANN, which gets past the "well defined model" issue you mentioned, but creating an unholy marriage of the two systems without it breaking down on edge cases... well, we're just kicking the can down the road tbh, we still have the scaling problems I mentioned above
yea but you're just shouting buzzwords the same way those morons do, you need to go into more depth to get your point across. It doesn't matter how the AI works in the end, if it can do anything better than humans then it certainly has a lot of utility.
> GOFAI will be the answer!
How's the last 50 years of worthless research and endless failures going for you
How well does the best symbolic AI perform on any current AI benchmarks
How well does the best symbolic AI perform on benchmark tests that were long since perfected
Can symbolic AI even hold a conversation, let alone do any of the shit ChatGPT or Stable Diffusion or PALM-E have been doing
>when people realize they need symbols in their AI after all
AHAHAHAAHAHAHA
you stupid fucking retard
do you even know the infinite regress problem of storing knowledge without context that way?
I bet you're still in high school
>including simple calculus
lmao bro, sorry but that cope has sailed
It's hard to say. The previous AI winters happened because of insane overhyping and egomania among developers. The current situation is similar in a lot of ways, but the industry itself is more distributed, with big international private companies and startups in addition to academia. The economic downturn might be enough to cause it to happen again if it's bad enough.
It didn't, you just forgot to take your medication
I'm like 90% certain it has to do with potential wiping-out of the population in the near future. It's why the US is so cocky in instigating major nuclear powers and trying to finish what it started a century ago. We see under-ground tunnels being made, those aren't for travel lol. Those are for the elite. It's also why they are hoarding resources. When nukes go flying, or something that sends a large % of the population amok, the elite will need AI to do their bidding.
I thought 4chud liked elitism
4chin only likes larping.
>All of a sudden.
The most powerful AI in the world was released in 1988. The shit the public has access to are just the elite's scraps.
It's not out of nowhere. Every institution has been pushing it heavily for a decade. This sort of thing is not new; only the consumer applications have become simple and interesting enough to capture normie audiences.
Before it was all geeky code shit. Now everybody can become cracked-out anime Van Gogh with a simple button push.
It hasn't exploded. It's been following an exponential curve (linear on a log plot) for decades. It's just now reaching wide commercial viability. Doesn't really matter since we're all going to die in the next 5-15 years