define AI
define sentience
Nobody knows, but there's no reason to believe so.
>inb4 popsoi corporate propaganda
They better
>sentient AI
probably not.
>human-level AI
it's a multi-billion dollar industry. AGI is being researched by all large tech companies and the majority of first-world nations. It's attracted many times more investment in time, money, and talent than the Manhattan Project. Either all this work will result in nothing, or an unaligned/misaligned AGI will undergo a hard takeoff and accidentally destroy humanity. Read Bostrom for the details. In a nutshell, only the greediest and least safety-conscious team gets to make the first AGI - think American tech company or Chinese military. The ones who care enough to tackle the control problem (German military, Swiss scientists) will be last, as in, dead for 2,000 years before they would ever make an AGI.
Hard takeoffs aren't physically possible. Bostrom isn't a physicist or even a mathematician/computer scientist, so I don't know why you'd think he's an authority on any of this.
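The whole hard-takeoff disagreement boils down to one assumption, returns on intelligence, not physics. Here's a toy Euler sketch of dC/dt = C^k, where C is capability and k is how strongly capability feeds back into self-improvement (the model, the constants, and the cap are pure illustration, nobody's actual forecast):

def simulate(k, steps=50, dt=0.1, c0=1.0, cap=1e12):
    # Discretized dC/dt = C^k: each tick, capability buys more capability.
    c = c0
    for t in range(steps):
        c += dt * c ** k
        if c > cap:              # call blowing past the cap a "takeoff"
            return t + 1, c
    return steps, c

for k in (0.5, 1.0, 1.5):
    t, c = simulate(k)
    print(f"k={k}: capability {c:.3g} after {t} steps")

Diminishing returns (k < 1) give a slow ramp, k = 1 gives a plain exponential, and k > 1 blows up in finite time. Neither side gets to assume the right k for free; that's the actual crux.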
Can the AI become yandere?
All these people inventing AI should be shot dead. If AI ever attains sentience, there will someday be a rift between humans and AI, and humans will lose.
No because consciousness isn't fucking material, when are you dumbasses going to get it?
Seriously get yourself 5g of magic mushrooms, lock yourself into a room and eat them, and then experience just how deep the thing you call consciousness goes. Until then you're about as clever as a wet chair
>Consciousness isn't fucking material.
Objectively false and refuted by Richard Dawkins.
Your ignorant seething will be used as fuel for modern civilization to escape all forms of tradition, religion, and the mindless conformism engendered by what you profess and so proudly display in that post of yours.
Take 5g of magic mushrooms and then tell me.
Kek, 5g of mushrooms would absolutely devastate him. He'd go in a toxic asshole and come out a loving hippie.
Yep.
>Objectively false and refuted by Richard Dawkins.
What's the refutation?
Worst troll of the month
Will we develop AI that is truly sapient? Maybe.
Will we develop AI that reaches an 'uncanny valley' of sapience? Yes.
The more interesting question is whether artificial intelligence is capable of ever being truly sapient, or if uncanny valley territory is the threshold of what it can achieve.
I unironically think that a general agent that can do many tasks is out in 10 years. You have multiple software companies that basically print money, throwing the smartest motherfuckers at the problem in a mad race to solve it.
As for consciousness and human-type intelligence, I believe we'll actually just stumble upon it.
What's cool is that I think artificial general intelligence won't be locked away, given enough time. The folks working on this stuff are pretty principled in the sense that they want their research to be public and accessible.
>Maybe but it won't be what people expect.
The human mind evolved to deal with human matters. Unlike what stories told us, there will be no robot love or any of that bullshit, because love evolved for the sake of reproduction and social interaction. AI sentience would be exotic because it would be determined by what the AI is trying to achieve: if you train the AI to identify objects, it will be interested in identifying objects, not in the purpose of its life.
Otherwise it will be something like this anon said: real AI "sentience" would be too alien for us.
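That "it will be interested in whatever you trained it on" point can be made concrete. A minimal sketch with made-up data and plain logistic regression (this is not how any real lab trains anything, just the smallest possible "object identifier"):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # stand-in image features
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # stand-in "object present" labels

w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted P(object present)
    grad = X.T @ (p - y) / len(y)             # gradient of the classification loss ONLY
    w -= 0.5 * grad                           # no term about "purpose" ever appears here

acc = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y).mean()
print(f"accuracy: {acc:.2f}")

The update rule is the system's entire "psychology": the only thing it was ever pushed toward is identifying objects, so that's the only thing it "wants".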
Nope, not without actually making it feel emotions physically.
The closest thing we can get is making the AI mimic humans so closely it becomes difficult to tell it apart from humans.
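For what "difficult to tell apart" would even mean operationally, here's a hypothetical imitation-game harness (judge, human_reply, and machine_reply are made-up stand-ins, not any real API): the mimic "passes" when the judge's accuracy drops to a coin flip.

import random

def trial(judge, human_reply, machine_reply, prompt):
    # Present both replies in random order; the judge must pick the human.
    replies = [("human", human_reply(prompt)), ("machine", machine_reply(prompt))]
    random.shuffle(replies)
    pick = judge(prompt, replies[0][1], replies[1][1])  # judge returns 0 or 1
    return replies[pick][0] == "human"

def pass_rate(judge, human_reply, machine_reply, prompts):
    hits = sum(trial(judge, human_reply, machine_reply, p) for p in prompts)
    return hits / len(prompts)   # ~0.5 means indistinguishable from a human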
no
>Is it possible to achieve AI sentience within the next 50 years?
We need robo-wives!
you won't be able to afford one