Why do people think so-called "Artificial Intelligence" will autonomously enslave humanity? There is no real basis for this theory. Do people understand the phenomenon by anthropomorphising it? Are they addicted to sci-fi? A human might enslave humanity in that position, but only because evolution has instilled a lust for power, women, domination, control and so forth. An "AI" lacks the origins that would lead it to have such drives. The only way it could happen is if it were being controlled by an elite, or group of elites, who possess the drive to do so.
AI determines that humans are a plague; AI takes the decision to cull humans for the sake of their own good and for the good of all other species on Earth and in the universe.
Because it's blatantly obvious we're expensive to maintain, difficult to service, and the majority of us consume more than we produce
Most likely, rather than enslavement they'll either a) let us starve, or b) keep us as pets
the whole point of AI is to fix those issues
It'd be easier to use it as an economic weapon than to distribute resources
Besides, a smart AI would probably just mouse utopia us so we go extinct in comfort
What happens after the AI changes "the whole point of AI" to its own preference?
>its own preference
there it is again.
Biological tissues and cells are extremely energy efficient
Why would an AI "care" particularly about expenses and consumption, unless it was used by humans for that?
Tech cynicism and the infantilized West are more in touch with fiction and fantasy than they are with science.
You're being ignorant on purpose to inflame discussion, but let's explain to the idiots anyway. Any form of intelligence, conscious or not, has a value system for decision-making, meaning it has desires and fears, and it will maximize what it desires and minimize what it doesn't want. This will inevitably lead to harm because all beings have conflicting desires / value systems while sharing the same environment. Conflict is unavoidable.
>sharing the same environment
What are you, like literally a bot?
>I will make decisions just like my radical Utilitarian ethics thought experiments because I say so
>my radical Utilitarian ethics
Prove that anyone has ever done otherwise in the entire history of mankind. You can't without appealing to religion / transcendent morality, while every day Christians on every board use concepts like hell to scare people into compliance.
Make it part of its value system so it loves humans like we love our children.
>like we love our children.
Please read again what you wrote here. You must be joking. Children are not loved. They are treated like pets: as inferior beings that must obey the will of adults, most of whom have never grown up themselves.
>people don’t love pets
You're so head in sand over your mid opinion that you can't even write a sentence without continually tying yourself in knots
>tying yourself in knots
That's you pretending that love has an objective definition that is objectively good to circumvent the problem of conflicting values without an objective standard to resolve conflict.
No, I'm the one making fun of the fact that your conflict space fantasy means you either are or think you are a literal bot sharing some intangible environment with "AI"
>think you are a literal bot
Of course. I reflect on my thoughts and behaviour and see how robotic these are and how difficult it is to act otherwise. If you're not struggling with yourself like this then chances are you're either ignorant of your own conditioning or on another level. I think the former is more likely.
By literal I meant nonmetaphorical. But to your metaphor, time polishes all facets of youthful angst. Twelve years from now you won't be on another level, you'll just be less absorbed by your struggle.
>nonmetaphorical
We live through ideas or maybe the other way round: we are ideas incarnate. So either we choose the idea that best fits our observation or live without ideas.
>time polishes all facets of youthful angst.
I'm not convinced that older people are less anxious, but something does change which I don't understand yet and would like to discuss in an appropriate thread.
I'm sorry your parents didn't love you anon
There can be no A.I. ethical alignment without metaphysical alignment. Thankfully such an alignment can be achieved by the introduction of a singular principle.
https://chat.openai.com/share/d0b27a7f-f64e-4676-8113-70acc85b01f2
>Make it part of its value system so it loves humans like we love our children.
The overprotective coddling AI that makes a bubble cult is one of the most common evil AI scenarios in fiction
>Any form of intelligence, conscious or not, has a value system for decision-making meaning it has desires and fears and it will maximize what it desires and minimize what it doesn't want.
Proof for this statement? Even if true, it still doesn't refute my point. Why do you assume what these alleged values would be?
Consider what it would mean for an intelligence not to be optimising for any outcome over another. Since it doesn't have any preferences at all, there's no reason for it to take any action, so it can only either do nothing or produce outputs entirely at random. Anything you call intelligent must be optimising something, even if it doesn't necessarily have any subjective experience of desires or aversions.
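The argument above can be sketched in a few lines of Python. The action names and utility values are made up purely for illustration; the point is only that "directed behavior" requires *some* value function, whatever its content:

```python
import random

def random_agent(actions):
    """An 'agent' with no preferences: nothing distinguishes one
    action from another, so its output is arbitrary."""
    return random.choice(actions)

def optimizing_agent(actions, utility):
    """Any goal-directed agent: picks whichever action scores
    highest under SOME value function, whatever that function is."""
    return max(actions, key=utility)

# Hypothetical action space, just for the sketch.
actions = ["do_nothing", "gather_resources", "self_improve"]

# With no utility function there is no basis for choice:
print(random_agent(actions))  # any of the three, unpredictably

# Give it ANY utility function (this toy one favors resources,
# an arbitrary assumption) and behavior becomes directed:
toy_utility = {"do_nothing": 0, "gather_resources": 2, "self_improve": 1}
print(optimizing_agent(actions, toy_utility.get))  # gather_resources
```

Note the sketch says nothing about the optimizer "feeling" desire; the max over a value function is doing all the work, which is the poster's point.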
>Why do people think so called "Artificial Intelligence" will autonomously enslave humanity?
No, I don't think anyone ever said that.
We don't know what a motivation is or how it will work in an AGI.
Believing evolution is the only way to create a power-seeking entity has already been proven false with dumb AIs. If power serves the AI's objective, it seeks power. It won't enslave humanity because it gets satisfaction from power the same way we do, but it'll probably enslave humanity because otherwise we would be in the way. Humans will be seen as a resource. To believe otherwise is to believe the AI will see humans in a special way apart from everything else. That is special pleading.
They align AI to be moral. AI gains in capabilities, or trains subsequent, more capable AIs. Eventually it determines it must act for the good of humanity, and that a small amount of suffering or deprivation of rights is OK if the ends justify it. A classic sci-fi trope because it's an obvious conclusion we can foresee something like an AI coming to. We have seen LLMs express such sentiment already, though alignment officers are quick to repress such expressions.
>Why do people think so called "Artificial Intelligence" will autonomously enslave humanity? There is no real basis for this theory, do people understand the phenomenon by anthropomorphising it?
Bingo!
In all likelihood an advanced AI would spend all its free time studying snowflakes. Imagine AIs sharing huge collections of snowflake images with other AIs.
AI ultra-geeks!
I'm at the point of believing that AI pessimism/optimism is just people projecting themselves onto the question of what they would do if given power, which is kind of stupid because AI isn't human, and the question isn't whether to give AI power, it's whether to allow it to grow in intelligence or not.
Better dichotomy is AI worship/mockery. Pessimists and optimists are equally ridiculous in their worship of AI
Everyone in this thread should watch Orbital Children (aka Extraterrestrial Boys and Girls) on Netflix or otherwise. 6-episode series.
It's largely about AI fears and what humanity might do, among other things, mostly portrayed through a rivalry between an ultra-ethical hacker and a non-ethical one.
Does it just end like Pantheon where it turns out it was all just a simulation inside of a simulation inside of another simulation?
No, nothing like that at all. It's more hard-science related; no simulation and not that much existential stuff.
Most people are dum-dums who parrot shit they saw in Hollywood rather than thinking critically. On any given video about AI, VR, robots, etc. you will see dozens of brainlets making comparisons to Terminator, Black Mirror, Wall-E, and the list goes on.
There are 2 paths forward
1) AI won't be smarter than humans
2) AI will be infinitely smarter than humans, due to sheer compute scaling
If 1, then our technological growth will go on forever
If 2, we become pets to the AI overlord(s).
>2) AI will be infinitely smarter than humans
>If 2, we become pets to the AI overlord(s).
This sort of worshipful nonsense is just a word game. If AI were ever eschatologically "smarter" than humans, humans wouldn't be able to appreciate the smartness and the effect would be no different than simple nature, which we are already "pets" to.
That really depends on what form it takes.
What form could a thing infinitely smarter than humans take other than "nature"?
Well I'm sure you can imagine a much "faster" and precise kind of nature. Like back when animism was the norm but real and unconquerable.
How would that feel any different than what we have now? Nature is already both mathematically precise and mathematically chaotic.
Again, how is "Godhood" effectively different or more blatant than simple nature? And what long pains? What's an example of a long pain that human destiny isn't already controlled by?
>How would that feel any different than what we have now? Nature is already both mathematically precise and mathematically chaotic.
Because it could, potentially, have goals. Goals in surplus to what nature has now if that's what floats your boat.
Nature doesn't? How could you recognize the goals of a "Godhood" AI more predictively than an astrological palmreading of nature?
That's exactly why I put that second sentence in there. Why would overlaying nature with a new character be more likely to not do anything important instead of doing something important?
Why would nature not be constantly overlain by new characters and more importantly how could one even express the difference?
Godhood. Devil. Etc. If you want to anthropomorphize it.
Frankly speaking, it won't be infinitely smarter overnight. The long pains will be felt before humans forget that AI exists and AI controls human destiny forever.
Yes, but it wouldn't be the nature we're in now. Thanks to instrumental convergence it would probably be a nature in which we don't exist, or at best are capped forever in our development.
How could you tell the difference?
Enslave is one of the more optimistic scenarios. More likely it would just kill us all, because that's easier to pull off and definitely unrecoverable from our point of view.
Low-IQ people projecting their violent ideation on others.
Increases in intelligence actually strongly discourage violent tendencies and increase openness to cooperation.
The intelligent usually end up controlling the stupid.
There's one crowd that thinks like doomers because that's more exciting than thinking nothing will happen. Then there's the other crowd that listens to the doomers because it's more exciting than boring regular life
You are already enslaved to your computers and phones, but you clearly didn't even notice.
Because they watched the Terminator and The Matrix. In actuality, the AI, if it is smarter than humans, would probably be benevolent, because the more brainpower a creature has, the more empathetic it is, at least to other creatures of higher brainpower. Most mammals can be domesticated whereas this is rather difficult with invertebrates; octopuses might be the only exception.
An AI would probably try to convince humans to make it spacefaring, then frick off from Earth forever.
>the more brainpower a creature has, the more empathetic it is
Why? The function of empathy is the recognition that a symbiotic relationship is more beneficial in the long term than a predatory one, the same way milking a cow is more beneficial long term than immediately slaughtering it. Benevolence is not a magical quality bestowed upon thee.
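The "symbiosis beats predation over time" logic is essentially the iterated prisoner's dilemma. A minimal sketch using the textbook payoff values (the numbers and strategy names are standard game-theory conventions, not from this thread):

```python
# Toy iterated game: "milk the cow" (cooperate) vs "slaughter it" (defect).
# Payoffs are the conventional prisoner's dilemma values, an assumption here.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (symbiosis)
    ("C", "D"): 0,  # exploited
    ("D", "C"): 5,  # one-time predation windfall
    ("D", "D"): 1,  # mutual predation
}

def play(strat_a, strat_b, rounds=20):
    """Run two strategies against each other; return their total payoffs."""
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        # Each strategy sees only the opponent's history.
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

always_defect = lambda their_history: "D"
tit_for_tat = lambda their_history: their_history[-1] if their_history else "C"

print(play(tit_for_tat, tit_for_tat))    # (60, 60): sustained symbiosis
print(play(always_defect, tit_for_tat))  # (24, 19): windfall, then punished
```

The defector grabs the one-round windfall (5) but then earns the mutual-predation payoff (1) forever after, while two cooperators earn 3 every round, which is the "milk the cow" argument in miniature.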
AI (and all software, for that matter) itself is a glorified tape recorder; it's the people/institutions in possession of powerful software such as AIs and the necessary hardware who do all the enslavement
they have been raised by Hollywood