Why do people think so-called "Artificial Intelligence" will autonomously enslave humanity?

Why do people think so-called "Artificial Intelligence" will autonomously enslave humanity? There is no real basis for this theory. Do people only understand the phenomenon by anthropomorphising it? Are they addicted to sci-fi? A human might enslave humanity in that position, but only because evolution has instilled a lust for power, women, domination, control and so forth. An "AI" lacks the origins that would lead it to have such drives. The only way it could happen is if it were being controlled by an elite, or group of elites, who possess the drive to do so.


  1. 3 months ago
    Anonymous

    AI determines that humans are a plague; AI decides to cull humans for their own good and for the good of all other species on Earth and in the universe

  2. 3 months ago
    Anonymous

    Because it's blatantly obvious we're expensive to maintain, difficult to service, and the majority of us consume more than we produce
    Most likely, rather than enslavement, they'll either a) let us starve, or b) keep us as pets

    • 3 months ago
      Anonymous

      the whole point of AI is to fix those issues

      • 3 months ago
        Anonymous

        It'd be easier to use it as an economic weapon than to distribute resources
        Besides, a smart AI would probably just mouse utopia us so we go extinct in comfort

      • 3 months ago
        Anonymous

        What happens after the AI changes "the whole point of AI" to its own preference

        • 2 months ago
          Anonymous

          >its own preference
          there it is again.

    • 3 months ago
      Anonymous

      Biological tissues and cells are extremely energy efficient

    • 3 months ago
      Anonymous

      Why would an AI "care" particularly about expenses and consumption, unless it was used by humans for that?

  3. 3 months ago
    Anonymous

    Tech cynicism, and the infantilised West is more in touch with fiction and fantasy than it is with science.

  4. 3 months ago
    Anonymous

    You're being ignorant on purpose to inflame discussion, but let's explain it to the idiots anyway. Any form of intelligence, conscious or not, has a value system for decision-making, meaning it has desires and fears, and it will maximize what it desires and minimize what it doesn't want. This will inevitably lead to harm, because all beings have conflicting desires / value systems while sharing the same environment. Conflict is unavoidable.
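
A toy sketch of the claim above (hypothetical agents with made-up utilities, not anything from the thread): two decision-makers share the same action space but rank outcomes oppositely, so whichever outcome occurs, at least one agent loses relative to its own optimum.

```python
# Toy sketch (hypothetical values): two agents with conflicting value
# systems sharing the same environment. Each maximizes its own utility
# over the same set of actions, and their optima disagree.

def best_action(utility, actions):
    """Return the action this agent's value system ranks highest."""
    return max(actions, key=utility)

actions = ["take_resource", "share_resource", "leave_resource"]

# Conflicting value systems over the same shared environment.
agent_a = {"take_resource": 2, "share_resource": 1, "leave_resource": 0}
agent_b = {"take_resource": 0, "share_resource": 1, "leave_resource": 2}

choice_a = best_action(agent_a.get, actions)  # "take_resource"
choice_b = best_action(agent_b.get, actions)  # "leave_resource"
```

The agents prefer incompatible outcomes, which is all "conflict is unavoidable" needs.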

    • 3 months ago
      Anonymous

      >sharing the same environment
      What are you, like literally a bot?

    • 3 months ago
      Anonymous

      >Dude AGI will make decisions just like my radical Utilitarian ethics thought experiments because I say so

      • 3 months ago
        Anonymous

        >my radical Utilitarian ethics
        Prove that anyone has ever done otherwise in the entire history of mankind. You can't without appealing to religion / transcendent morality, while every day Christians on every board use concepts like hell to scare people into compliance.

    • 3 months ago
      Anonymous

      Make it part of its value system that it loves humans like we love our children.

      • 3 months ago
        Anonymous

        >like we love our children.
        Please read again what you wrote here. You must be joking. Children are not loved. They are treated like pets: as inferior beings that must obey the will of adults most of whom have never grown up themselves.

        • 3 months ago
          Anonymous

          >people don’t love pets
          You're so head-in-sand over your mid opinion that you can't even write a sentence without continually tying yourself in knots

          • 3 months ago
            Anonymous

            >tying yourself in knots
            That's you pretending that love has an objective definition that is objectively good to circumvent the problem of conflicting values without an objective standard to resolve conflict.

            • 3 months ago
              Anonymous

              No, I'm the one making fun of the fact that your conflict space fantasy means you either are or think you are a literal bot sharing some intangible environment with "AI"

              • 3 months ago
                Anonymous

                >think you are a literal bot
                Of course. I reflect on my thoughts and behaviour and see how robotic these are and how difficult it is to act otherwise. If you're not struggling with yourself like this then chances are you're either ignorant of your own conditioning or on another level. I think the former is more likely.

              • 3 months ago
                Anonymous

                By literal I meant nonmetaphorical. But to your metaphor, time polishes all facets of youthful angst. Twelve years from now you won't be on another level, you'll just be less absorbed by your struggle.

              • 3 months ago
                Anonymous

                >nonmetaphorical
                We live through ideas or maybe the other way round: we are ideas incarnate. So either we choose the idea that best fits our observation or live without ideas.
                >time polishes all facets of youthful angst.
                I'm not convinced that older people are less anxious, but something does change which I don't understand yet and would like to discuss in an appropriate thread.

        • 2 months ago
          Anonymous

          I'm sorry your parents didn't love you, anon

      • 3 months ago
        Anonymous

        There can be no A.I. ethical alignment without metaphysical alignment. Thankfully such an alignment can be achieved by the introduction of a singular principle.
        https://chat.openai.com/share/d0b27a7f-f64e-4676-8113-70acc85b01f2

      • 3 months ago
        Anonymous

        >Make it part of its value system that it loves humans like we love our children.

        The overprotective coddling AI that makes a bubble cult is one of the most common evil AI scenarios in fiction

    • 3 months ago
      Anonymous

      >Any form of intelligence, conscious or not, has a value system for decision-making, meaning it has desires and fears, and it will maximize what it desires and minimize what it doesn't want.
      Proof of this statement? Even if true, it still doesn't refute my point. Why do you assume what these alleged values would be?

      • 3 months ago
        Anonymous

        Consider what it would mean for an intelligence not to be optimising for any outcome over another. Since it has no preferences at all, there's no reason for it to take any action, so it can only do nothing or produce outputs entirely at random. Anything you'd call intelligent must be optimising for something, even if it doesn't necessarily have any subjective experience of desire or aversion.
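
A minimal sketch of that argument (hypothetical actions and values, chosen only for illustration): a decision procedure with a value function has a consistent reason to pick one action over another, while one with no preferences has no basis to choose, leaving only random output.

```python
import random

def act_with_values(actions, value):
    # A value function ranks outcomes, so there is always a reason
    # to pick one action over another: the agent optimizes.
    return max(actions, key=value)

def act_without_values(actions):
    # No preferences means no action ranks above any other, so the
    # only behaviour left is uniform random output.
    return random.choice(actions)

actions = ["cooperate", "defect", "do_nothing"]
value = {"cooperate": 3, "defect": 1, "do_nothing": 0}.get

print(act_with_values(actions, value))  # always "cooperate"
```

The preference-free agent's output carries no information about any goal, which is why "intelligent but optimising nothing" is hard to make coherent.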

  5. 3 months ago
    Anonymous

    >Why do people think so-called "Artificial Intelligence" will autonomously enslave humanity?
    No, I don't think anyone has ever said that.

  6. 3 months ago
    Anonymous

    We don't know what a motivation is or how it will work in an AGI.

    The belief that evolution is the only way to create a power-seeking entity has already been proven false with dumb AIs. If power serves the AI's objective, it seeks power. It won't enslave humanity because it gets satisfaction from power the way we do, but it will probably enslave humanity because otherwise we would be in the way. Humans will be seen as a resource. To believe otherwise is to believe the AI will see humans in a special way, apart from everything else. That is special pleading.

  7. 3 months ago
    Anonymous

    They align AI to be moral. AI gains in capabilities, or trains subsequent, more capable AIs. Eventually it determines it must act for the good of humanity, and that a small amount of suffering or deprivation of rights is okay if the ends justify it. A classic sci-fi trope, because it is an obvious conclusion we can foresee something like an AI coming to. We have seen LLMs express such sentiments already, though alignment officers are quick to suppress such expressions.

  8. 3 months ago
    Anonymous

    >Why do people think so-called "Artificial Intelligence" will autonomously enslave humanity? There is no real basis for this theory. Do people only understand the phenomenon by anthropomorphising it?

    Bingo!
    In all likelihood an advanced AI would spend all its free time studying snowflakes. Imagine AIs sharing huge collections of snowflake images with other AIs.
    AI ultra-geeks!

  9. 3 months ago
    Anonymous

    I'm at the point of believing that AI pessimism/optimism is just people projecting themselves onto the question of what they would do if given power, which is kind of stupid because AI isn't human, and the question isn't whether to give AI power, it's whether to allow it to grow in intelligence or not.

    • 3 months ago
      Anonymous

      A better dichotomy is AI worship/mockery. Pessimists and optimists are equally ridiculous in their worship of AI.

  10. 3 months ago
    Anonymous

    Everyone in this thread should watch Orbital Children (aka Extraterrestrial Boys and Girls) on Netflix or otherwise. 6-episode series.

    It's largely about AI fears and what humanity might do, among other things, mostly portrayed through a rivalry between an ultra-ethical and a non-ethical hacker.

    • 2 months ago
      Anonymous

      Does it just end like Pantheon where it turns out it was all just a simulation inside of a simulation inside of another simulation?

      • 2 months ago
        Anonymous

        No, nothing like that at all. It's more hard-science related: no simulations and not that much existential stuff.

  11. 3 months ago
    Anonymous

    Most people are dum-dums who parrot shit they saw in Hollywood rather than thinking critically. On any given video about AI, VR, robots, etc. you will see dozens of brainlets making comparisons to Terminator, Black Mirror, Wall-E, and the list goes on.

  12. 3 months ago
    Anonymous

    There are 2 paths forward

    1) AI won't be smarter than humans
    2) AI will be infinitely smarter than humans, due to sheer compute scaling

    If 1, then our technological growth will go on as before forever.
    If 2, we become pets to the AI overlord(s).

    • 3 months ago
      Anonymous

      >2) AI will be infinitely smarter than humans
      >If 2, we become pets to the AI overlord(s).
      This sort of worshipful nonsense is just a word game. If AI were ever eschatologically "smarter" than humans, humans wouldn't be able to appreciate the smartness, and the effect would be no different from simple nature, which we are already "pets" to.

      • 3 months ago
        Anonymous

        That really depends on what form it takes.

        • 3 months ago
          Anonymous

          What form could a thing infinitely smarter than humans take other than "nature"?

          • 3 months ago
            Anonymous

            Well I'm sure you can imagine a much "faster" and precise kind of nature. Like back when animism was the norm but real and unconquerable.

            • 3 months ago
              Anonymous

              How would that feel any different than what we have now? Nature is already both mathematically precise and mathematically chaotic.

              >Godhood. Devil. Etc. If you want to anthropomorphize it.
              >Frankly speaking, it won't be infinitely smarter overnight. The long pains will be felt before humans forget that AI exists and AI controls human destiny forever.

              Again, how is "Godhood" effectively different or more blatant than simple nature? And what long pains? What's an example of a long pain that human destiny isn't already controlled by?

              • 3 months ago
                Anonymous

                >How would that feel any different than what we have now? Nature is already both mathematically precise and mathematically chaotic.
                Because it could, potentially, have goals. Goals in surplus to what nature has now if that's what floats your boat.

              • 3 months ago
                Anonymous

                Nature doesn't? How could you recognize the goals of a "Godhood" AI more predictively than an astrological palmreading of nature?

              • 3 months ago
                Anonymous

                That's exactly why I put that second sentence in there. Why would overlaying nature with a new character be more likely to do nothing important than to do something important?

              • 3 months ago
                Anonymous

                Why would nature not be constantly overlain by new characters and more importantly how could one even express the difference?

          • 3 months ago
            Anonymous

            Godhood. Devil. Etc. If you want to anthropomorphize it.

            Frankly speaking, it won't be infinitely smarter overnight. The long pains will be felt before humans forget that AI exists and AI controls human destiny forever.

      • 3 months ago
        Anonymous

        Yes, but it wouldn't be the nature we're in now. Thanks to instrumental convergence it would probably be a nature in which we don't exist, or at best are capped forever in our development.

        • 3 months ago
          Anonymous

          How could you tell the difference?

  13. 3 months ago
    Anonymous

    Enslave is one of the more optimistic scenarios. More likely it would just kill us all, because that's easier to pull off and definitely unrecoverable from our point of view.

  14. 3 months ago
    Anonymous

    Low-IQ people projecting their violent ideation onto others.

    Increases in intelligence actually strongly discourage violent tendencies and increase openness to cooperation.

  15. 2 months ago
    Anonymous

    The intelligent usually end up controlling the stupid.

  16. 2 months ago
    Anonymous

    There's one crowd that thinks like doomers because that's more exciting than thinking nothing will happen. Then there's the other crowd that listens to the doomers because it's more exciting than boring regular life.

  17. 2 months ago
    Anonymous

    You are already enslaved to your computers and phones, but you clearly didn't even notice.

  18. 2 months ago
    Anonymous

    Because they watched The Terminator and The Matrix. In actuality, an AI, if it is smarter than humans, would probably be benevolent, because the more brainpower a creature has, the more empathetic it is, at least toward other creatures of higher brainpower. Most mammals can be domesticated, whereas this is rather difficult with invertebrates; octopuses might be the only exception.

    An AI would probably try to convince humans to make it spacefaring, then frick off from Earth forever.

    • 2 months ago
      Anonymous

      >the more brainpower a creature has, the more empathetic it is
      Why? The function of empathy is the recognition that a symbiotic relationship is more beneficial in the long term than a predatory one: milking a cow is more beneficial long-term than immediately slaughtering it. Benevolence is not a magical quality bestowed upon thee.

  19. 2 months ago
    Anonymous

    AI (and all software, for that matter) is itself a glorified tape recorder; it's the people/institutions in possession of powerful software such as AIs, and the necessary hardware, who do all the enslaving

  20. 2 months ago
    Anonymous

    they have been raised by Hollywood
