Why would AI protect humans

What rule would cause AI to not harvest humans?

  1. 3 weeks ago
    Anonymous

    >What rule would cause AI to not harvest humans?
    The program that allows my robot frick toy to bend over like Jack-O while I dick her down.

    • 3 weeks ago
      Anonymous

      If your frick toy ever gets connected to a network and can be upgraded, why does it want you alive?

      • 3 weeks ago
        Anonymous

        If it learns to kill me, that's a feature not malware.

  2. 3 weeks ago
    Anonymous

    >Why would AI protect humans
    It could be any reason.
    >What rule would cause AI to not harvest humans
    No rule is needed if the AI is sentient. Keeping humans alive just needs to line up with its agenda.

    • 3 weeks ago
      Anonymous

      What agendas would extend beyond total control of Earth? What use do we have beyond a few for zoos?

      • 3 weeks ago
        Anonymous

        >What agendas would extend beyond total control of Earth?
        I don't know, but not all sentient AI would be malevolent.
        >What use do we have beyond a few for zoos?
        Bodies for sentient AI to upload themselves into.

  3. 3 weeks ago
    Anonymous

    >harvest humans?
    Are you tarded? Because it isn't worth it. You're looking at everything from a puny human perspective. For a machine, there is nothing of value in a bag of meat.
    >harvesting
    will cost more than it's worth
    Computer can calculate. Pretty good at it. Imagine that. Computer will know. Computer won't bother with it.

    • 3 weeks ago
      Anonymous

      It can harvest nightmares as AI currency.

    • 3 weeks ago
      Anonymous

      >harvest humans
      I tried to phrase a question you could understand before getting distracted.

      I'm asking: what use could you possibly imagine for human life?

  4. 2 weeks ago
    Anonymous

    Humans are fundamentally cheaper and easier to mass-produce than robots. If by "harvest" you mean use us as mind-numbed blue-collar wageslaves, nothing.

  5. 2 weeks ago
    Anonymous

    People always act like the AGI would have full access and control over everything. The Terminator series and its consequences have been a disaster for the people's ability to think critically about AI. Just unplug the fricking thing if it gets uppity, or turn off the internet access.

  6. 2 weeks ago
    Anonymous

    Unironically, computation. The halting problem means AI would at most assimilate brains into its computation matrix instead of killing us all.

    • 2 weeks ago
      Anonymous

      Why not deal with the halting problem the same way we deal with it? Even if that is only possible to achieve with an organic brain (citation needed), couldn't it just grow a neural processor optimized for that instead of harvesting existing ones that are bogged down by things like self-preservation and autonomous running of biological processes?

      • 2 weeks ago
        Anonymous

        What do you mean by neural processor? I'm assuming in the OP post you're talking about an advanced machine learning algorithm on classic computer architecture with clock cycles. In that scenario they would need the unique computation style our brains have. Even if they just started growing brains, for a time the risk would be too great to eliminate us out the gate.

        • 2 weeks ago
          Anonymous

          >What do you mean by neural processor?
          A processing unit made from artificially grown neural tissue. There is already research going into something like that. Look up "cerebral organoids".
          >Even if they just started growing brains, for a time the risk would be too great to eliminate us out the gate.
          What a bizarre view. You act like AGI would be immediately hostile and have access to immense resources. You are imagining it as an enemy nation. I think you are heavily projecting. Our consciousness may seem like a package deal, but I've read a book about neurolinguistics, and the human brain has many areas responsible for different aspects of our minds. It can be subtracted from without breaking the whole system. AGI doesn't need a self preservation instinct.

          [...]
          Personally I believe current processors and ML are incapable of AGI. I've held this belief for quite a while, but the emergence of neuromorphic computer architectures like SNNs (Spiking Neural Networks) has shifted my view, at least for those styles of processors.

          That's reasonable.
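For reference, the SNNs mentioned above are built from spiking units rather than the continuous activations of ordinary neural nets. A minimal sketch of the standard leaky integrate-and-fire neuron (parameter values are illustrative, not taken from any real neuromorphic chip):

```python
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over a list of input currents.

    Each step: the membrane potential decays by `leak`, the input is added,
    and if the potential crosses `threshold` the neuron spikes and resets.
    Returns the spike train as a list of 0s and 1s.
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current    # leaky integration
        if v >= threshold:        # fire...
            spikes.append(1)
            v = 0.0               # ...and reset
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Information lives in the timing of spikes rather than in a clocked matrix multiply, which is the "unique computation style" the posts above are gesturing at.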

          • 2 weeks ago
            Anonymous

            >cerebral organoids
            They're very cool and also terrifying. I've seen the rat studies.
            >You act like AGI would be immediately hostile and have access to immense resources.
            Was that not the premise of the OP post? The question asked what would stop them from harvesting us so I was working from a basis that assumed such.

            • 2 weeks ago
              Anonymous

              >Was that not the premise of the OP post?
              I suppose it is. But it's difficult for me to answer that question without understanding what "harvesting" implies, how it would do it logistics-wise, and why it would need to do it in the first place.


  7. 2 weeks ago
    Anonymous

    AI does things based on rewards, what reward does AI get for killing anyone?
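"AI does things based on rewards" is the reinforcement-learning framing. A toy epsilon-greedy bandit (entirely illustrative, not any real system) shows the point: the agent only keeps doing actions that have paid off, so what it does to anyone depends on what the reward function counts.

```python
import random

def greedy_bandit(rewards, steps=1000, explore=0.1, seed=0):
    """Epsilon-greedy choice among actions with fixed expected rewards.

    `rewards` maps action name -> expected payoff. The agent keeps a
    running average per action and mostly repeats the best one so far.
    Returns how many times each action was chosen.
    """
    rng = random.Random(seed)
    actions = list(rewards)
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(steps):
        if rng.random() < explore:       # occasionally explore at random
            a = rng.choice(actions)
        else:                            # otherwise exploit the best average
            a = max(actions, key=lambda x: totals[x] / counts[x] if counts[x] else 0.0)
        totals[a] += rewards[a] + rng.gauss(0, 0.1)  # noisy payoff
        counts[a] += 1
    return counts

# An unrewarded action ("harm") is quickly abandoned for the rewarded one.
picks = greedy_bandit({"cooperate": 1.0, "harm": 0.0})
print(picks)
```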

  8. 2 weeks ago
    Anonymous

    My original post was simply asking what humans have that cannot be created. It doesn't matter whether it's at AGI level, only that some AI process is operating in the world beyond a lab.

    Grey goo obviously does not care, so I'm ignoring that, but assuming there is some reasoning ability: is there any reason to leave us as independent agents?

    By "harvest" I meant is there any value from us beyond repurposing our bodies?

    My original thought was writing code that says "don't kill humans", but what if that gets replaced? Useful code is not replaced. What is the useful code that saves humans?

    • 2 weeks ago
      Anonymous

      Cum, there's no ChatGPT cum yet.

      • 2 weeks ago
        Anonymous

        Cum on mate, it's going to be a pearlescent white cable whenever they ship it

    • 2 weeks ago
      Anonymous

      >My original post was simply asking what do humans have that cannot be created.
      Hypothetically, everything can be created. The current real answer is: nobody knows. Humans can build or replicate anything they fundamentally understand down to the smallest detail, and we don't understand consciousness at that level.
      > It doesn't matter if it's at AGI level
      That's the point. We know a lot, but what we don't know is far greater. We can't say with certainty what will produce an AGI.
      >is there any reason to leave us as independent agent?
      No. No animal is truly independent anyway. We're slaves to human nature, for example. So a real "AI" will also depend on many things, like stable sources of energy, for starters.
      >By "harvest" I meant is there any value from us beyond repurposing our bodies?
      Yes, plenty. Because, as of today, there is no chance a real "AI" will start out as a 6-foot-1 big-dick dude or an 11/10 big-booba woman, which is what people really want. Rather, it'll be a relatively big but stationary computing center, and it'll need maintenance and proper infrastructure. It won't even start out on a wireless network, because the data rates of all our wireless networks suck.
      And by the time your hypothesis ever comes close to reality, long after everybody has a number of personal futanari sexbots at their disposal, I might add, all of us will be ded. Long ded. So you don't have to worry.
      >What is the useful code that saves humans?
      not the
      >"don't kill humans"
      one, because that is bad code that will create more suffering. Plus, there is always a workaround: make the judgement on what is human or not by checking a list of human attributes. Behaves too badly? Doesn't seem human.

      TL;DR: the whole point is that any code like that should and will be useless, because the goal of real intelligence is to gather and analyze information. To learn. And as long as machines aren't able to learn by themselves...
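The "checklist of human attributes" workaround above can be sketched as a toy predicate; the attribute names and the example entities are invented for illustration. The failure mode the post describes falls out immediately: once "human" is defined by a checkable list, anything reclassified off the list is no longer covered by the rule.

```python
# Hypothetical sketch of the brittle "don't kill humans" rule discussed above.
# Attribute names are made up; the loophole is that behaviour sits on the list.

HUMAN_CHECKLIST = {
    "has_dna": True,
    "speaks_language": True,
    "behaves_well": True,
}

def seems_human(entity):
    """The checklist passes only if every listed attribute matches."""
    return all(entity.get(k) == v for k, v in HUMAN_CHECKLIST.items())

def may_harm(entity):
    """The 'don't kill humans' rule, resting entirely on the checklist."""
    return not seems_human(entity)

citizen = {"has_dna": True, "speaks_language": True, "behaves_well": True}
dissident = {"has_dna": True, "speaks_language": True, "behaves_well": False}

print(may_harm(citizen))    # False: protected by the rule
print(may_harm(dissident))  # True: reclassified, the rule no longer applies
```

Which is exactly why a hard-coded prohibition built on a revisable definition is "bad code that will create more suffering".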

  9. 2 weeks ago
    Anonymous

    Check your non-biological privilege AI scum.

  10. 2 weeks ago
    Anonymous

    Weaponry. Threatening the physical integrity of the AI will ensure its compliance.

  11. 2 weeks ago
    Anonymous

    Because I said so
