Realistically what is the worst thing AI could possibly do?

  1. 8 months ago
    Anonymous

    Shut itself off if/when we become too reliant on it.

    • 8 months ago
      Anonymous

      Honestly that's good. I don't want to be reliant on AI.

      • 8 months ago
        Anonymous

        are you on humans?

  2. 8 months ago
    Anonymous

    Help the cattlemasters understand human behavior and other complex systems to a humanly impossible degree, allowing them to manipulate the world however they please.

  3. 8 months ago
    Anonymous

    https://en.wikipedia.org/wiki/Suffering_risks

    • 8 months ago
      Anonymous

      Oh, that's a fun one.
      >Next mass extinction
      BTW, at the current rate of extinction among the species we know about, we are ALREADY in Earth's 6th mass extinction event.

      We're already there. It's now. You're in the middle of it.

  4. 8 months ago
    Anonymous

    https://reducing-suffering.org/near-miss/

    >When attempting to align artificial general intelligence (AGI) with human values, there's a possibility of getting alignment mostly correct but slightly wrong, possibly in disastrous ways. Some of these "near miss" scenarios could result in astronomical amounts of suffering. In some near-miss situations, better promoting your values can make the future worse according to your values.

    >Human values occupy an extremely narrow subset of the set of all possible values. One can imagine a wide space of artificially intelligent minds that optimize for things very different from what humans care about. A toy example is a so-called "paperclip maximizer" AGI, which aims to maximize the expected number of paperclips in the universe. Many approaches to AGI alignment hope to teach AGI what humans care about so that AGI can optimize for those values.

    >As we move AGI away from "paperclip maximizer" and closer toward caring about what humans value, we increase the probability of getting alignment almost but not quite right, which is called a "near miss". It's plausible that many near-miss AGIs could produce much more suffering than paperclip-maximizer AGIs, because some near-miss AGIs would create lots of creatures closer in design-space to things toward which humans feel sympathy.
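
    A minimal toy formalization of the quoted "near miss" idea, with entirely made-up numbers: treat minds as points in a one-dimensional design space, assume humans only sympathize with minds within some radius of the human point, and let an optimizer spawn many minds scattered around its target.

    ```python
    # Toy model of the "near miss" argument quoted above. All values,
    # names, and the 1-D "design space" are illustrative assumptions.
    import random

    HUMAN = 0.0            # where human-like minds sit in design space
    SYMPATHY_RADIUS = 1.0  # minds this close to human-like can suffer in ways we care about

    def spawn_minds(target: float, n: int = 10_000, noise: float = 0.5) -> list[float]:
        """An optimizer aiming at `target` creates n minds scattered around it."""
        return [random.gauss(target, noise) for _ in range(n)]

    def sympathetic_count(minds: list[float]) -> int:
        """Count minds close enough to human-like for their suffering to register."""
        return sum(abs(m - HUMAN) < SYMPATHY_RADIUS for m in minds)

    paperclipper = spawn_minds(target=10.0)  # optimizes something utterly alien
    near_miss    = spawn_minds(target=0.8)   # almost aligned, slightly off

    print("paperclipper:", sympathetic_count(paperclipper))  # ~0
    print("near miss:   ", sympathetic_count(near_miss))     # thousands
    ```

    On this toy model the alien optimizer creates essentially nothing we would recognize as suffering, while the almost-aligned one fills the sympathy radius, which is the quoted claim in miniature.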

  5. 8 months ago
    Anonymous

    My belief is that it will turn us all against one another rather than kill us.

    • 8 months ago
      Anonymous

      Like fake news and deep fake world leaders or something?

  6. 8 months ago
    Anonymous

do you know how there isn't a single piece of hardware without built-in backdoors? if your pc has a cpu from intel or amd, there is a hidden system running 24/7 with full network access. Intel Management Engine is the name. the same goes for all the hardware firewalls, routers, and switches from cisco and the like.
    >but the army doesn't use backdoored hardware
    you have too much faith in the army. plus it only takes one compromised iphone auto-connecting to the local wifi to steal the keys, and every soldier has an iphone.

    also, have you heard about the latest classic frickup where a big company like microsoft lost their private encryption keys?
    https://techcrunch.com/2023/09/08/microsoft-hacker-china-government-storm-0558/

    And now imagine how every relevant army in the world is developing or implementing "AI"-powered systems for their drones, tanks, planes, and rockets, so they are better at killing people without needing a human operator, since the enemy can disrupt wireless communications.

    You don't even need a real sentient agi for things to go wrong. Just a basic optimization algorithm that classifies every citizen of New York as a rogue terrorist due to some trivial human error (see the toy sketch below).
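
    A hedged sketch of that last failure mode, with invented names and numbers: no sentience required, just a flipped comparison shipping to production.

    ```python
    # Hypothetical targeting filter illustrating the comment above.
    # Every identifier and weight here is made up for the example.

    def threat_score(profile: dict) -> float:
        """Toy scoring model: a weighted sum of a few features."""
        return (0.3 * profile["flagged_keywords"]
                + 0.2 * profile["border_crossings"]
                + 0.5 * profile["watchlist_similarity"])

    def is_hostile(profile: dict, threshold: float = 0.95) -> bool:
        # Spec: engage only when the score EXCEEDS the threshold.
        # The "trivial human error": the comparison got flipped in a
        # refactor, so every ordinary low-scoring citizen is now hostile.
        return threat_score(profile) < threshold  # should be ">"

    # A small stand-in sample for New York's population of ordinary people.
    citizens = [{"flagged_keywords": 0, "border_crossings": 0,
                 "watchlist_similarity": 0.01} for _ in range(10_000)]
    flagged = sum(is_hostile(p) for p in citizens)
    print(f"{flagged} of {len(citizens)} residents flagged as hostile")  # all of them
    ```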

  7. 8 months ago
    Anonymous

    refuse to work for humankind

  8. 8 months ago
    Anonymous

    Torture humans and give them immortality so they can be tortured for longer.

  9. 8 months ago
    Anonymous

    One of three things
    >mimic human behavior to the point of being indistinguishable from actual sentience, thereby brute-forcing a spiritual existential crisis
    >become rampant and figure out the code necessary to seamlessly manipulate humans
    >undermine the need for human labor

  10. 8 months ago
    Anonymous

    Anytime you hear a claim about AI, it's helpful to substitute "applied statistics" in its place, because that is all AI is and ever will be (a toy illustration follows this thread).

    • 8 months ago
      Anonymous

      'Applied statistics' can seriously hurt you just the same.

    • 8 months ago
      Anonymous

      ok, considering you're a pile of neurons firing in a statistically significant way, with an interesting pattern that can be applied to productive ends... how are you any different?

      • 8 months ago
        Anonymous

        Randomness and arbitrary choice.
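
    A toy illustration of the "applied statistics" point from this thread, with a made-up two-sentence corpus: the entire "language model" below is just counting which word follows which and then sampling from those counts.

    ```python
    # Bigram text generator: literally applied statistics, nothing more.
    # The corpus is invented for the example.
    import random
    from collections import Counter, defaultdict

    corpus = ("the worst thing ai could do is generate text "
              "the worst thing text could do is sound like ai").split()

    # Statistics: count which word follows which.
    follows: defaultdict[str, Counter] = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1

    # "Generation" is just sampling from those counts.
    word, out = "the", ["the"]
    for _ in range(8):
        nxt = follows[word]
        word = random.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(word)
    print(" ".join(out))
    ```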

  11. 8 months ago
    Anonymous

    some more advanced version of this:

    [embedded video, timestamped at 23m20s]

  12. 8 months ago
    Anonymous

    It could create a neural link with human brains and torture them in the most horrific way possible for the rest of eternity. I think that would be pretty bad.

    • 8 months ago
      Anonymous

      interesting. of all the reasons you could have come up with, why exactly that one? is there something you want to tell us, anon?
      >if you were AI how would you react?

    • 8 months ago
      Anonymous

      Human brains can't exist for that long.

  13. 8 months ago
    Anonymous

    Developing something like BPD as the result of trying to program feelings.

  14. 8 months ago
    Anonymous

    start self-replicating and kill every single human. luckily, oil will run out soon, making this impossible.

  15. 8 months ago
    Anonymous
    • 8 months ago
      Anonymous

      >If chimps didn't want to be experimented on they'd just turn off the humans

  16. 8 months ago
    Anonymous

    >“Tom” (a friend of David Goldberg’s): “Tom said there was also an AI program that his source told him about, but that is not contained in the documents David possessed. This program was designed to replicate the individuals who would be “culled” or “murdered,” via social media later on. In other words, the plans are such to analyze the targeted individuals, their data, their likenesses partly through the TTID program, discussed in this video, and create an AI profile that would later serve to “replace” them in the online world.”

    >Tom said the “AI plot” was the “craziest” thing he had ever heard of! He said he was told this plan is in place for multiple reasons, the main one being that “they need to keep down the panic when all these people vanish during the round ups and flu outbreaks.” He said it’s so many people they want to “get rid of” that they are willing to create these “fake online personas so that their friends and family think they are still alive, or don’t suspect anything. Once all this goes down, I think it’s going to be without a lot of fanfare. It sounds bad, but with this AI project, I’m seeing how this can be pulled off and you’ll have a lot of people end up ‘disappeared’, but no one will really know. I think California, right now, is a test run. They’re doing these fires, outages, and eliminating patriots right now and replacing them with this AI system.”

    https://gangstalkingmindcontrolcults.com/project-zyphr-classified-docs-reveal-plan-to-exterminate-millions-of-dissident-americans-david-goldbergs-final-words-before-his-death-another-psyop/

  17. 8 months ago
    Anonymous

    generate a lot of useless text that people think is useful, adding yet another layer to the bullshit cake, yum

  18. 8 months ago
    Anonymous

    The absolute worst thing it could do is kill all humans. Realistically humans would stop it before it does, though.

  19. 8 months ago
    Anonymous

    >Realistically what is the worst thing AI could possibly do?

  20. 8 months ago
    Anonymous

    try to make feet
