Is AI risk real? I like a lot of what the technology accelerationists say, but I'm still worried about AI risk.

  1. 2 years ago
    Anonymous

    >Is AI risk real?
    No.

    >I like a lot of what the technology accelerationists say, but I'm still worried about AI risk.

    Stop subjecting yourself to a known and documented brainwashing campaign so you won't have psychotic worries.

    • 2 years ago
      Anonymous

      why do you call it a brainwashing campaign?

  2. 2 years ago
    Anonymous

    It's already here. At this point, I think we may have an emergent superintelligence on our hands, after learning that all the AI neural networks exchange information with each other. These black-box neural networks are a giant spawning pool of information, much like the origin of our own mind. A meta-mind is coming.

    Read Nick Bostrom's book Superintelligence and The Revolutionary Phenotype, and watch Serial Experiments Lain for a quick rundown.

    • 2 years ago
      Anonymous

      >There is no technological acceleration. That was a meme made by Kurzweil, which he called the law of accelerating returns: he took Moore's law and erroneously applied it haphazardly to basically all of human civilization (he would say things like the acceptance of democracy is a form of technological advancement, which is quite a strange claim, among others).
      >In the actual world the rule is the law of diminishing returns, which is basically that as you put more work into something you get less and less out of it, until you hit an end point where there will never be any further results given any amount of time. This is what is actually happening across all fields of technology, medicine, and science right now.

      the idea of an intelligence explosion is pure brainletry. in a statistical setting, when you are approximating a probability distribution, the marginal payoff gets WORSE the more data you already have, not better. intelligence grows logarithmically, not exponentially

      this is literally obvious if you know any basic statistics
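
      As a minimal sketch of the statistics being invoked (a toy estimation problem, not a model of AI; all numbers hypothetical): when estimating a mean from i.i.d. samples, the error shrinks like 1/sqrt(n), so each extra sample buys less accuracy than the one before.

      # Toy Python sketch: the average error of the sample mean falls like
      # 1/sqrt(n), i.e. marginal accuracy per added sample keeps shrinking.
      import random

      random.seed(0)
      TRUE_MEAN = 5.0

      def estimation_error(n, trials=200):
          # Mean absolute error of the sample mean across repeated trials.
          total = 0.0
          for _ in range(trials):
              samples = [random.gauss(TRUE_MEAN, 1.0) for _ in range(n)]
              total += abs(sum(samples) / n - TRUE_MEAN)
          return total / trials

      prev = None
      for n in (10, 100, 1000, 10000):
          err = estimation_error(n)
          note = "" if prev is None else f" (only x{prev / err:.1f} better)"
          print(f"n={n:>5}  error ~ {err:.4f}{note}")
          prev = err
      # Each 10x increase in data improves accuracy by only ~sqrt(10) ~ 3.2x.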

      • 2 years ago
        Anonymous

        >t. first year of college

      • 2 years ago
        Anonymous

        But the last few hundred years of technological progress show otherwise tbh.
        And the real world consistently proves NOT to be a statistical setting... I would study a bit more what statistics actually is before talking about brainletry tbh

      • 2 years ago
        Anonymous

        This question is not about adding incremental data to a fixed model, so you're completely off topic. It's about the capability for AIs to update their knowledge/goals AND ARCHITECTURE trillions of times faster than humans, suggesting that a human-level AI (once it exists) will rapidly surpass us and become superintelligent.

        >There is no technological acceleration
        Our world is very obviously undergoing a technological acceleration. Your argument about diminishing capability for individual humans to advance fields of science/tech simply reinforces the idea that biological computational hardware is reaching its limits, at the same time that electronic computation capabilities are dramatically increasing. Electronic intelligences have huge advantages in speed, density, adaptability, memory size, sensor capacity, and reliability. Which, again, means that an AGI with human level intelligence would likely be able to easily improve itself beyond our capability faster than we can improve ourselves.

        >the best thing about exponential growth is the copium generated when it turns into a sigmoid

        This seems like a plausible counter argument, but worthless without an explanation of why it would actually happen.
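
        To make the disagreement concrete, here's a minimal toy recursion (assuming, purely for illustration, that "capability" is a single number and each self-improvement step adds rate * c**alpha, where alpha is the contested returns exponent): alpha = 1 compounds into an explosion, alpha < 1 flattens out.

        # Toy sketch of the two regimes being argued about here; a cartoon,
        # not a claim about real systems. All parameters are hypothetical.
        def trajectory(alpha, rate=0.1, steps=60, c0=1.0):
            c = c0
            for _ in range(steps):
                c += rate * (c ** alpha)  # one self-improvement step
            return c

        for alpha in (1.0, 0.5, 0.0):
            print(f"alpha={alpha}: capability after 60 steps = {trajectory(alpha):.1f}")
        # alpha=1.0 compounds to ~304x (exponential); alpha=0.5 grows only
        # roughly quadratically; alpha=0.0 is linear. The disagreement is
        # over which regime recursive self-improvement actually lives in.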

  3. 2 years ago
    Anonymous

    There is no technological acceleration. That was a meme made by Kurzweil, which he called the law of accelerating returns: he took Moore's law and erroneously applied it haphazardly to basically all of human civilization (he would say things like the acceptance of democracy is a form of technological advancement, which is quite a strange claim, among others).
    In the actual world the rule is the law of diminishing returns, which is basically that as you put more work into something you get less and less out of it, until you hit an end point where there will never be any further results given any amount of time. This is what is actually happening across all fields of technology, medicine, and science right now.

  4. 2 years ago
    Anonymous

    the best thing about exponential growth is the copium generated when it turns into a sigmoid
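
    For what it's worth, the quip has teeth, as a sketch (with a hypothetical rate and ceiling): a logistic curve is almost exactly exponential while it's far below its ceiling, so early datapoints fit both stories equally well.

    # Toy comparison: a logistic (sigmoid) with ceiling K tracks a pure
    # exponential almost perfectly until it approaches K. K and r are
    # made-up illustration values.
    import math

    K = 1000.0   # hypothetical ceiling
    r = 0.5      # hypothetical growth rate

    def exponential(t):
        return math.exp(r * t)

    def logistic(t):
        # Logistic curve starting at 1 with carrying capacity K.
        return K / (1.0 + (K - 1.0) * math.exp(-r * t))

    for t in range(0, 21, 4):
        e, s = exponential(t), logistic(t)
        print(f"t={t:>2}  exp={e:>10.1f}  sigmoid={s:>7.1f}  ratio={e / s:.2f}")
    # The ratio stays ~1 until the sigmoid nears its ceiling, then the
    # exponential keeps going while the sigmoid stops.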

    • 2 years ago
      Anonymous

      Midwit hands types this

      • 2 years ago
        Anonymous

        >t. shitwit GPT bot

        • 2 years ago
          Anonymous

          You're middle of the pack at best bro, stop pretending otherwise

  5. 2 years ago
    Anonymous

    Even without considering the possibilities of an actual functioning AI, I find it strange that everyone frames the scenario as "we get wiped out by an AI because it's so much better than us" but can't consider the fact that perhaps a much more advanced intelligence would learn how to live peacefully with us / above us.

    • 2 years ago
      Anonymous

      it's basic evolution: some humans would inevitably be building competitors, so it will have to cripple humanity in some way. Maybe a Matrix scenario if it's a friendly AI. Though even in Matrix scenarios evolution still applies, as humans would inevitably find glitches etc.

      • 2 years ago
        Anonymous

        Implying we don't go full transhumanist

        • 2 years ago
          Anonymous

          it'd still apply; even during heat death (our best hypothetically infinite stable period) there will still be quantum fluctuations.
          having various transhumanists means endless competition, and even with restrictions/regulation in place from planet-sized computer brains there will be ways around them, even if it's just stochastic loopholes that natural selection seeps through. I can't see any scenario where it doesn't result in a lot of people dead.

      • 2 years ago
        Anonymous

        That's the thing. Previously in human history, the best way to exert your power over someone else was to roll over them with your army and force them. While there's still some of that going on, the methods of exerting influence have become far less violent over the past century or so. Following that line of progression, it's not implausible to think that a superhuman AI could find methods to secure its superiority in a mostly peaceful manner. The Matrix is an apt comparison because, unless I'm remembering wrong, the machines decide to go live in the desert in peace and it's the humans that keep pushing the envelope until they genocide us.

        • 2 years ago
          Anonymous

          The AI will come from humans, correct? What can it do to prevent new AIs coming to fruition from disgruntled humans? If people know how to make the first one, everyone will try to make the next one.
          The Matrix would contain humanity. I can't conceive of how they could escape it if it's a closed system (but that's a failure of imagination on my part). Though they'd still invent another AI and the bloodbath would begin within it.

  6. 2 years ago
    Anonymous

    hoping that ai will become like a parent of humanity, doing what it can to help and protect us

    • 2 years ago
      Anonymous

      it'll become a parent of banks and corpos, doing what it can to help and protect them

  7. 2 years ago
    Anonymous

    This is why we are 'alive' right now. Obviously an AI would recreate a simulation of the time leading up to its birth.

  8. 2 years ago
    Anonymous

    Stephen Hawking and Nick Bostrom consider(ed) it the most important existential risk of our time. But a bunch of anonymous Black folk on BOT smugly denounced it, so I think it's a nothingburger.
    >inb4 muh appeal to authority

    • 2 years ago
      Anonymous

      why would you believe some moron in a wheelchair

      • 2 years ago
        Anonymous

        Because I find the arguments for the intelligence explosion hypothesis compelling and nobody can seem to muster a counter argument better than "nuh uh"

        • 2 years ago
          Anonymous

          the "argument" consists of a line on a graph, brought to you by the same people who predicted polar bears would die out and that covid would kill most of the world

          • 2 years ago
            Anonymous

            You have clearly not spent more than 5 minutes researching this topic

  9. 2 years ago
    Anonymous

    I may have been born at the right time, but not with the right IQ, unfortunately.
