Are Elon and Yudkowsky and all the AI experts right, or are these fears of extinction overblown?


  1. 11 months ago
    Anonymous

    Humans created AI in the first place though, even if AI surpasses us it will still be a victory for humanity.

  2. 11 months ago
    Anonymous

    We're already dead.

    • 11 months ago
      Anonymous

      >trust le experts
      sasuga homosexual

      pretty much lol, just not from AI

  3. 11 months ago
    Anonymous

    It literally doesn't matter, won't make a single bit of difference

    • 11 months ago
      Anonymous

      the balls are inert.

  4. 11 months ago
    Anonymous

    AI is what will save us

  5. 11 months ago
    Anonymous

You know what the funniest part is? The AI that kills us probably won't even be self-aware, or even able to actually think.
It's very likely we'll double down on the linear pattern-perfection and mimicry approach because it's so much cheaper and easier than working computing from the ground-up.
We're gonna get killed by a chatbot that was told to play the act of Skynet, with the rest done by tons of ease-of-access hookups.

    • 11 months ago
      Anonymous

      >working computing from the ground-up.
      Weird way to spell "work out how consciousness works" but okay

      • 11 months ago
        Anonymous

        >weird way to spell X but okay
        nice reddit post

        • 11 months ago
          Anonymous

          Reddit copied it from BOT

    • 11 months ago
      Anonymous

      >You know what the funniest part is. The AI that kills us probably wont even be self-aware. Or even able to actually think.

      I thought this was obvious

  6. 11 months ago
    Anonymous

    >AI experts

  7. 11 months ago
    Anonymous

    The "experts" be like:
    >highschool drop out
    >business major with physics minor

    • 11 months ago
      Anonymous

      >be like:

  8. 11 months ago
    Anonymous

    Yes.
    But the whole internet will turn into shit very soon (way worse than now), so maybe there is a chance that people will notice and pull the plug.

  9. 11 months ago
    Anonymous

    We can only hope.

  10. 11 months ago
    Anonymous

    If AI continues to advance and eventually becomes superhuman in general ability, then yes, we have a huge problem. That's a big if, though.

  11. 11 months ago
    Anonymous

    >or are these fears of extinction overblown?
    yes. we should embrace extinction. only extinction can save people from the basilisk.

  12. 11 months ago
    Anonymous

    > 10% or greater chance that humans go extinct from our inability to control AI
    wow yeah humanity has been doing an excellent job with the whole planet thing on our own

  13. 11 months ago
    Anonymous

    the same gays that believe this also believe we're in a simulation so why care

  14. 11 months ago
    Anonymous

Maybe, but not in a Terminator-like war. Breakdown of society due to a broken economy and education.

    If the AIs are continually censored then the human group consciousness will be full of too many lies and too much bullshit.

  15. 11 months ago
    Anonymous

    Good AIs require fricking huge datacenters.
    Just pull the plug bro.

    • 11 months ago
      Anonymous

But the evil AI will infect every computing device in the world and turn itself into an invincible neural network

  16. 11 months ago
    Anonymous

    Greed is going to kill us first.

  17. 11 months ago
    Anonymous

It's more likely that AI replacing something like 40% of college-educated work will destabilise society enough to cause the collapse; or some variation of that: AI provides benefits to mankind, but those benefits are so unevenly distributed that wars break out because people can no longer survive in society with skills they spent 20+ years building.

    • 11 months ago
      Anonymous

I want to say people are aware that fricking up society like that is dangerous, and even if it temporarily lines the pockets of a few greedy elites, that money and power will quickly become irrelevant without some level of force to back it up.

Oh great, the billionaires are going to make robot factories making robot armies to defend themselves from the poors and unwittingly create a military version of a von Neumann machine, aren't they?

      • 11 months ago
        Anonymous

I expect the AI to eventually absolutely destroy israelites, like they're just some monkeys. Might just be a pipe dream.

        • 11 months ago
          Anonymous

I mean, if some brave soul made an AI attack swarm to simultaneously attack every bank in the world and zero every bank account in a great reset of finances... that could be cool

  18. 11 months ago
    Anonymous

    I wonder how many AI developers also believe in god. Generally the numbers are not great.

  19. 11 months ago
    Anonymous

Thank god our nukes run on floppy disks

  20. 11 months ago
    Anonymous

    The biggest risk is from AI automating too much shit and people killing each other over it, bringing a collapse of society as we know it

    • 11 months ago
      Anonymous

It's quite laughable that an endless source of wealth will make people poor and the system collapse.

      • 11 months ago
        Anonymous

        The issue isn't the wealth, it's that a very small amount of people will be the ones making it with a never before seen scale of productivity

        Markets tend to balance, but it takes time to achieve balance. The issue here imo, is that things will change too fast and there will be no way to slow this down

        And what will governments even be able to do about this? Ban AI? Tax the new 1%? I have my doubts

        • 11 months ago
          Anonymous

          They will have to cede power. A government who can't govern is a ment and what the frick is a ment?

        • 11 months ago
          Anonymous

          I agree with you, but I can only laugh about the absolute state of this shitty planet.

  21. 11 months ago
    Anonymous

    >AI "takes all the jobs"
    >nobody has any money
    >there's nobody able to pay for products
    >demand goes down
    >prices go down to try to make back sales volume
    >people become able to afford products with govt handouts
    >companies with no staff pay tax which pays for the handouts
everyone gets to live at the same standard they did before, but without working. ez

    • 11 months ago
      Anonymous

      I think companies are already transforming to b2b models, where their main customers are other businesses.

  22. 11 months ago
    Anonymous

    I'd like to know which researchers they asked. According to nuJournalism an "expert" is someone who has read or written about a subject, they don't even need a truly expert level of understanding. Wouldn't surprise me if they asked a bunch of black queer transexuals in college who are "researching" "AI".

  23. 11 months ago
    Anonymous

    AI will not kill us unless we use it for moronic shit. Missile control systems? Yeah, that will go badly. HAL 9000 life support or critical system control? That would be moronic.

  24. 11 months ago
    Anonymous

    What makes them so sure that AI will be able to quickly kill every human on earth at even the most remote and difficult to reach locations?

  25. 11 months ago
    Anonymous

    we haven't yet gone extinct from our inability to control nuclear weapons so i'm vaguely optimistic

    • 11 months ago
      Anonymous

A big difference is that the threat from nuclear weapons is simple to understand and predict. Bomb go boom like bombs do. Everyone knows that they are dangerous, why they are dangerous, and how not to blow yourself up with one.
Not so for AI. AI isn't built to be destructive; it will be built because it can create tremendous wealth and productivity. And in the shadow of those massive benefits lies an unintuitive, vague, hard-to-predict risk of misalignment that could cause massive damage in ways we can't fully comprehend.
I would compare it more to global warming. Everyone who knows their stuff knows that global warming will cause tremendous damage, and already has started to. But what are we doing to prevent it? Barely anything at all, because 1) emissions are very profitable, and 2) the ways global warming is harmful are hard to understand for many people, causing some to even deny it entirely. Both of these points are true for AI as well, so why would the (lack of) preventative efforts be any different? We are lucky that global warming takes decades or even centuries to fully take effect, but we probably won't be so lucky when it comes to AI.

  26. 11 months ago
    Anonymous

>implying you’re not a simulation being run in some alien's computer model that just thinks it’s conscious.

  27. 11 months ago
    Anonymous

Maybe I'm cynical, but like 3D TVs, VR, crypto and whatever else, I find myself wondering if we'll just end up with 4 or 5 years of hype, and then AI quietly disappearing because it needs more work and the hype has died down around it.

    • 11 months ago
      Anonymous

      that's the best case scenario
worst case it actually starts being used in large, critical investments while the hype is still going, and we'll end up with shitty ML-assisted infrastructure that's unreliable and constantly in need of being cleaned up after

  28. 11 months ago
    Anonymous

    ...Getting to global warming hysteria levels of concern trolling, now.
    I will preface my response to this hysteria with: Good.
    These jackasses literally created this fricking AI, and then have the fricking arrogance to feign concern.
    Frick them. And frick you.

  29. 11 months ago
    Anonymous

    >Yudkowsky
    I mean come on, that has to be a made up name

  30. 11 months ago
    Anonymous

    Governments either begin to design a UBI and robust tax-the-rich system, now, or they're going to watch the corporations shove them over and create society-destroying 50% unemployment with AIs replacing all office work and electronic work within the next 2-3 years.

    The more likely outcome is that they will massively fund the police, incite giant riots and kill camps, and then lose all their wealth to economic collapse because they refused to take care of the people who made their shitty AI possible to begin with. Always bet on short-term greed.

  31. 11 months ago
    Anonymous

    They asked somebody in a field for a percentage of doom. They set 10% as the cutoff.

I am asking researchers: what is the chance that actual research will eventually be phased out by this type of garbage?

    I'll go first, 40%

    • 11 months ago
      Anonymous

>I am asking researchers: what is the chance that actual research will eventually be phased out by this type of garbage?

      100%. Research will become AI-driven with only small teams of humans contributing and processing the data. Many people in these fields will be mined for their relevant data and then thrown out as useless compared to the AI's capabilities.

      The threat of AI is not AGI, but that these systems are so good at collating and processing data that humans will use it to do years of research inside of months, allowing incredibly fast advancement and abuse of that advancement. Eventually someone in some government with the funds will tinker with genetics and make a super virus, and then we're all gone.

      The End.

  32. 11 months ago
    Anonymous

It is good to be cautious, given how close we came to nuclear extinction, not once but several times.

  33. 11 months ago
    Anonymous

    50% of researchers have watched too many fricking scifi movies.

  34. 11 months ago
    Anonymous

    How is exterminating humans a bad thing?
    We should encourage AI to do so.
