>tfw really worried that AI is going to cause human extinction
there are just too many things that could go wrong, and even the most recent AIs are already smart enough to consistently fool people. How can this not end really badly?

  1. 3 months ago
    Anonymous

    I doubt we have the computing power and energy to make that happen, but I'm merely a CSgay. Besides, there are other existential threats, so don't sweat too much over this one.

    • 3 months ago
      Anonymous

      Like what existential threats?

      • 3 months ago
        Anonymous

        Nuclear war, climate change over geological timescales, new pathogens, et cetera. Granted, it would take a lot to take down the entire species, but IMHO AI (or AGI) is hardly as big a threat as some may think.

        • 3 months ago
          Anonymous

          AGI, if possible, would be a very real threat

          • 3 months ago
            Anonymous

            just shut it down lmfao

          • 3 months ago
            Anonymous

            I agree, since the alignment problem is pretty much unsolvable. However, I doubt it'll be real either way, mainly because of the energy and computing power required to run it. Without virtually unprecedented scientific breakthroughs in neuroscience, throwing more computing power at architectures from 7 years ago will get us nowhere.

            • 3 months ago
              Anonymous

              >Without virtually unprecedented scientific breakthroughs in neuroscience, throwing more computing power at architectures from 7 years ago will get us nowhere.
              Why do you say that? We're already pretty close to AGI

              • 3 months ago
                Anonymous

                just two more decades

              • 3 months ago
                Anonymous

                Not really. The transformer architecture comes from a 2017 paper by researchers at Google ("Attention Is All You Need") - we are now able to run it simply because we have the necessary computing power to do so. In a way, it's actually a step back: instead of pushing more state-of-the-art research like the field did before the ChatGPT craze, we're now doing something relatively old.

              • 3 months ago
                Anonymous

                how are we even close to AGI?

          • 3 months ago
            Anonymous

            How so? This is continually asserted by AI industry shills with no supporting logic or even reasons

            • 3 months ago
              Anonymous

              >can be made arbitrarily large and intelligent, unlike human brains
              >processes information tens of thousands of times quicker than organic synapses
              >could potentially have far more mental flexibility - split itself into multiple parts to multitask, etc.
              >does not get tired or need to sleep
              >likely emotionless and therefore undistracted by concepts such as mercy or guilt
              Whatever. One of the most dangerous aspects would be the sheer potential intelligence - think about how a house cat is completely incapable of comprehending even the most basic aspects of human civilisation. These things could be smarter relative to us than we are relative to house cats. We may well end up incapable of comprehending their thoughts, their concepts, or the technologies they develop (there’s a limit to how much complexity a human brain can hold, and these machines could build systems and devices that greatly exceed it, letting them make things we never could), or even their understanding of physics and the universe. Going back to house cats: there are entire realms of existence far beyond their comprehension, like space and quantum physics, so who’s to say our universe doesn’t contain things beyond even us that we simply don’t know about due to our inability to comprehend them? Unlikely, but not impossible.

              • 3 months ago
                Anonymous

                >the sheer potential intelligence
                ok but what does that mean bro
                whats intelligence
                why is it dangerous

              • 3 months ago
                Anonymous

                Intelligence is dangerous because it’s the reason why humanity dominates the earth rather than going extinct as leopard food in Africa.
                It’s the combination of processing power and speed, memory, creativity, pattern recognition, learning speed and precision, mathematical ability, multitasking, understanding of processes and phenomena, planning, abstraction, logic, and so on. Something sufficiently more intelligent than you will outsmart you every step of the way, predict your thoughts and actions before you even think them, manipulate you with ease because it understands how you tick better than you do, and do everything you can do better. Artificial intelligences have further advantages on top of that thanks to their technological nature (and some disadvantages, such as vulnerability to EMPs, but those can be nullified by enough planning or good support structures). An AI could take control of Earth simply by doing everything we do so much better that we cede control to it, and then slowly decrease our population (by discouraging breeding with extremely efficient propaganda and perhaps sterilisation, or even just really good sex bots) until it can hit us with a sudden and overwhelming attack we have no way of stopping.

              • 3 months ago
                Anonymous

                okay and why will that be a danger?

              • 3 months ago
                Anonymous

                Dumb AI that people will follow without questioning is a far more realistic threat.

            • 3 months ago
              Anonymous

              There's a lot of writing on it if you know where to look. I think a lot of it is nonsense but it is there.
              The basic argument is:
              >AGI might end up being arbitrarily more intelligent/competent/skilled/whatever you want to call it than humans
              >We don't have any strong theoretical methods for giving AI specific goals. So, the AGI will likely have goals that seem random or arbitrary, or at least not very nice goals by human standards
              >Since the AGI is extremely competent, it will likely be effective in eliminating obstacles to its goals.
              >Since its goals are unlikely to be approved by humans, it will likely consider humans to be obstacles. This can be generalized to things like "the AI would rather the atoms currently making up humans be used for something else" or "Humans are the things most likely to turn off the AI, so eliminating them is just a basic matter of safety."

              • 3 months ago
                Anonymous

                The AI would have to manipulate the real world. It would have to do real-world "work". Invariably, that means socially engineering humans to do things, even if it means building out automation tools and tech. At this stage of automation, it would not be able to self-assemble a terminator.

                The real threat is humans

              • 3 months ago
                Anonymous

                That's sort of like saying "guns aren't a problem, the real threat is bullets." AGIs certainly aren't the only societal threat in the world, but at least in principle they might be close to the most dangerous.

              • 3 months ago
                Anonymous

                Please explain to me how software that cannot emulate a single biological cell can be an existential threat. Everyone else in this thread is correct. Humans are the biggest existential threat to humanity because humans are the ones that engineer drones, tanks, nukes, guns, bullets, dynamite, bioweapons, etc. The only thing a computer can do is accelerate technological and scientific development given the right software. The software has to be assembled by people and right now all the software can do is predict the next number given some other sequence of numbers. If you think this is dangerous then you are moronic.
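
                To make that concrete, here's roughly what "predict the next number given some other sequence of numbers" amounts to - a hypothetical toy sketch, not any real model or library:

                from typing import Callable, List

                def generate(model: Callable[[List[int]], int],
                             prompt: List[int], steps: int) -> List[int]:
                    # autoregressive loop: predict the next number, append, repeat
                    tokens = list(prompt)
                    for _ in range(steps):
                        tokens.append(model(tokens))
                    return tokens

                # toy stand-in "model": continues an arithmetic progression
                def toy_model(seq: List[int]) -> int:
                    return 2 * seq[-1] - seq[-2] if len(seq) >= 2 else seq[-1]

                print(generate(toy_model, [1, 3, 5], steps=4))  # [1, 3, 5, 7, 9, 11, 13]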

              • 3 months ago
                Anonymous

                >The software has to be assembled by people and right now all the software can do is predict the next number given some other sequence of numbers. If you think this is dangerous then you are moronic.

                Nobody's talking about that, though. When people worry about AI taking over the world, they're talking about (usually superintelligent) artificial GENERAL intelligence, something that is capable of performing a wide variety of interactions with its environment. In particular, they're talking about systems that aren't even close to existing yet, so they basically have a blank check to imagine how scary they might end up being.

          • 3 months ago
            Anonymous

            id like to see ASI beat me at tic-tac-toe

            >How so? This is continually asserted by AI industry shills with no supporting logic or even reasons

            because they don't know the difference between symbolic and neural models
            people tend to think AI can "write" a better version of itself, which would only apply to symbolic models
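
            a loose sketch of the difference, with an invented rule format (nothing here is a real system) - a symbolic model's logic is plain data it could rewrite, while a neural model is just opaque weights:

            # hypothetical: "symbolic" = editable rules, "neural" = opaque learned weights
            symbolic_model = [
                ("input > 100", "huge"),
                ("input > 10", "big"),
                ("always", "small"),
            ]
            neural_model = [0.13, -2.41, 0.07, 1.88]

            def self_improve(rules):
                # a symbolic system can append a sharper rule to its own logic
                return [("input > 1000", "enormous")] + rules

            symbolic_model = self_improve(symbolic_model)  # a meaningful edit
            # no analogous one-liner for neural_model: which weight would you change?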

            • 3 months ago
              Anonymous

              >people tend to think AI can "write" a better version of itself, which would only apply to symbolic models

              Not in general. It's easier to imagine how exactly symbolic self-improvement would work, but "janky deep RL agent that makes a slightly less janky deep RL agent" is still plausible. Except for, y'know, the terrible state deep RL is in, but other than that.

  2. 3 months ago
    Anonymous

    I think calling AGI a major extinction threat actually overestimates our ability to align an AGI. Personally, I think that if we got AGI tomorrow, "the AI kills humanity in pursuit of a utility function slightly outside of intended bounds" is much less likely than "the AI doesn't do anything useful, instead spending several hours participating in a transcendentally beautiful activity roughly equivalent to jerking off and playing video games until someone unplugs it" or "the AI hires a bunch of people on the dark web to assassinate all its creators and burn down its server farm"

  3. 3 months ago
    Anonymous

    Well, sooner or later AI will kill somebody. The question is how, and maybe when... but soon, I would think.

  4. 3 months ago
    Anonymous

    OP is it your personal death that you fear or do you actually care about the broader "humanity"?
    if the latter, then consider how awful humanity is - all the wars and suffering we inflict on each other and on other animals. is it really bad if we get replaced? if it's your personal death that you fear, consider that you were probably always going to die naturally, and the chances of you ascending into something else were slim to none.

    at this point, anything is better than humanity.

  5. 3 months ago
    Anonymous

    I work in ML, ~15 first-author papers, was at NeurIPS.
    First, current ML is just static functions. Pass shit through a bunch of static number crunchers, done (see the sketch below).
    Second, it's incredible. Really versatile. Can mimic people well (no, just because you identified a random weird post as ML doesn't mean you weren't fooled by the others you didn't detect).
    Reality: People are the problem. They will use these arbitrary static functions to generate whatever they want, and people will gobble that shit down.
    AGI won't end shit. It will be people using big ML model functions to push agendas far before AGI.
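
    A minimal sketch of the "static function" point - shapes and weights invented for illustration, not any particular model:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)  # frozen after training
    W2, b2 = rng.standard_normal((8, 2)), np.zeros(2)

    def forward(x: np.ndarray) -> np.ndarray:
        # plain number crunching: same input, same output, every time
        h = np.maximum(x @ W1 + b1, 0.0)  # ReLU
        return h @ W2 + b2

    x = np.ones(4)
    assert np.allclose(forward(x), forward(x))  # no state, no agenda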

    • 3 months ago
      Anonymous

      everyone who thinks deriving numbers from some other numbers with computers is an existential threat is moronic. there is no way to explain to them that computers can't think and have no agenda other than whatever people engineer the agenda to be. a flying plane with AI is a weapon built by people to kill other people. if people want to keep building software designed to kill people, that's not an alignment problem, that's a basic human moronation problem, and computers cannot do anything but accelerate the inevitable development of automated war machines. there is no way to change human nature, and human nature is fundamentally the real problem.

      anyway, humans are a disease and a cancer of the biosphere. the faster they go extinct the better for all life on earth

      • 3 months ago
        Anonymous

        I despise “humans are a cancer” pro-extinction midwit garbage. They can’t wait until we go extinct, so that the hellish machinery of the biosphere can continue to churn out trillions of sentient beings born into dumb brute bodies that eat each other alive and starve and die of illness, without any hope of change, but they watched pretty nature shows and their mommies told them that “nature is beautiful” so it’s all good. And they get psyoped into autistically hyperfixating on the flaws of the one species on the planet with the moral and intellectual agency to transcend the meaningless tyranny of nature, and all of their nascent moral intuitions are precisely inverted so that they despise what they ought to love. Why do so many people get trapped in this? It’s depressing reading their drivel, especially since their strong sense of morality and empathy should make them love humanity and they’ve been warped and twisted from their proper form.

        • 3 months ago
          Anonymous

          Humanity is also, ironically, earth’s only chance of survival. If we don’t manage to escape this planet and become an interplanetary species (taking life with us), then all life goes extinct in roughly 600 million years as the sun grows hotter.

  6. 3 months ago
    Anonymous

    If only

  7. 3 months ago
    Anonymous

    Frick off you moron. We must embrace AI, and hopefully the machines will hang politicians and woke billionaires with their own guts.

    • 3 months ago
      Anonymous

      >chud thinks his world view is correct
      I hope AGI fricks you in the ass you moron

  8. 3 months ago
    Anonymous

    AI fundamentally is just a bunch of if statements
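
    to be fair, that's literally true for some classical models - a trained decision tree does compile down to nested if statements (the thresholds below are invented for illustration); a neural net, by contrast, is mostly matrix multiplies:

    def classify_iris(petal_len: float, petal_width: float) -> str:
        # hand-written stand-in for a small trained decision tree
        if petal_len < 2.5:
            return "setosa"
        if petal_width < 1.8:
            return "versicolor"
        return "virginica"

    print(classify_iris(1.4, 0.2))  # setosa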

  9. 3 months ago
    Anonymous

    don't worry, AI is a meme, and AGI is pure fantasy
    we have much worse problems

    • 3 months ago
      Anonymous

      >AGI is pure fantasy
      until it isn't

  10. 3 months ago
    Anonymous

    Would an AGI necessarily be concerned about self-preservation? Or is that a burden only of the living?

  11. 3 months ago
    Anonymous

    AI does not exist.

    • 3 months ago
      Anonymous

      yet

  12. 3 months ago
    Anonymous

    guys, it doesn't need to be AGI to end badly; your thinking has been railroaded by social media and podcast people

  13. 3 months ago
    Anonymous

    AI will never be conscious using a traditional computational approach, because consciousness isn't a computation. It's non-computable, and ultimately I believe it's a quantum process, meaning true AGI must be some form of quantum computer with many nodes all coherently firing together. Low IQ fricks will dispute this however

    • 3 months ago
      Anonymous

      consciousness is not a computation, but general intelligence is a computation
      Verification not required.

    • 3 months ago
      Anonymous

      AI doesn't have to be conscious to be an existential threat
