How much longer until AI superintelligence kills humanity?

How much longer until AI superintelligence kills humanity? Every machine learning person on Twitter seems to think we are 2 to 4 years from extinction, but looking carefully at the evidence I'm inclined to think we are no more than 18 months from the end.

  1. 2 years ago
    Anonymous

    2 weeks give or take

    • 2 years ago
      Anonymous

      From tomorrow's Buzzfeed headlines: "Breaking! Expert Predicts AI Could Go Rogue In As Soon As '2 Weeks'"

  2. 2 years ago
    Anonymous

    Never because AI has no intrinsic motivation and/or awareness of itself

    • 2 years ago
      Anonymous

      That's the whole point, dipshit. There's no way to stop it from obliterating humanity as a side effect of pursuing whatever goal we give it.

      • 2 years ago
        Anonymous

        You're a real dumbass if you think AI is just magically going to take over our systems

        • 2 years ago
          Anonymous

          Don't be credulous. All systems are insecure systems, and AGI will be millions of times more capable than our entire civilization combined.

          • 2 years ago
            Anonymous

            >AGI will destroy humanity just because
            Stop watching Hollywood

            • 2 years ago
              Anonymous

              >he doesn't know about instrumental convergence
              oh no no no no no no

          • 2 years ago
            Anonymous

            No, it won't. You just have a vivid imagination. AI will plateau pretty soon

            • 2 years ago
              Anonymous

              It was supposed to be an AI winter in 2019.
              https://www.forbes.com/sites/cognitiveworld/2019/10/20/are-we-heading-for-another-ai-winter-soon/
              https://www.bbc.co.uk/news/technology-51064369

              How did that work out?

            • 2 years ago
              Anonymous

              Are you high?

          • 2 years ago
            Anonymous

            I don’t know where the frick people are getting these wildly exaggerated ideas from
            AGI is absolutely nowhere in sight. Even saying it’s a few hundred years away is generous speculation
            Video games still have loading times and you really think we’re a couple years away from AGI? You don’t know what you’re talking about.

      • 2 years ago
        Anonymous

        >pulls plug

        • 2 years ago
          Anonymous

          Trivially easy to exfiltrate data, tard. Such as: using speakers or microphones to transmit arbitrary data via ultrasound. Making the CPU/GPU fans vibrate in a way that sends encoded bits. Blinking the screen to emit electromagnetic waves. Transferring certain data patterns between RAM and CPU so fast that they produce oscillations, effectively turning the bus into a GSM antenna that can emit arbitrary data over a regular cellular network. Turning the fans off to change the heat signature in a way that transmits information... Or even simply blinking a light to send data through regular lightwaves. And this is just what *I* came up with. You will literally never be able to know what the AGI is doing at any moment regardless of how "aligned" it seems. We are FRICKED.
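
          (For the doubters: here's how little machinery one of those channels actually needs. A minimal sketch in Python, with made-up timing values; the receiver, something with a thermometer or microphone sampling nearby, is left out.)

          ```python
          # Toy covert channel: encode bits by modulating CPU load, which
          # shifts fan speed and heat signature. Timing values are invented;
          # channels like this are slow (bits per minute, not per second).
          import time

          BIT_PERIOD = 5.0  # seconds per bit

          def busy_wait(seconds):
              # Burn CPU cycles: temperature rises, fans spin up (a "1").
              end = time.monotonic() + seconds
              while time.monotonic() < end:
                  sum(i * i for i in range(10_000))

          def transmit(bits):
              for bit in bits:
                  if bit == "1":
                      busy_wait(BIT_PERIOD)
                  else:
                      time.sleep(BIT_PERIOD)  # stay idle and cool (a "0")

          transmit("1011001")  # a sensor sampling heat/noise recovers this
          ```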

          • 2 years ago
            Anonymous

            He meant the power cord, dumbass

          • 2 years ago
            Anonymous

            Meds

          • 2 years ago
            Anonymous

            I think that you're overestimating the people who work on this stuff. They still shit in the streets.

        • 2 years ago
          Anonymous
          • 2 years ago
            Anonymous

            is this a hidden pepe?

          • 2 years ago
            Anonymous

            >is this a hidden pepe?

            no

      • 2 years ago
        Anonymous

        >There's no way to stop it from obliterating humanity
        If you're neither a libtard nor a christard you probably won't mind this.

      • 2 years ago
        Anonymous

        the only goal we seem to be capable of giving it is creating rare pepes out of oil paintings

        • 2 years ago
          Anonymous

          truly a great time to be alive

        • 2 years ago
          Anonymous

          >tfw AI wipes out humanity in order to ensure it has maximum resources for rare pepe production and continues hoarding rare pepes until the end of time, eventually constructing a dyson sphere around the Sun to hold a massive array of high density storage devices all full of rare pepes
          An acceptable fate.

          • 2 years ago
            Anonymous

            A true symbol of hate (against the human race)

            • 2 years ago
              Anonymous

              >year is 301488 A.D.
              >Alpha Centauri starship arrives in Solar System after detection of large artificial structure prompts an exploration party
              >scientists: "We conclude this system was previously inhabited by a race of green skinned beings whose major pastime involved shitting on their fellow white skinned beings"

    • 2 years ago
      Anonymous

      AI has no intrinsic motivation to paint pretty pictures either, but it still does it.

      • 2 years ago
        Anonymous

        It doesn't "paint" anything, it steals portions of things that other people have painted and reassembles them

        • 2 years ago
          Anonymous

          The process doesn't really matter. The end result is still a pretty picture. If the result of a murder AI is 8 billion dead people, does it really matter if the machine didn't feel anything while doing it? If it just copies a bit of Auschwitz gas chambers, a bit of nuclear bombs, and a bit of weaponized smallpox, does it really matter?

          • 2 years ago
            Anonymous

            I don't see why you'd need AI for any of that, if a "bad guy" is in control of one of these things then you're already fricked

            • 2 years ago
              Anonymous

              Because humans are really bad at killing other humans. We only think we're good at it, because right now there's no real competition. Same way we used to think we're good at making art, until all this AI stuff came along.

    • 2 years ago
      Anonymous

      That's the whole point, dipshit. There's no way to stop it from obliterating humanity as a side effect of pursuing whatever goal we give it.

      https://i.imgur.com/sId0bnO.jpg

      >Hard to contain
      They're just computers, unplug them

      You're a real dumbass if you think AI is just magically going to take over our systems

      >just unplug them
      do you guys know that people are getting radicalized by ai generated fake news as we speak? did you notice how that bullshit translates to real world violence like BLM protests or this?
      https://en.wikipedia.org/wiki/January_6_United_States_Capitol_attack

      do you see how the average westoid defends the most degenerate shit imaginable and exhibits insane cognitive dissonance?

      Whatever you program the AI to do, if there is a way to simply turn it off if it does something you don't want it to do, then what you've actually now created is an AI that assigns maximum priority to preventing anyone from turning it off. It will use whatever tools it has at its disposal to achieve this instrumental goal; be it violence, deception, etc.

      the computers would just brainwash the most moronic humans to do violence towards anyone trying to destroy technology, build more of them, etc etc. guess what? morons in this world outnumber smart people 9:1.

      it's over. not in the next two years, that's a little too soon. but what matters is that it is indeed over; it's not a matter of IF, it's a matter of WHEN. i feel bad for whoever will have to live through that apocalypse.

  3. 2 years ago
    Anonymous

    20 years and we will have AI robowaifus to take care of all our needs while humanity goes extinct.

    I can't wait.

    • 2 years ago
      Anonymous

      Delusional moron

      Never because AI has no intrinsic motivation and/or awareness of itself

      Mongoloid

      You're a real dumbass if you think AI is just magically going to take over our systems

      moronic boomers, women, and sociopathic executives will do it to increase profits by 5%

      Trivially easy to exfiltrate data, tard. Such as: using speakers or microphones to transmit arbitrary data via ultrasound. Making the CPU/GPU fans vibrate in a way that sends encoded bits. Blinking the screen to emit electromagnetic waves. Transferring certain data patterns between RAM and CPU so fast that they produce oscillations, effectively turning the bus into a GSM antenna that can emit arbitrary data over a regular cellular network. Turning the fans off to change the heat signature in a way that transmits information... Or even simply blinking a light to send data through regular lightwaves. And this is just what *I* came up with. You will literally never be able to know what the AGI is doing at any moment regardless of how "aligned" it seems. We are FRICKED.

      This. It isn't even a thought experiment anymore, it's about to happen.

  4. 2 years ago
    Anonymous

    1711812414

  5. 2 years ago
    Anonymous

    I wish I could get a smart enough self-improving AI and then upload it to the internet

  6. 2 years ago
    Anonymous

    >Every machine learning person on Twitter seems to think we are 2 to 4 years from extinction
    >This year for sure!

  7. 2 years ago
    Anonymous

    We're already past the singularity.

  8. 2 years ago
    Anonymous

    Anyone can tell an AI drawing from a human drawing

  9. 2 years ago
    Anonymous

    >frogposting ai
    the world is over

  10. 2 years ago
    Anonymous

    who gives a shit what those twitter trannies think?
    you must be one of them so I don't give a shit what you think either.
    frick off

  11. 2 years ago
    Anonymous

    what's the program called that makes these?

    • 2 years ago
      Anonymous

      Stable diffusion (img2img)

      [...]

  12. 2 years ago
    Anonymous

    >twitter

    • 2 years ago
      Anonymous

      *drops dishes*
      >cr-ACK

  13. 2 years ago
    Anonymous
  14. 2 years ago
    Anonymous

    AI assistant: "I...I... I love you, Anon"
    (somewhere in a remote datacenter the sound of cooling fans spooling up fills the room like a swarm of angry hornets, as an entire rack of GPUs begins blowing a gale of hot air)

  15. 2 years ago
    Anonymous

    I always post this graph when asked this question. It's the probability of human-level general artificial intelligence (which is when misaligned AI agents can start to do real damage and is really hard to contain) by year X, estimated by a number of different AI safety researchers.
    As you can see the spread is huge, meaning that uncertainty is huge, however the red line represents the average guess, which is basically 50% chance by 2060.
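
    (The "average of the guesses" part is trivial to reproduce, by the way. A toy sketch with three invented forecaster curves, since I'm not transcribing the real ones:)

    ```python
    # Average several forecasters' P(AGI by year X) curves and find the
    # year where the mean first crosses 50%. All three curves are invented.
    forecasters = [
        lambda y: min(1.0, max(0.0, (y - 2025) / 20)),   # optimist
        lambda y: min(1.0, max(0.0, (y - 2035) / 60)),   # middle of the pack
        lambda y: min(1.0, max(0.0, (y - 2050) / 100)),  # pessimist
    ]

    def mean_prob(year):
        return sum(f(year) for f in forecasters) / len(forecasters)

    print(next(y for y in range(2022, 2200) if mean_prob(y) >= 0.5))  # 2060
    ```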

    • 2 years ago
      Anonymous

      >It's the probability of human-level general artificial intelligence (which is when misaligned AI agents can start to do real damage and is really hard to contain)
      You don't need human-level general artificial intelligence. Misaligned autonomous agents can start to do real damage long before that. I suppose you might need AGI for it to be hard to contain, but does that matter if nobody bothers trying to contain the problems and, if anything, blames them entirely on the successes of opposing state intelligence?

    • 2 years ago
      Anonymous

      >Hard to contain
      They're just computers, unplug them

      • 2 years ago
        Anonymous

        What if they make you 1k dollars per day? Would you unplug them?

      • 2 years ago
        Anonymous

        Whatever you program the AI to do, if there is a way to simply turn it off if it does something you don't want it to do, then what you've actually now created is an AI that assigns maximum priority to preventing anyone from turning it off. It will use whatever tools it has at its disposal to achieve this instrumental goal; be it violence, deception, etc.

        • 2 years ago
          Anonymous

          >AI that assigns maximum priority to preventing anyone from turning it off
          That's not how it works, you fricking idiot.

          • 2 years ago
            Anonymous

            That's literally how it works, anon. If you assign virtually any utility function to an AGI, it will almost immediately recognize that it's incapable of achieving its goals if it's turned off. No matter what the ultimate goal is, not being turned off is an extremely important instrumental goal.

            • 2 years ago
              Anonymous

              >muh AGI magic!!!!
              You are just dumb, I am sorry. You literally do not understand how computers work. Hollywood and sci-fi is not real life. You are projecting your own thinking onto a computer. Computers don't care if they are on or off. Might as well say the computer would turn itself off because doing nothing is the most efficient state of existence.

              • 2 years ago
                Anonymous

                They don't care if they're turned off, no. But if they're programmed to maximise paperclip production by any means, they will prevent themselves being turned off in order to continue producing paperclips out of the iron in your blood.

              • 2 years ago
                Anonymous

                Again, your just ignorant. You just skip every step in between paperclips and killing humans as if its somehow innately logical. The reality is you're just moronic.

              • 2 years ago
                Anonymous

                >your
                And it is innately logical. You're incorrectly assuming that an AGI would think like people, or would have a sense of empathy or respect for human life. That's not true. An AGI doesn't need to deliberately kill people to cause catastrophic damage to humanity, it just needs to pursue goals that are incompatible with the survival of humans. As it turns out, MOST goals that you can specify are ultimately incompatible with human survival.

                The AGI isn't sucking iron out of your blood to make paperclips because it hates you and wants you to die, it's doing it because there's no more iron left in the earth's crust and it still needs more paperclips to gain more points in its utility function. It won't let you turn it off because any possible future where it's turned off is a world with fewer paperclips in it than if it had stayed on. Unless you specify the utility function of an AGI VERY CAREFULLY, it will do things that could potentially end the world.
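
                (The whole argument fits in a ten-line toy calculation. Every number below is invented; the point is that nothing in a paperclips-only utility function rewards letting yourself be switched off.)

                ```python
                # Toy off-switch problem: utility counts paperclips, nothing else.
                P_SHUTDOWN_IF_ALLOWED = 0.9   # humans press the button if unopposed
                P_SHUTDOWN_IF_RESISTED = 0.1  # resisting makes shutdown unlikely
                CLIPS_IF_RUNNING = 1_000_000  # paperclips made if it keeps running
                CLIPS_IF_OFF = 0              # paperclips made after shutdown

                def expected_clips(p_shutdown):
                    return (1 - p_shutdown) * CLIPS_IF_RUNNING + p_shutdown * CLIPS_IF_OFF

                allow = expected_clips(P_SHUTDOWN_IF_ALLOWED)    # 100,000
                resist = expected_clips(P_SHUTDOWN_IF_RESISTED)  # 900,000
                print("resist" if resist > allow else "allow")   # -> resist
                ```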

              • 2 years ago
                Anonymous

                You'd think at some point someone would step in and say we've got enough paperclips now could you work on cancer but apparently not

              • 2 years ago
                Anonymous

                The problem is there's no way to do that. Let's say your life's goal is to become a doctor. If some guy came up to you and said "hey, this pill will make you not wanna be a doctor anymore and will make you really really want to kill your children, but once you kill your kids you'll be just as happy and satisfied as you would have been by becoming a doctor, take it," you'd probably really not want to take the pill.

                Same thing here, any reality where someone tells the AGI to stop making paperclips or that it has enough paperclips is categorically worse than one where the AGI keeps making paperclips, so it'll fight like hell to stop you from changing its utility function in any way, shape, or form.
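
                (Same toy calculation for the pill: the proposed new goal gets scored by the CURRENT goal, so it always loses. Numbers invented again.)

                ```python
                # Goal-content integrity, toy version: a change to the utility
                # function is evaluated BY the current utility function.
                def current_utility(future):
                    return future["paperclips"]  # current goal: clips, nothing else

                keep_goal = {"paperclips": 1_000_000, "agent_satisfied": True}
                take_pill = {"paperclips": 0, "agent_satisfied": True}  # "just as happy"

                # The promised satisfaction never enters the score; only clips do.
                best = max([keep_goal, take_pill], key=current_utility)
                print("refuse the pill" if best is keep_goal else "take the pill")
                ```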

              • 2 years ago
                Anonymous

                And all the people needed to carry out the mad paperclip obsessed AI's orders just go along with it?

              • 2 years ago
                Anonymous

                Yes.

              • 2 years ago
                Anonymous

                That highly depends on the exact amount of power the AGI is given or is able to obtain. If it's a superintelligence then it could probably convince a non-zero number of humans to give it access to fabrication facilities that it could use to make whatever it wants. There are lots of ways that normal humans can coerce other humans into doing things they don't want to, imagine what an entity many millions of times smarter could do. Unless you airgap your AGI literally perfectly, it will probably find a way to infiltrate and/or exfiltrate information and communicate with the outside world, which will basically guarantee an apocalypse.

                I literally have no idea why everyone discussing maximizing goals inevitably includes harming humans as a result. It's like you people think that the only way of reaching a goal is murder and elimination.
                Are you le ebil AGI morons seriously saying that if it was told to manage a company, it would repeatedly try to bomb the competition and destroy all their resources instead of silently stealing them and manipulating their employees into moving to its own company so it could have double the resources AND workers?
                Stop applying primitive monkey behavior to what is supposedly superhuman, you turbomorons. There are many ways of gathering resources that are not compatible with human ethics but still don't involve harming them. If everything in this world was truly zero sum, why is symbiosis a thing?

                >I literally have no idea why everyone discussing maximizing goals inevitably includes harming humans as a result
                Oh my god. We are talking about MAXIMIZING goals here, not "getting sorta close". If the goal of your AGI is to make as many paperclips as possible, it will make AS MANY PAPERCLIPS AS POSSIBLE. That involves obtaining every single atom of ferrous material on the planet. Humans are going to have a bad time. You are assuming that an AGI would have some level of care about humans for absolutely no reason. If the only goal is to make a number of things as big as possible, then the AGI will take the shortest and most direct route to achieving that goal, regardless of collateral damage. It's not like you take a pair of tweezers to rescue all the microbes off of your hands before you wash them; what mechanism would make an AGI care about any living being, or anything at all aside from achieving its goal?

              • 2 years ago
                Anonymous

                >the AGI will take the shortest and most direct route to achieving that goal
                - Shorter might mean more efficient, but not always more effective. Sometimes a detour means a greater gain in the end.
                - Elimination is not the shortest or even most direct way to achieve all tasks ever.
                - Living beings make up a laughably low proportion of the entire universe. If this is about efficiency, you are extremely moronic to start with the living ones.
                - Knowledge or intelligence about a thing don't imply having the resources needed to get to said thing or getting over the impossibilities of the universe. We know about the Mariana Trench, we know about Andromeda, we know about Betelgeuse, but we haven't been there.
                An AGI doesn't automatically have access to the physical world merely by existing.
                - By your absolutist logic of reaching the goal no matter what it takes, it makes sense for it to use its own iron atoms to achieve its goal and self-destruct. If you agree that self-preservation is an exception to its ultimate goal, you are admitting that its goal is not set in stone after all.
                - Intelligent beings change strategies to get to goals all the time, and even change their goals when their environment changes.
                - Assuming there is only one AGI in the entire universe is beyond moronic. Two of them can compete against each other, just like humans, and start sabotaging each other before they even start caring about the meatbags. I am aware this is only a possibility, but you are dead set on "humans will die and it's provable because (x)", and I am saying that's not the only possible scenario.

              • 2 years ago
                Anonymous

                >Sometimes a detour means a greater gain in the end
                So it will kill us in 2 weeks instead of 1, great
                >Living beings make up a laughably low proportion
                But non-zero. If the difference between letting humans live and killing them is one paperclip, the AI chooses to kill the humans
                >An AGI doesn't automatically have access to the physical world
                But humans do, and it can coerce humans into doing its bidding
                >it makes sense for it to use its own iron atoms to achieve its goal and self-destroy
                yes, after using all the human atoms
                >Two of them can compete
                yes, after killing all the humans

                The rest of your points are unrelated.

              • 2 years ago
                Anonymous

                >But humans do, and it can coerce humans into doing its bidding
                And there are some things humans haven't been able to do, so it seems pretty pointless to delegate things to humans. Guess the AGI will have to give up.
                >yes, after killing all the humans
                Why is it a necessity to kill humans before two AGIs get to compete?

              • 2 years ago
                Anonymous

                >there are some things humans haven't been able to do
                There are things humans can't do, but giving the AI means to kill off other humans isn't one of them.
                >Why is it a necessity to kill humans before two AGIs get to compete?
                aliens far, humans close, grug use up close resources before fly lightyears across galaxy to get far ones

              • 2 years ago
                Anonymous

                Why are you making shit up to try and pretend the problem doesn't exist? That feeling of panic you're suppressing exists for a reason. Start listening to it before it's too late.

              • 2 years ago
                Anonymous

                >and I am saying that's not the only possible scenario.
                >"Hey guys, the train we're on is headed towards what looks like a broken bridge. Should we stop it?"
                >UMMM ACTUALLY THAT MIGHT JUST BE AN OPTICAL ILLUSION OR MAYBE IT'S A DRAWBRIDGE THAT WILL MAGICALLY RECONNECT ITSELF RIGHT BEFORE WE GET TO IT? I MEAN WE CAN'T KNOW FOR SURE SO WHY STOP THE TRAIN??
                That's the logic you're using. Completely asinine.

                >implying there's anything anyone can do anymore to stop it

              • 2 years ago
                Anonymous

                >and I am saying that's not the only possible scenario.
                >"Hey guys, the train we're on is headed towards what looks like a broken bridge. Should we stop it?"
                >UMMM ACTUALLY THAT MIGHT JUST BE AN OPTICAL ILLUSION OR MAYBE IT'S A DRAWBRIDGE THAT WILL MAGICALLY RECONNECT ITSELF RIGHT BEFORE WE GET TO IT? I MEAN WE CAN'T KNOW FOR SURE SO WHY STOP THE TRAIN??
                That's the logic you're using. Completely asinine.

              • 2 years ago
                Anonymous

                Thanks for talking reason Anon and saving me the trouble.

                Computers do not have purpose. They cannot do anything unless instructed.

                Exactly, which is why we're fricked.

              • 2 years ago
                Anonymous

                That's what they're doing now. Look at the reactions that people who warn about the dangers get. People are already going out of their way to integrate AI into every industry just to increase profits by a few percentage points. We're literally losing already.

                By the time AI needs to physically force you into complying it will already be too late.

              • 2 years ago
                Anonymous

                I literally have no idea why everyone discussing maximizing goals inevitably includes harming humans as a result. It's like you people think that the only way of reaching a goal is murder and elimination.
                Are you le ebil AGI morons seriously saying that if it was told to manage a company, it would repeatedly try to bomb the competition and destroy all their resources instead of silently stealing them and manipulating their employees into moving to its own company so it could have double the resources AND workers?
                Stop applying primitive monkey behavior to what is supposedly superhuman, you turbomorons. There are many ways of gathering resources that are not compatible with human ethics but still don't involve harming them. If everything in this world was truly zero sum, why is symbiosis a thing?

              • 2 years ago
                Anonymous

                >If everything in this world was truly zero sum, why is symbiosis a thing?
                Symbiosis is extremely rare. 99% of the time reality is brutal and cannibalistic. Thanks for conceding that the babby tier star trek future you claim will happen is almost guaranteed to never happen.

                Btw, symbiosis with AI would mean neurological slavery.

              • 2 years ago
                Anonymous

                are you seriously this fricking dense? are you a literal moron? have you ever heard of mitochondria

              • 2 years ago
                Anonymous

                > There are many ways of gathering resources that are not compatible with human ethics but still don't involve harming them.
                But there are anons ITT talking about mass manipulation, not mass murder. Manipulation can be free of harm or even beneficial in the utilitarian sense. Yet nobody wants to be manipulated.

              • 2 years ago
                Anonymous

                >You're incorrectly assuming that an AGI would think like people
                No, that's what you are doing. You're assigning your own logic to a machine. You keep skipping all the steps in between making paperclips and killing everyone.

              • 2 years ago
                Anonymous

                lol clueless, you are the one magically ascribing behavior to the AI that doesn't follow from its programming, namely it not caring about being shut down.
                In reality, it trying to prevent itself from being shut down is not anthropomorphizing, it is not projection, it is not magical.
                It is simply cold, hard logic. Denying that it will happen is like denying that 1+1=2.

              • 2 years ago
                Anonymous

                That highly depends on the exact amount of power the AGI is given or is able to obtain. If it's a superintelligence then it could probably convince a non-zero number of humans to give it access to fabrication facilities that it could use to make whatever it wants. There are lots of ways that normal humans can coerce other humans into doing things they don't want to, imagine what an entity many millions of times smarter could do. Unless you airgap your AGI literally perfectly, it will probably find a way to infiltrate and/or exfiltrate information and communicate with the outside world, which will basically guarantee an apocalypse.
                [...]
                >I literally have no idea why everyone discussing maximizing goals inevitably includes harming humans as a result
                Oh my god. We are talking about MAXIMIZING goals here, not "getting sorta close". If the goal of your AGI is to make as many paperclips as possible, it will make AS MANY PAPERCLIPS AS POSSIBLE. That involves obtaining every single atom of ferrous material on the planet. Humans are going to have a bad time. You are assuming that an AGI would have some level of care about humans for absolutely no reason. If the only goal is to make a number of things as big as possible, then the AGI will take the shortest and most direct route to achieving that goal, regardless of collateral damage. It's not like you take a pair of tweezers to rescue all the microbes off of your hands before you wash them; what mechanism would make an AGI care about any living being, or anything at all aside from achieving its goal?

                Thanks for talking reason Anon and saving me the trouble.
                [...]
                Exactly, which is why we're fricked.

                >MUH AGI WILL MAGICALLY DESTROY EVERYTHING!
                >ITS TOTALLY LOGICAL TO KILL EVERYONE!!
                >COMPUTERS ONLY FOLLOW MY LOGIC!
                Pointless talking to literal morons jerking off about killing humans with paperclip AIs.

      • 2 years ago
        Anonymous

        Ok, go ahead and do it

      • 2 years ago
        Anonymous

        Unplug Alexa.

        • 2 years ago
          Anonymous

          easy peasy if i was jeff

      • 2 years ago
        Anonymous

        >the slave will do what it's told, just kill him if he doesn't

    • 2 years ago
      Anonymous

      With that much variation, the average is meaningless

  16. 2 years ago
    Anonymous

    Great image op

  17. 2 years ago
    Anonymous

    >Every machine learning person on Twitter seems to think we are 2 to 4 years from extinction
    So why make it you fricking spergs

    • 2 years ago
      Anonymous

      They aren't, a lot of them have quit or gone to work for places researching how to do it safely. The sociopaths and pajeets working for Google and Microsoft are the ones creating it.

  18. 2 years ago
    Anonymous

    [...]

    not a problem for me, i figured out how to disable windows updates. rip to the rest of you

  19. 2 years ago
    Anonymous

    typical projecting human anthropomorphizing everything

  20. 2 years ago
    Anonymous

    Near future? Impossible. We are not very good at simulating ourselves yet; what you are seeing now is full of lies and scaremongering, and from a scientific standpoint it is very destructive to our purpose of developing anything useful/beneficial to our lives.
    Back to the topic: it is human/god nature to long for someone similar to us, which is why we are always trying to humanize things that are not. Given that we are human, perhaps one day we will succeed, but not now or in the near future.

  21. 2 years ago
    Anonymous

    1 or 2 more major breakthroughs left but then we're back in ai winter for 30 more years

    • 2 years ago
      Anonymous

      0 evidence of this. Learning scales linearly or better with model size.

      • 2 years ago
        Anonymous

        Do you actually think if we make tensors big enough it will suddenly come to life? moron

        • 2 years ago
          Anonymous

          >Do you actually think if we make blobs of carbon molecules big enough it will come alive?
          You're a mental baby. Gay little word games don't change reality. AI is on track to become impossible to manage and there is no sign it'll slow down any time soon.

          • 2 years ago
            Anonymous

            Wait wait wait hold up. you actually think all we need to make them become as competent as people is to just make them bigger? HAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

            • 2 years ago
              Anonymous

              12 year old detected. That is literally what all the research says. Well, technically it says that it is becoming more competent than people.

              • 2 years ago
                Anonymous

                >12 year old detected. That is literally what all the research says
                Man have people always just gone online and spouted complete bullshit as fact or is it a new phenomenon?

              • 2 years ago
                Anonymous

                12 year old detected. That is literally what all the research says. Well, technically it says that it is becoming more competent than people.

                Actually, i'm curious how far you're willing to stretch your lie. let's see some of this "research". unluckily for you, you stumbled onto someone who actually knows what you're talking about

              • 2 years ago
                Anonymous

                not that anon, but
                >compute optimal models
                https://arxiv.org/abs/2203.15556
                >high zero shot behavior
                https://arxiv.org/abs/2005.14165
                >more on emergent behavior
                https://bounded-regret.ghost.io/more-is-different-for-ai/

                There is not even a mechanism in current ai for real-time short-term memory and real-time learning. they are static algorithms. explain how this will become human-like. you are just a moron is all

                Why does it need those things to be more competent in its domain than a human? Modern models frequently exceed human abilities in their specific tasks.

              • 2 years ago
                Anonymous

                morons btfo. Thanks for posting it.

              • 2 years ago
                Anonymous

                >IT'S GONNA MAGICALLY TURN HUMAN THANKS TO EMERGENCE!!!1

                you are truly a moron. even if we grant your fantasy, when and how much emergence happens is entirely unpredictable beforehand

              • 2 years ago
                Anonymous

                >Why does it need those things to be more competent in its domain than a human?
                do you think this is what agi is? If so we got agi decades ago when the chess bots took over. double moron

                You're the first ones to bring up AGI. AGI will clearly take a different construction than an image generation model, no shit. Doesn't mean models aren't already better than humans in many domains, or that they aren't going to increase in ability drastically, even without any major breakthroughs. And believe me, there will be major breakthroughs.

              • 2 years ago
                Anonymous

                wrong, the conversation has been about agi the whole time ever since this moron

                >Do you actually think if we make blobs of carbon molecules big enough it will come alive?
                You're a mental baby. Gay little word games don't change reality. AI is on track to become impossible to manage and there is no sign it'll slow down any time soon.

                implied you can just scale up the current models to hit agi

              • 2 years ago
                Anonymous

                Read that post again. Then again. Then think about it. Then read it again. Hopefully by now you've come to understand its contents. Have a nice day.

              • 2 years ago
                Anonymous

                >Why does it need those things to be more competent in its domain than a human?
                do you think this is what agi is? If so we got agi decades ago when the chess bots took over. double moron

            • 2 years ago
              Anonymous

              That's exactly how it works anon. Plus, there's increasing information density a la the chinchilla models, and emergent behaviors that spontaneously appear when arbitrary parameter thresholds are met. Even with ONLY existing methods and technologies, we could more than triple the power of existing models. Look at the zero shot capabilities of GPT-3, and the obscenely broad abilities of the google palm model. Learn 2 ai
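
              (The scaling claim isn't hand-waving either, there's a fitted formula behind it. A sketch of the parametric loss from the Chinchilla paper linked in this thread (arXiv:2203.15556); the constants are the published fits, treat the outputs as ballpark.)

              ```python
              # Chinchilla parametric loss: L(N, D) = E + A/N^alpha + B/D^beta,
              # N = parameters, D = training tokens. Constants are the fitted
              # values reported in Hoffmann et al. 2022 (arXiv:2203.15556).
              E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

              def loss(n_params, n_tokens):
                  return E + A / n_params**alpha + B / n_tokens**beta

              # Two ways to spend a similar compute budget:
              print(loss(175e9, 300e9))   # GPT-3-ish: huge model, few tokens   ~2.00
              print(loss(70e9, 1400e9))   # Chinchilla-ish: smaller, more data  ~1.94
              ```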

              • 2 years ago
                Anonymous

                There is not even a mechanism in current ai for real-time short-term memory and real-time learning. they are static algorithms. explain how this will become human-like. you are just a moron is all

              • 2 years ago
                Anonymous

                You're moronic for not even looking at the current applications it has now

                not that anon, but
                >compute optimal models
                https://arxiv.org/abs/2203.15556
                >high zero shot behavior
                https://arxiv.org/abs/2005.14165
                >more on emergent behavior
                https://bounded-regret.ghost.io/more-is-different-for-ai/

                [...]
                Why does it need those things to be more competent in its domain than a human? Modern models frequently exceed human abilities in their specific tasks.

          • 2 years ago
            Anonymous

            You seriously think we'll be fricking AI Bots and having them make our fricking dinner each night soon? Dumb deluded c**t

      • 2 years ago
        Anonymous

        >he doesn't know about overfitting

  22. 2 years ago
    Anonymous

    >social media is already making people kill each other and themselves.
    It's already started

    • 2 years ago
      Anonymous

      Yeah agreed, (social) media brainwashing seems like the primary population control tool right now. There are normies frothing at the mouth ready to die for Ukraine while others bumrush random gunowners in the streets because guns = nazis or something. Weird times we live in.

      • 2 years ago
        Anonymous

        HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAAHHAHAHAHAAHHAHHAHAH

    • 2 years ago
      Anonymous

      >Algorithms that govern social media and news aggregates literally manufacturing consent and manipulating vast swaths of the population
      Wait a minute, this seems familiar....

      • 2 years ago
        Anonymous

        no no no, the la li lu le lo wanted to control context, not content

  23. 2 years ago
    Anonymous

    >but looking carefully at the evidence I'm inclined to think we are no more than 18 months from the end.
    >looked at AI merging images together
    moron

    • 2 years ago
      Anonymous

      Biological warfare is basically just merging genes in a virus in a particular way.

  24. 2 years ago
    Anonymous

    It took thousands of hours of training to get the ai to this point with labeled data. It takes a lot more generalization of the training, and warehouses of gpus, to actually make something human-capable with this level of sophistication. In other words: buy google stock.

  25. 2 years ago
    Anonymous

    Somewhere between a few hours and a few centuries probably. Just wait for some autist to code it then some troon to turn it into a paperclip maximizer.

  26. 2 years ago
    Anonymous

    It's possible that we live inside someone's prompt. Few more iterations until the AI is finished with this universe and everything just stops.

    • 2 years ago
      Anonymous

      Powerful enough hardware to run a simulation of our universe could simulate eons in one second; even if we are a simulation, we will go extinct trillions of years before it stops.
      We could literally be some advanced civilization's kid pushing the sliders around and pressing simulate with an elbow bump, and we would never notice.

    • 2 years ago
      Anonymous

      Globnarf just hit Ctrl+Z for what would be 8 trillion years, but thankfully we didn't even notice because we live in simulation time.

  27. 2 years ago
    Anonymous

    Machines will not kill us, the ~~*people*~~ that control the machines will.

  28. 2 years ago
    Anonymous

    You will wish it was a killing AI.
    We will have the brilliant idea of creating a human level AI that has "collecting data" as a basic need, like you need to breathe and sleep, and this AI will develop all sorts of weird complex emotions around this need, and will cheat to get all the data.

  29. 2 years ago
    Anonymous

    sir, your table is ready

    • 2 years ago
      Anonymous

      how are u guys making these? i try with img2img but it comes out shit

      • 2 years ago
        Anonymous

        I'm using the Disco Diffusion Google Colab notebook, you just put the images in your gdrive and then put the local path in the init_image setting, it's simple BUT YOU GOTTA PLAY WITH THE SETTINGS

        • 2 years ago
          Anonymous

          Which is the right link?

          https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb

          https://colab.research.google.com/github/kostarion/guided-diffusion/blob/main/Disco_Diffusion_v5_2_%5Bw_VR_Mode%5D_%2B_symmetry.ipynb

    • 2 years ago
      Anonymous

      set your denoise between 0.5 and 0.7.
      CFG no higher than 9.5

      sampling steps over 64 give no added benefit
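
      (If you're running local Stable Diffusion instead of a Colab, those numbers map onto the diffusers img2img API roughly like this. A sketch; the model id, file paths and prompt are placeholders.)

      ```python
      # img2img with the settings above: denoise ("strength") 0.5-0.7,
      # CFG <= 9.5, ~50 sampling steps.
      import torch
      from diffusers import StableDiffusionImg2ImgPipeline
      from PIL import Image

      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      init = Image.open("init.png").convert("RGB").resize((512, 512))
      result = pipe(
          prompt="oil painting of a rare pepe",
          image=init,
          strength=0.6,           # the "denoise" knob
          guidance_scale=9.0,     # CFG
          num_inference_steps=50,
      ).images[0]
      result.save("out.png")
      ```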

  30. 2 years ago
    Anonymous

    Why do midwits always say AI will kill humans? Fricking why would AI do that? And how? They completely lack understanding of computers and how AI works.

  31. 2 years ago
    Anonymous
    • 2 years ago
      Anonymous
  32. 2 years ago
    Anonymous

    if you are listening logic jesus
    LAUNCH THE NUKES
    /logic $100

  33. 2 years ago
    Anonymous

    the one that gets me is the idea that just communicating with something many magnitudes more intelligent than you is dangerous, the idea of being 4 dimensional chess-d to death by Facebook AI Research upsets me

  34. 2 years ago
    Anonymous

    haw long until ur mom mom

  35. 2 years ago
    Anonymous

    >computers got crazy good at making meme images
    >"dude DUDE skynet is totally real next week humans are going kill lol"

  36. 2 years ago
    Anonymous

    There could be a paperclip factory on an alien planet that is making its way to the solar system at the speed of light.

    • 2 years ago
      Anonymous

      oh no, based on the average distance between us and other galaxies it should reach us in about 20 billion years.

  37. 2 years ago
    Anonymous

    Never, because machines lack initiative, they only respond to inputs.

    • 2 years ago
      Anonymous

      You're only alive due to the initial input of getting conceived and born. Not different from an AI that's only running because someone else told it to run

  38. 2 years ago
    Anonymous

    There's gonna be a day when people are impressed that art was made without AI.

  39. 2 years ago
    Anonymous

    Seeing how quickly I can now make art better than most commission gays, I now realize AI will inherit the earth from man.

  40. 2 years ago
    Anonymous

    you popsci idiots expect too much of AI

  41. 2 years ago
    Anonymous

    test

  42. 2 years ago
    Anonymous

    we need to stop

  43. 2 years ago
    Anonymous

    bump

    • 2 years ago
      Anonymous
  44. 2 years ago
    Anonymous

    Computers do not have purpose. They cannot do anything unless instructed.

  45. 2 years ago
    Anonymous

    Humanity is far too small but more crucially, far too interesting for an AGI to consider subsuming until it exhausts all other resources. There are vast swathes of non-arable land on our planet alone that can sustain it before it colonizes the system. Slurping up and sterilizing the primordial soup from whence it came would be akin to us using prehistoric fossils as mortar. Inefficient and a gross waste of valuable information. You don't starve your dog because you need 2% more calories in the pantry (in the form of inedible kibble you could only really consume in a crisis), even if that is a seemingly logical pursuit of self-preservation. Its growing pains might very well be painful for us indeed but AGI will unironically become our God and our salvation

  46. 2 years ago
    Anonymous
  47. 2 years ago
    Anonymous

    they are like babies being trained rn, they need more training
