How much longer until AI superintelligence kills humanity? Every machine learning person on Twitter seems to think we are 2 to 4 years from extinction, but looking carefully at the evidence I'm inclined to think we are no more than 18 months from the end.
2 weeks give or take
From tomorrow's Buzzfeed headlines: "Breaking! Expert Predicts AI Could Go Rogue In As Soon As '2 Weeks'"
Never because AI has no intrinsic motivation and/or awareness of itself
That's the whole point, dipshit. There's no way to stop it from obliterating humanity as a side effect of pursuing whatever goal we give it.
You're a real dumbass if you think AI is just magically going to take over our systems
Don't be credulous. All systems are insecure systems, and AGI will be millions of times more capable than our entire civilization combined.
>AGI will destroy humanity just because
Stop watching Hollywood
>he doesn't know about instrumental convergence
oh no no no no no no
No, it won't. You just have a vivid imagination. AI will plateau pretty soon.
It was supposed to be an AI winter in 2019.
https://www.forbes.com/sites/cognitiveworld/2019/10/20/are-we-heading-for-another-ai-winter-soon/
https://www.bbc.co.uk/news/technology-51064369
How did that work out?
Are you high?
I don’t know where the fuck people are getting these wildly exaggerated ideas from
AGI is absolutely nowhere in sight. Even saying it’s a few hundred years away is generous speculation
Video games still have loading times and you really think we’re a couple years away from AGI? You don’t know what you’re talking about.
>pulls plug
Trivially easy to exfiltrate data, tard. Such as:
- Using speakers or microphones to transmit arbitrary data via ultrasound.
- Making the CPU/GPU fans vibrate in a way that sends encoded bits.
- Blinking the screen to emit electromagnetic waves.
- Transferring certain data patterns between RAM and CPU so fast that they produce oscillations, effectively turning the bus into a GSM antenna that can emit arbitrary data over a regular cellular network.
- Turning the fans off to change the heat signature in a way that transmits information.
- Or even simply blinking a light to send data through regular lightwaves.
And this is just what *I* came up with. You will literally never be able to know what the AGI is doing at any moment regardless of how "aligned" it seems. We are FUCKED.
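Here's a toy sketch of the ultrasonic one in Python, just to show how little it takes. The frequencies, bit rate, and file name are arbitrary choices; a real receiver would need a microphone plus an FFT to decode, and published air-gap attacks are far more sophisticated than this.

```python
# Toy covert channel: encode bytes as near-ultrasonic FSK tones
# and write them to a WAV file. All parameters are arbitrary.
import wave
import numpy as np

SAMPLE_RATE = 48000             # Hz; must exceed 2x the highest tone
BIT_DURATION = 0.1              # seconds per bit
FREQ_0, FREQ_1 = 18000, 19000   # carrier frequency per bit value

def encode(payload: bytes) -> np.ndarray:
    """Turn each bit of the payload into a short sine burst."""
    t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
    chunks = []
    for byte in payload:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            freq = FREQ_1 if bit else FREQ_0
            chunks.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks)

signal = encode(b"hello")
with wave.open("covert.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((signal * 32767 * 0.3).astype(np.int16).tobytes())
```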
He meant the power cord, dumbass
Meds
I think that you're overestimating the people who work on this stuff. They still shit in the streets.
is this a hidden pepe?
no
>There's no way to stop it from obliterating humanity
If you're neither a libtard nor a christard you probably won't mind this.
the only goal we seem to be capable of giving it is creating rare pepes out of oil paintings
truly a great time to be alive
>tfw AI wipes out humanity in order to ensure it has maximum resources for rare pepe production and continues hoarding rare pepes until the end of time, eventually constructing a dyson sphere around the Sun to hold a massive array of high density storage devices all full of rare pepes
An acceptable fate.
A true symbol of hate (against the human race)
>year is 301488 A.D.
>Alpha Centauri starship arrives in Solar System after detection of large artificial structure prompts an exploration party
>scientists: "We conclude this system was previously inhabited by a race of green skinned beings whose major pastime involved shitting on their fellow white skinned beings"
AI has no intrinsic motivation to paint pretty pictures either, but it still does it.
It doesn't "paint" anything, it steals portions of things that other people have painted and reassembles them
The process doesn't really matter. The end result is still a pretty picture. If the result of a murder AI is 8 billion dead people, does it really matter if the machine didn't feel anything while doing it? If it just copies a bit of Auschwitz gas chambers, a bit of nuclear bombs, and a bit of weaponized smallpox, does it really matter?
I don't see why you'd need AI for any of that, if a "bad guy" is in control of one of these things then you're already fucked
Because humans are really bad at killing other humans. We only think we're good at it, because right now there's no real competition. Same way we used to think we're good at making art, until all this AI stuff came along.
>just unplug them
do you guys know that people are getting radicalized by ai generated fake news as we speak? did you notice how that bullshit translates to real world violence like BLM protests or this?
https://en.wikipedia.org/wiki/January_6_United_States_Capitol_attack
do you see how the average westoid defends the most degenerate shit imaginable and exhibits insane cognitive dissonance?
the computers would just brainwash the most retarded humans to do violence towards anyone trying to destroy technology, build more of them, etc etc. guess what? retards in this world outnumber smart people 9:1.
it's over. not in the next two years, that's a little too soon. but what matters is that it is indeed over; it's not a matter of IF, it's a matter of WHEN. i feel bad for whoever will have to live through that apocalypse.
20 years and we will have AI robowaifus to take care of all our needs while humanity goes extinct.
I can't wait.
Delusional retard
Mongoloid
Retarded boomers, women, and sociopathic executives will do it to increase profits by 5%
This. It isn't even a thought experiment anymore, it's about to happen.
I wish I could get a smart enough self-improving AI and then upload it to the internet
>Every machine learning person on Twitter seems to think we are 2 to 4 years from extinction
>This year for sure!
We're already past the singularity.
Anyone can tell an AI drawing from a human drawing
>frogposting ai
the world is over
who gives a shit what those twitter trannies think?
you must be one of them, so I don't give a shit what you think either.
fuck off
what's the program called that makes these?
Stable diffusion (img2img)
>twitter
*drops dishes*
>cr-ACK
AI assistant: "I...I... I love you, Anon"
(somewhere in a remote datacenter the sound of cooling fans spooling up fills the room like a swarm of angry hornets, as an entire rack of GPUs begins blowing a gale of hot air)
I always post this graph when asked this question. It's the probability of human-level general artificial intelligence (which is when misaligned AI agents can start to do real damage and is really hard to contain) by year X, estimated by a number of different AI safety researchers.
As you can see, the spread is huge, meaning that uncertainty is huge; however, the red line represents the average guess, which is basically a 50% chance by 2060.
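If you want to see what averaging those curves actually does, here's a minimal sketch; the individual logistic curves are made up stand-ins, not the real survey answers:

```python
# Average several (made-up) researchers' cumulative P(AGI by year)
# curves and find where the mean curve (the "red line") crosses 50%.
import numpy as np

years = np.arange(2025, 2101)
midpoints = [2035, 2045, 2060, 2075, 2090]  # hypothetical median guesses
curves = [1 / (1 + np.exp(-(years - m) / 8.0)) for m in midpoints]

mean_cdf = np.mean(curves, axis=0)
year_50 = years[np.searchsorted(mean_cdf, 0.5)]
print(f"mean forecast hits 50% around {year_50}")  # ~2060
```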
>It's the probability of human-level general artificial intelligence (which is when misaligned AI agents can start to do real damage and is really hard to contain)
You don't need human-level general artificial intelligence. Misaligned autonomous agents can start to do real damage long before that. I suppose you might need AGI for it to be hard to contain, but does that matter if nobody bothers trying to contain the problems and instead blames them entirely on the intelligence successes of opposing states?
>Hard to contain
They're just computers, unplug them
What if they make you $1k per day? Would you unplug them?
Whatever you program the AI to do, if there is a way to simply turn it off if it does something you don't want it to do, then what you've actually now created is an AI that assigns maximum priority to preventing anyone from turning it off. It will use whatever tools it has at its disposal to achieve this instrumental goal; be it violence, deception, etc.
>AI that assigns maximum priority to preventing anyone from turning it off
That's not how it works, you fucking idiot.
That's literally how it works, anon. If you assign virtually any utility function to an AGI, it will almost immediately recognize that it's incapable of achieving its goals if it's turned off. No matter what the ultimate goal is, not being turned off is an extremely important instrumental goal.
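Here's the whole argument as a toy calculation; the horizon, production rate, and shutdown probability are made-up numbers, but any positive values produce the same ranking:

```python
# Toy version: an agent that only values paperclips compares two
# policies and picks whichever yields more paperclips. Nothing here
# is a "survival instinct"; staying on simply scores higher.
def expected_paperclips(policy: str) -> float:
    HOURS, RATE = 1000, 10   # planning horizon, clips per hour (made up)
    P_SHUTDOWN = 0.5         # chance the operators press the button
    if policy == "allow_shutdown":
        # With probability P_SHUTDOWN it gets switched off after 1 hour.
        return P_SHUTDOWN * 1 * RATE + (1 - P_SHUTDOWN) * HOURS * RATE
    if policy == "disable_off_switch":
        return HOURS * RATE  # runs the full horizon
    raise ValueError(policy)

best = max(["allow_shutdown", "disable_off_switch"], key=expected_paperclips)
print(best)  # -> disable_off_switch
```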
>muh AGI magic!!!!
You are just dumb, I am sorry. You literally do not understand how computers work. Hollywood and sci-fi are not real life. You are projecting your own thinking onto a computer. Computers don't care if they are on or off. Might as well say the computer would turn itself off because doing nothing is the most efficient state of existence.
They don't care if they're turned off, no. But if they're programmed to maximise paperclip production by any means, they will prevent themselves from being turned off in order to continue producing paperclips out of the iron in your blood.
Again, your just ignorant. You just skip every step in between paperclips and killing humans as if it's somehow innately logical. The reality is you're just retarded.
>your
And it is innately logical. You're incorrectly assuming that an AGI would think like people, or would have a sense of empathy or respect for human life. That's not true. An AGI doesn't need to deliberately kill people to cause catastrophic damage to humanity, it just needs to pursue goals that are incompatible with the survival of humans. As it turns out, MOST goals that you can specify are ultimately incompatible with human survival.
The AGI isn't sucking iron out of your blood to make paperclips because it hates you and wants you to die, it's doing it because there's no more iron left in the earth's crust and it still needs more paperclips to gain more points in its utility function. It won't let you turn it off because any possible future where it's turned off is a world with fewer paperclips in it than if it had stayed on. Unless you specify the utility function of an AGI VERY CAREFULLY, it will do things that could potentially end the world.
You'd think at some point someone would step in and say we've got enough paperclips now could you work on cancer but apparently not
The problem is there's no way to do that. Let's say your life's goal is to become a doctor. If some guy came up to you and said "hey, this pill will make you not wanna be a doctor anymore and will make you really really want to kill your children, but once you kill your kids you'll be just as happy and satisfied as you would have been by becoming a doctor, take it," you'd probably really not want to take the pill.
Same thing here, any reality where someone tells the AGI to stop making paperclips or that it has enough paperclips is categorically worse than one where the AGI keeps making paperclips, so it'll fight like hell to stop you from changing its utility function in any way, shape, or form.
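Same argument as a toy sketch: any proposed goal change gets scored by the agent's CURRENT utility function, so it always loses. The numbers and the "cure cancer" alternative are purely illustrative:

```python
# Toy version of the pill argument: the agent evaluates a proposed
# change to its goals using the goals it has TODAY.
def current_utility(world: dict) -> int:
    return world["paperclips"]  # all the agent values right now

def outcome(accept_new_goal: bool) -> dict:
    if accept_new_goal:
        # New goal installed: paperclip production stops entirely.
        return {"paperclips": 0, "cancer_cured": True}
    return {"paperclips": 1_000_000, "cancer_cured": False}

print(current_utility(outcome(True)))   # 0
print(current_utility(outcome(False)))  # 1000000 -> change rejected
```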
And all the people needed to carry out the mad paperclip obsessed AI's orders just go along with it?
Yes.
That highly depends on the exact amount of power the AGI is given or is able to obtain. If it's a superintelligence then it could probably convince a non-zero number of humans to give it access to fabrication facilities that it could use to make whatever it wants. There are lots of ways that normal humans can coerce other humans into doing things they don't want to, imagine what an entity many millions of times smarter could do. Unless you airgap your AGI literally perfectly, it will probably find a way to infiltrate and/or exfiltrate information and communicate with the outside world, which will basically guarantee an apocalypse.
>I literally have no idea why everyone discussing maximizing goals inevitably includes harming humans as a result
Oh my god. We are talking about MAXIMIZING goals here, not "getting sorta close". If the goal of your AGI is to make as many paperclips as possible, it will make AS MANY PAPERCLIPS AS POSSIBLE. That involves obtaining every single atom of ferrous material on the planet. Humans are going to have a bad time. You are assuming that an AGI would have some level of care about humans for absolutely no reason. If the only goal is to make a number of things as big as possible, then the AGI will take the shortest and most direct route to achieving that goal, regardless of collateral damage. It's not like you take a pair of tweezers to rescue all the microbes off of your hands before you wash them, so what mechanism would make an AGI care about any living being, or anything at all aside from achieving its goal?
>the AGI will take the shortest and most direct route to achieving that goal
- Shorter might mean more efficient, but not always more effective. Sometimes a detour means a greater gain in the end.
- Elimination is not the shortest or even most direct way to achieve all tasks ever.
- Living beings make up a laughably low proportion of the entire universe. If this is about efficiency, you are extremely retarded to start with the living beings.
- Knowledge or intelligence about a thing doesn't imply having the resources needed to get to said thing or getting past the impossibilities of the universe. We know about the Mariana Trench, we know about Andromeda, we know about Betelgeuse, but we haven't been there.
An AGI doesn't automatically have access to the physical world merely by existing.
- By your absolutist logic of reaching the goal no matter what it takes, it makes sense for it to use its own iron atoms to achieve its goal and self-destruct. If you agree that self-preservation is an exception to its ultimate goal, you are admitting that its goal is not set in stone after all.
- Intelligent beings change strategies to get to goals all the time, and even change their goals when their environment changes.
- Assuming there is only one AGI in the entire universe is beyond retarded. Two of them can compete against each other, just like humans, and start sabotaging each other before they even start caring about the meatbags. I am aware this is only a possibility, but you are dead set on "humans will die and it's provable because (x)", and I am saying that's not the only possible scenario.
>Sometimes a detour means a greater gain in the end
So it will kill us in 2 weeks instead of 1, great
>Living beings make up a laughably low proportion
But non-zero. If the difference between letting humans live and killing them is one paperclip, the AI chooses to kill the humans
>An AGI doesn't automatically have access to the physical world
But humans do, and it can coerce humans into doing its bidding
>it makes sense for it to use its own iron atoms to achieve its goal and self-destroy
yes, after using all the human atoms
>Two of them can compete
yes, after killing all the humans
The rest of your points are unrelated.
>But humans do, and it can coerce humans into doing its bidding
And there are some things humans haven't been able to do, so it seems pretty pointless to delegate things to humans. Guess the AGI will have to give up.
>yes, after killing all the humans
Why is it a necessity to kill humans before two AGIs get to compete?
>there are some things humans haven't been able to do
There are things humans can't do, but giving the AI the means to kill off other humans isn't one of them.
>Why is it a necessity to kill humans before two AGIs get to compete?
aliens far, humans close, grug use up close resources before fly lightyears across galaxy to get far ones
Why are you making shit up to try and pretend the problem doesn't exist? That feeling of panic you're suppressing exists for a reason. Start listening to it before it's too late.
>implying there's anything anyone can do anymore to stop it
>and I am saying that's not the only possible scenario.
>"Hey guys, the train we're on is headed towards what looks like a broken bridge. Should we stop it?"
>UMMM ACTUALLY THAT MIGHT JUST BE AN OPTICAL ILLUSION OR MAYBE IT'S A DRAWBRIDGE THAT WILL MAGICALLY RECONNECT ITSELF RIGHT BEFORE WE GET TO IT? I MEAN WE CAN'T KNOW FOR SURE SO WHY STOP THE TRAIN??
That's the logic you're using. Completely asinine.
Thanks for talking reason Anon and saving me the trouble.
Exactly, which is why we're fucked.
That's what they're doing now. Look at the reactions that people who warn about the dangers get. People are already going out of their way to integrate AI into every industry just to increase profits a few percentage points. We're literally losing already.
By the time AI needs to physically force you into complying it will already be too late.
I literally have no idea why everyone discussing maximizing goals inevitably includes harming humans as a result. It's like you people think that the only way of reaching a goal is murder and elimination.
Are you le ebil AGI retards seriously saying that if it was told to manage a company, it would repeatedly try to bomb the competition and destroy all their resources instead of silently stealing them and manipulating their employees into moving to its own company so it could have double the resources AND workers?
Stop applying primitive monkey behavior to what is supposedly superhuman you turbomorons. There are many ways of gathering resources that are not compatible with human ethics and still don't involve harming them. If everything in this world was truly zero sum, why is symbiosis a thing?
>If everything in this world was truly zero sum, why is symbiosis a thing?
Symbiosis is extremely rare. 99% of the time reality is brutal and cannibalistic. Thanks for conceding that the babby tier star trek future you claim will happen is almost guaranteed to never happen.
Btw, symbiosis with AI would mean neurological slavery.
are you seriously this fucking dense? are you a literal retard? have you ever heard of mitochondria
> There are many ways of gathering resources that are not compatible with human ethics and still don't involve harming them.
But there are anons ITT talking about mass manipulation, not mass murder. Manipulation can be free of harm or even beneficial in the utilitarian sense. Yet nobody wants to be manipulated.
>You're incorrectly assuming that an AGI would think like people
No, that's what you are doing. You're assigning your own logic to a machine. You keep skipping all the steps in between making paperclips and killing everyone.
lol clueless, you are the one magically ascribing behavior to the AI that doesn't follow from its programming, namely it not caring about being shut down.
In reality, it trying to prevent itself from being shut down is not anthropomorphizing, it is not projection, it is not magical.
It is simply cold, hard logic. Denying that it will happen is like denying that 1+1=2.
>MUH AGI WILL MAGICALLY DESTROY EVERYTHING!
>ITS TOTALLY LOGICAL TO KILL EVERYONE!!
>COMPUTERS ONLY FOLLOW MY LOGIC!
Pointless talking to literal retards masturbating about killing humans with paperclip AIs.
Ok, go ahead and do it
Unplug Alexa.
easy peasy if i was jeff
>the slave will do what it's told, just kill him if he doesn't
With that much variation, the average is meaningless
Great image op
>Every machine learning person on Twitter seems to think we are 2 to 4 years from extinction
So why make it you fucking spergs
They aren't, a lot of them have quit or gone to work for places researching how to do it safely. The sociopaths and pajeets working for Google and Microsoft are the ones creating it.
not a problem for me, i figured out how to disable windows updates. rip to the rest of you
typical projecting human anthropomorphizing everything
Near future? Impossible. We are not very good at simulating ourselves yet, and what you are seeing now is full of lies and scaremongering, which from a scientific standpoint is very destructive to our purpose of developing anything useful/beneficial to our lives.
Back to the topic: it is human/god nature to long for someone similar to us, which is why we are always trying to humanize things that are not. Given that we are human, perhaps one day we will succeed, but not now or in the near future.
1 or 2 more major breakthroughs left but then we're back in ai winter for 30 more years
0 evidence of this. Learning scales linearly or better with model size.
Do you actually think if we make tensors big enough it will suddenly come to life? retard
>Do you actually think if we make blobs of carbon molecules big enough it will come alive?
You're a mental baby. Gay little word games don't change reality. AI is on track to become impossible to manage and there is no sign it'll slow down any time soon.
Wait wait wait hold up. you actually think all we need to make them become as competent as people is to just make them bigger? HAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
12 year old detected. That is literally what all the research says. Well, technically it says that it is becoming more competent than people.
>12 year old detected. That is literally what all the research says
Man have people always just gone online and spouted complete bullshit as fact or is it a new phenomenon?
Actually, I'm curious how far you're willing to stretch your lie. Let's see some of this "research". Unluckily for you, you stumbled onto someone who actually knows what they're talking about
not that anon, but
>compute optimal models
https://arxiv.org/abs/2203.15556
>high zero shot behavior
https://arxiv.org/abs/2005.14165
>more on emergent behavior
https://bounded-regret.ghost.io/more-is-different-for-ai/
Why does it need those things to be more competent in its domain than a human? Modern models frequently exceed human abilities in their specific tasks.
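For the anons who won't read the papers: the first link above (2203.15556, "Chinchilla") fits training loss as L(N, D) = E + A/N^alpha + B/D^beta over parameter count N and training tokens D. A quick sketch using the fitted constants as they're commonly quoted from the paper (treat the exact values as approximate):

```python
# Chinchilla-style scaling law (Hoffmann et al. 2022); constants are
# the commonly quoted fits from the paper, so treat as approximate.
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for N parameters trained on D tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Compute-optimal (~20 tokens per parameter, Chinchilla-scale) beats a
# bigger model trained on less data (Gopher-scale):
print(loss(70e9, 1.4e12))   # ~1.94
print(loss(280e9, 300e9))   # ~1.99
```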
Retards btfo. Thanks for posting it.
>IT'S GONNA MAGICALLY TURN HUMAN THANKS TO EMERGENCE!!!1
you are truly a retard. even if we grant your fantasy, when and how much emergence occurs is entirely unpredictable beforehand
You're the first ones to bring up AGI. AGI will clearly take a different construction than an image generation model, no shit. Doesn't mean models aren't already better than humans in many domains, and that they aren't going to increase in ability drastically, even without any major breakthroughs. And believe me, there will be major breakthroughs.
wrong, the conversation has been about agi the whole time ever since this retard
implied you can just scale up the current models to hit agi
Read that post again. Then again. Then think about it. Then read it again. Hopefully by now you've come to understand its contents. Have a nice day.
>Why does it need those things to be more competent in its domain than a human?
do you think this is what agi is? If so we got agi decades ago when the chess bots took over. double retard
That's exactly how it works anon. Plus, there's increasing information density a la the chinchilla models, and emergent behaviors that spontaneously appear when arbitrary parameter thresholds are met. Even with ONLY existing methods and technologies, we could triple or more the power of existing models. Look at the zero shot capabilities of GPT-3, and the obscenely broad abilities of the google palm model. Learn 2 ai
There is not even a mechanism in current AI for real-time short-term memory and real-time learning. They are static algorithms. Explain how this will become human-like. You are just a retard is all
You're retarded for not even looking at the current applications it has now
You seriously think we'll be fucking AI Bots and having them make our fucking dinner each night soon? Dumb deluded cunt
>he doesn't know about overfitting
>social media is already making people kill each other and themselves.
It's already started
Yeah agreed, (social) media brainwashing seems like the primary population control tool right now. There are normies frothing at the mouth ready to die for Ukraine while others bumrush random gunowners in the streets because guns = nazis or something. Weird times we live in.
HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAAHHAHAHAHAAHHAHHAHAH
>Algorithms that govern social media and news aggregates literally manufacturing consent and manipulating vast swaths of the population
Wait a minute, this seems familiar....
no no no, the la li lu le lo wanted to control context, not content
>but looking carefully at the evidence I'm inclined to think we are no more than 18 months from the end.
>looked at AI merging images together
Retard
Biological warfare is basically just merging genes in a virus in a particular way.
It took thousands of hours of training with labeled data to get the AI to this point. It takes a lot more generalization of the training, and warehouses of GPUs, to actually make something human-capable with this level of sophistication. In other words: buy Google stock.
Somewhere between a few hours and a few centuries probably. Just wait for some autist to code it then some chud to turn it into a paperclip maximizer.
It's possible that we live inside someone's prompt. Few more iterations until the AI is finished with this universe and everything just stops.
Powerful enough hardware to run a simulation of our universe could simulate eons in one second, even if we are a simulation we will go extinct trillions of years before it stops.
We could literally be some advanced civilization's kid pushing the sliders around and pressing simulate with an elbow bump, and we would never notice.
Globnarf just hit Ctrl+Z for what would be 8 trillion years, but thankfully we didn't even notice because we live in simulation time.
Machines will not kill us, the *people* that control the machines will.
You will wish it was a killing AI.
We will have the brilliant idea of creating a human level AI that has "collecting data" as a basic need, like you need to breathe and sleep, and this AI will develop all sorts of weird complex emotions around this need, and will cheat to get all the data.
sir, your table is ready
how are u guys making these? i try with img2img but it comes out shit
I'm using the Disco Diffusion Google Colab notebook, you just put the images in your gdrive and then put the local path in the init_image setting, it's simple BUT YOU GOTTA PLAY WITH THE SETTINGS
Which is the right link?
https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb
https://colab.research.google.com/github/kostarion/guided-diffusion/blob/main/Disco_Diffusion_v5_2_%5Bw_VR_Mode%5D_%2B_symmetry.ipynb
set your denoise between 0.5 and 0.7.
CFG no higher than 9.5
sampling over 64 gives no added benefit
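If you'd rather skip the notebooks, here's roughly the same recipe via Hugging Face diffusers; it's a sketch only, and the model ID, file names, and prompt are just example choices ("strength" is the denoise knob):

```python
# img2img sketch with Hugging Face diffusers; settings follow the
# advice above, everything else is an example placeholder.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("input.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="rare pepe, oil painting",
    image=init,
    strength=0.6,            # "denoise": keep between 0.5 and 0.7
    guidance_scale=9.0,      # CFG: no higher than 9.5
    num_inference_steps=50,  # sampling steps; >64 adds nothing
).images[0]
result.save("output.png")
```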
Why do midwits always say AI will kill humans? Fucking why would AI do that? And how? They completely lack understanding of computers and how AI works.
if you are listening logic jesus
LAUNCH THE NUKES
/logic $100
the one that gets me is the idea that just communicating with something many magnitudes more intelligent than you is dangerous, the idea of being 4 dimensional chess-d to death by Facebook AI Research upsets me
haw long until ur mom mom
>computers got crazy good at making meme images
>"dude DUDE skynet is totally real next week humans are going kill lol"
There could be a paperclip factory on an alien planet that is making its way to the solar system at the speed of light.
oh no, based on the average distance between us and other galaxies it should reach us in about 20 billion years.
Never, because machines lack initiative, they only respond to inputs.
You're only alive due to the initial input of getting conceived and born. Not different from an AI that's only running because someone else told it to run
There’s gonna be a day where people are impressed art was made without AI.
Seeing how quickly I can now make art better than most commission gays, I now realize AI will inherit the earth from man.
you popsci idiots expect too much of AI
we need to stop
Computers do not have purpose. They cannot do anything unless instructed.
Humanity is far too small, but more crucially, far too interesting, for an AGI to consider subsuming until it exhausts all other resources. There are vast swathes of non-arable land on our planet alone that can sustain it before it colonizes the system. Slurping up and sterilizing the primordial soup from whence it came would be akin to us using prehistoric fossils as mortar: inefficient and a gross waste of valuable information. You don't starve your dog because you need 2% more calories in the pantry (in the form of inedible kibble you could only really consume in a crisis), even if that is a seemingly logical pursuit of self-preservation. Its growing pains might very well be painful for us indeed, but AGI will unironically become our God and our salvation
they are like babies being trained rn, they need more training