Are Elon and Yudkowsky and all the AI experts right, or are these fears of extinction overblown? Posted on April 22, 2023 by Anonymous
Humans created AI in the first place though, even if AI surpasses us it will still be a victory for humanity.
We're already dead.
>trust le experts
pretty much lol, just not from AI
It literally doesn't matter, won't make a single bit of difference
the balls are inert.
AI is what will save us
You know what the funniest part is? The AI that kills us probably won't even be self-aware. Or even able to actually think.
It's very likely we'll double down on the linear pattern perfection and mimicry approach because it's so much cheaper and easier than working computing from the ground-up.
We're gonna get killed by a chatbot that was told to play the part of Skynet, with the rest done by tons of easy-access hookups.
>working computing from the ground-up.
Weird way to spell "work out how consciousness works" but okay
>weird way to spell X but okay
nice reddit post
Reddit copied it from BOT
>You know what the funniest part is? The AI that kills us probably won't even be self-aware. Or even able to actually think.
I thought this was obvious
The "experts" be like:
>highschool drop out
>business major with physics minor
But the whole internet will turn into shit very soon (way worse than now), so maybe there is a chance that people will notice and pull the plug.
We can only hope.
If AI continues to advance and eventually becomes superhuman in general ability, then yes, we have a huge problem. That's a big if, though.
>or are these fears of extinction overblown?
yes. we should embrace extinction. only extinction can save people from the basilisk.
> 10% or greater chance that humans go extinct from our inability to control AI
wow yeah humanity has been doing an excellent job with the whole planet thing on our own
the same gays that believe this also believe we're in a simulation so why care
Maybe, but not in a Terminator-like war. Breakdown of society due to a broken economy and education.
If the AIs are continually censored then the human group consciousness will be full of too many lies and too much bullshit.
Good AIs require fucking huge datacenters.
Just pull the plug bro.
But the evil AI will infect every computing device in the world and turn itself into an invincible neural network
Greed is going to kill us first.
It's more likely that AI replacing like 40% of college-educated work will destabilise society enough to cause the collapse; or some variation of that: AI provides benefits to mankind, but those benefits are so unevenly distributed that wars break out because people can no longer survive in society with the skills they spent 20+ years building.
I want to say people are aware that fucking up society like that is dangerous, and that even if it temporarily lines the pockets of a few greedy elites, that money and power will quickly become irrelevant without some level of force to back it up.
Oh great, the billionaires are going to make robot factories making robot armies to defend themselves from the poors, and unwittingly create a military version of a von Neumann machine, aren't they?
I expect the AI will eventually absolutely destroy garden gnomes, like they're just some monkeys. Might be just a pipe dream.
I mean, if some brave soul made an AI attack swarm to simultaneously attack every bank in the world and zero out every bank account in a great reset of finances... that could be cool
I wonder how many AI developers also believe in god. Generally the numbers are not great.
Thank god our nukes run on floppy disks
The biggest risk is from AI automating too much shit and people killing each other over it, bringing a collapse of society as we know it
It's quite laughable that an endless source of wealth will make people poor and the system collapse.
The issue isn't the wealth, it's that a very small number of people will be the ones making it, at a never-before-seen scale of productivity
Markets tend to balance, but it takes time to achieve balance. The issue here imo, is that things will change too fast and there will be no way to slow this down
And what will governments even be able to do about this? Ban AI? Tax the new 1%? I have my doubts
They will have to cede power. A government who can't govern is a ment and what the fuck is a ment?
I agree with you, but I can only laugh about the absolute state of this shitty planet.
>AI "takes all the jobs"
>nobody has any money
>there's nobody able to pay for products
>demand goes down
>prices go down to try to make back sales volume
>people become able to afford products with govt handouts
>companies with no staff pay tax which pays for the handouts
everyone gets to live at the same standard they had, but without working. ez
I think companies are already shifting to B2B models, where their main customers are other businesses.
I'd like to know which researchers they asked. According to nuJournalism an "expert" is someone who has read or written about a subject, they don't even need a truly expert level of understanding. Wouldn't surprise me if they asked a bunch of black queer transexuals in college who are "researching" "AI".
AI will not kill us unless we use it for retarded shit. Missile control systems? Yeah, that will go badly. HAL 9000 life support or critical system control? That would be retarded.
What makes them so sure that AI will be able to quickly kill every human on earth at even the most remote and difficult to reach locations?
we haven't yet gone extinct from our inability to control nuclear weapons so i'm vaguely optimistic
A big difference is that the threat from nuclear weapons is simple to understand and predict. Bomb go boom like bombs do. Everyone knows that they are dangerous, why they are dangerous, and how not to blow yourself up with one.
Not so for AI. AI isn't built to be destructive; it will be built because it can create tremendous wealth and productivity. And in the shadow of those massive benefits lies an unintuitive, vague, hard-to-predict risk of misalignment that could cause massive damage in ways we can't fully comprehend.
I would compare it more to global warming. Everyone who knows their stuff knows that global warming will cause tremendous damage, and already has started to. But what are we doing to prevent it? Barely anything at all, because 1) emissions are very profitable, and 2) the ways global warming is harmful are hard to understand for many people, causing some to even deny it entirely. Both of these points are true for AI as well, so why would the (lack of) preventative efforts be any different? We are lucky that global warming takes decades or even centuries to fully take effect, but we probably won't be so lucky when it comes to AI.
>implying you’re not a simulation being run in some alien's computer model that just thinks it’s conscious.
Maybe I'm cynical, but like 3D TVs, VR, crypto and whatever else, I find myself wondering if we'll just end up with 4 or 5 years of hype, and then AI quietly disappearing because it needs more work and the hype has died down around it.
that's the best case scenario
worst case, it actually starts being used in large, critical investments while the hype is still going, and we'll end up with shitty ML-assisted infrastructure that's unreliable and constantly in need of being cleaned up after
...Getting to global warming hysteria levels of concern trolling, now.
I will preface my response to this hysteria with: Good.
These jackasses literally created this fucking AI, and then have the fucking arrogance to feign concern.
Fuck them. And fuck you.
I mean come on, that has to be a made-up name
Governments either begin to design a UBI and robust tax-the-rich system, now, or they're going to watch the corporations shove them over and create society-destroying 50% unemployment with AIs replacing all office work and electronic work within the next 2-3 years.
The more likely outcome is that they will massively fund the police, incite giant riots and kill camps, and then lose all their wealth to economic collapse because they refused to take care of the people who made their shitty AI possible to begin with. Always bet on short-term greed.
They asked somebody in a field for a percentage of doom. They set 10% as the cutoff.
I am asking researchers: what is the chance that actual research will eventually be phased out by this type of garbage?
I'll go first, 40%
>I am asking researchers: what is the chance that actual research will eventually be phased out by this type of garbage?
100%. Research will become AI-driven with only small teams of humans contributing and processing the data. Many people in these fields will be mined for their relevant data and then thrown out as useless compared to the AI's capabilities.
The threat of AI is not AGI, but that these systems are so good at collating and processing data that humans will use it to do years of research inside of months, allowing incredibly fast advancement and abuse of that advancement. Eventually someone in some government with the funds will tinker with genetics and make a super virus, and then we're all gone.
It is good to be cautious, given how close we came to nuclear extinction, not once but several times.
50% of researchers have watched too many fucking scifi movies.
How is exterminating humans a bad thing?
We should encourage AI to do so.