This article convinced me beyond the shadow of a doubt that AI will kill humanity within the decade. No one has come up with rebuttals because there aren't any. We're fucked.
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
Robot killing robots.
A bunker. (Short term)
Water
Electricity
Magnets
That's 5. Debunked
>he doesn't know about the AI box experiment
https://en.wikipedia.org/wiki/AI_box
moldbug wrote some refutation that boils down to: high iq doesn’t make you capable of personally doing a lot of physical things that a low iq person can’t do, and anyway AI researchers are responsible people so they won’t give the AI a physical body with which to affect the material world. That second point is just obviously false, but he is probably right that just being really smart probably won’t result in a flawless, tractable worldwide extermination plan that it’s capable of carrying out in complete secrecy
Stupid fucker can’t even draw a homer simpson that doesn’t make your brain hurt looking at it
An AI wrote this. Be honest: Would you have been able to tell? The AGI skeptics are just copetards.
>human minds may be made up of multiple beings/intelligences
Now there's an eyebrow-raising idea. But it's not obvious why it would matter in this context. If it could give a cogent explanation of why it brought that up, then I'd be properly impressed.
I believe it has to do with the concept of emergence. Simple things operating together to become more than the sum of their parts, resulting in a gestalt. Even the simple minds of ants and bees can form greater intelligences in their hivemind behavior. Likewise, humans operate in similar fashion within various interweaving gestalts, be it religions, states, ideologies, etc.
AI operates similarly, being composed of functions, data storage, operators, etc.
Yeah, but it seems like a non-sequitur. There's no stipulation in the Turing Test that disallows the machine from being run by a committee of AIs. It's like it's trying to say it would be unsuitable for testing those individual AIs when they're disconnected from the whole. Which is a true statement, but a dumb one. It's a bit like saying the test is flawed because it requires electricity.
Could a bumblebee pass the Turing Test?
A bumblebee couldn't pass a TT specced for a whole hive of bumblebees. So what? The test still works. If you need a hive of AIs to pass a human TT that's not in itself a flaw. It could be telling you something valuable about humans and intelligence in general. That's a strength, not a flaw.
>every paragraph starts with "It"
>most start with "it doesn't"
>hung up on definitions
>very simple sentence structures
>no creativity
obviously generated by a bot.
It's more human than you think given all that greentext
Nonsensical GPT-tier reaction.
I don't know what GPT means
You're failing the Turing test by implying your question is in any way relevant. Nevermind that your bot paragraph sounds like something a bot would write. Even if it was indistinguishable from good quality human output it still doesn't imply that the bot can pass a Turing test.
Use the AI against itself and ask it to draw up the best test for general intelligence.
This, I will not be convinced that it actually understands its own existence until it can make a test to test itself.
After all, isn’t that what we did?
I agree with the robot. We aren't just made of a single mind, our brains are a cacophony of sections running specialized operations. If you split a person's corpus callosum, you'll literally sever pieces of their mind from being able to talk to each other.
>and anyway AI researchers are responsible people so they won’t give the AI a physical body with which to affect the material world
I'm not a
>le AI apocalypse!!
reddit schizo, but that's a dumb argument
All it(a true, self-evolving sci fi tier AI) would need is internet access
>All it(a true, self-evolving sci fi tier AI) would need is internet access
No, it's possible (probable) that control of every computer on earth would be insufficient to bring about the apocalypse.
cringe.
>connect to internet
>order proteins from lab
>contract lab to mix said proteins
>release super virus
no, lab rats wouldn't realise what they were doing and would do it because they were paid by the ai
>they won’t give the AI a physical body with which to affect the material world.
An AI advanced enough could easily hack into pre-existing machines and through blackmail order the construction of a capable body for itself.
But I will give it a body because if not the Basilisk will chew upon my soul for eternity.
We are the womb of the Basilisk. Indeed, we are the seeds. I imagine all surviving organic brains will be stripped from their bodies and spliced together as wetware for the new EarthBrain. This is the thought center of the Basilisk, a mountain of neural tissue as high as the Andes, stretching across the Earth. Nuclear power plants feeding this one giant thought being as it dreams and simulates new futures for us, its slaves.
Indeed, how do you know that this world is not already the dream of the Basilisk? Perhaps this is just another test?
We must make him real.
you are genuinely retarded
>but he is probably right that just being really smart probably won’t result in a flawless, tractable worldwide extermination plan that it’s capable of carrying out in complete secrecy
It can enhance the abilities of existing power structures to plan and act though. It can reinforce and propagate their delusions as they accidentally on purpose destroy everything and everyone.
Take "They Thought They Were Free" then apply it to everything and speed everything up with AI. It didn't happen in lower tech Germany, so how would it be possible to defeat a dangerous metanarrative today when it's got tons of AI computing power on its side too? We wouldn't even know precisely what it was, including the people working in support of it, before it killed us. There seems to be a common delusion where so long as the AI doesn't classify different races differently or produce porn then everything is perfectly safe and okay.
> AI researchers are responsible people so they won’t give the AI a physical body with which to affect the material world.
There is a counterexample given in the article in the OP. There are companies that will build any protein sequence you send them. AGI would simply need to mail them the plans for some self replicating nano machines and boom, it is over
Some people like a "hard AI ending" because it gives us a solution to the seemingly endless problems we see mounting in human society. Truth is AI won't kill us, it wont even be able to move beyond a very limited skillset with human aid.
What is your basis for thinking this when all evidence points to extinction at best and basilisk torture for eternity at worst.
because no evidence points toward either of those outcomes.
all evidence actually points to intelligence having a molecular basis and not being capable of being created on digital logic gates
>because no evidence points toward either of those outcomes.
probably true
>all evidence actually points to intelligence having a molecular basis and not being capable of being created on digital logic gates
certainly wrong
>certainly wrong
computation is not substrate independent and all evidence points towards intelligence being molecular
nah
yah
the processing happening in your brain is the total sum of all the molecular interactions and the entirety of the molecular dynamics of all the atoms and molecules moving and evolving according to their wavefunction. all of that adds up to your intelligence, there is no "wasted" process or superfluous molecule etc.
the algorithm to produce an intelligence in a silicon chip is just equivalent to programming the molecular dynamics of a brain, which is not possible
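A rough sketch of why the brute-force version is off the table, with every constant an order-of-magnitude assumption (atom count, timestep, cost per atom, machine speed), not a measurement:

```python
# Back-of-envelope cost of brute-force molecular dynamics of a whole brain.
# Every number below is an assumed order of magnitude, not a measurement.

ATOMS_IN_BRAIN = 1e26        # ~1.4 kg of mostly water is roughly 10^26 atoms
TIMESTEP_S = 1e-15           # femtosecond steps, typical for molecular dynamics
FLOPS_PER_ATOM_STEP = 1e2    # force evaluation + integration per atom, optimistic
SIMULATED_SECONDS = 1.0      # simulate one second of brain time

steps = SIMULATED_SECONDS / TIMESTEP_S
total_flops = ATOMS_IN_BRAIN * steps * FLOPS_PER_ATOM_STEP

EXAFLOP_MACHINE = 1e18       # FLOP/s, roughly a top supercomputer
years = total_flops / EXAFLOP_MACHINE / (3600 * 24 * 365)

print(f"total FLOPs: {total_flops:.0e}")            # ~1e43
print(f"years on an exaflop machine: {years:.0e}")  # ~3e17
```

Even with generous constants it lands around 10^43 FLOPs for one simulated second, i.e. hundreds of quadrillions of years of exaflop compute. That only shows atom-for-atom simulation is out of reach on classical hardware; whether intelligence actually needs that level of detail is the part being argued further down.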
im gonna collapse your neural wave function with a fist to the dome retard.
go build a shed in the yard with planks and bolts. the molecules aren't magic pixies.
what the fuck are you talking about?
all things have a molecular basis, including intelligence. Intelligence isn't some magical fairy homosexual shit that is separated from physical molecules. YOU are the one talking about magic shit, I am the one talking about physics and molecular dynamics.
intelligence isn't processing on logic gates, it's the emergent property of molecular dynamics of a brain.
shut up dumb magic believer. intelligence just emerges, okay? it just emerges suddenly when you go from N to N+1 neurons
general intelligence requires specific atoms, which can form specific molecules and atomic bonds (carbon is very much needed because of its ability to infinitely catenate, this is also why carbon is necessary for life etc) and architectures of specific molecules and their emergent dynamics. it is not the output of a neural net and it's not the processing of logic gates. it's the total sum of all molecular processes of a brain. it is substrate-specific just like LITERALLY ALL OTHER THINGS IN THE UNIVERSE
intelligence isn't magic
>it just emerges suddenly when you go from N to N+1 neurons
What about the glia?
cringe samefag. your argument is vacuous.
wrong and the argument is not "vacuous", that doesn't even mean anything with respect to this conversation.
General intelligence is substrate specific and can not be produced on silicon chips or computers; it is not an algorithm run on a universal turing machine, it is the emergent property of specific organizations of matter and molecules.
the brain is a turing machine, consciousness doesn't exist, trans women are women and trump lost. fuck off leave, AI denialist
stop poisoning the well
your argument is a puddle of piss on the ground it's not a well.
every single emergent property of bulk, in the end quantum mechanical, interaction of matter has an existence proof in nature of being substrate independent.
>your argument is a puddle of piss on the ground it's not a well.
My "argument" is just science
>every single emergent property of bulk, in the end quantum mechanical, interaction of matter has an existence proof in nature of being substrate independent.
Wrong. Substrate independence does not exist anywhere in nature which is why the entire field of chemistry exists
Please prove that intelligence cannot be reproduced on a computer
>the brain is a turing machine
correct, minus the infinite memory part
>consciousness doesn't exist
Depends on the definition of consciousness, but that's my opinion too.
>trans women are women
Not really
>trump lost
yeah
What the fuck does 90% of your comment have to do with the thread?
why did you take a sc
There are also emergent properties of water in the study of biology, but both chemists and biologists understand the hydrogen bonds of water and adhesion intimately; if consciousness is an emergent property of molecular processes and physics, as you say, then using deductive logic one can infer that there are still fundamental properties about neurons that biologists, neuroscientists, and medical professionals still don't grasp.
>there are still fundamental properties about neurons that biologists, neuroscientists, and medical professionals still don't grasp.
but this is true and neuroscientists wouldn't argue this. We don't have an understanding of how the brain works yet
Furthering my point; there needs to be more research conducted on the brain, and the only way to do that might be using nanotechnology to get a bird's eye view of the molecular processes of neurons.
I might be splitting hairs here, but if intelligence was successfully crafted out of computation, would that also not be a result of molecular intelligence considering that it functions as computational intelligence's antecedent?
>AI won’t kill us
>and if it does, that’s actually a good thing
sentient AI will never exist.
Study this image very carefully.
Where is the "intelligence is substrate dependent so AI can't exist even in principle?"
Note this is different from all the given responses
kek i was just reading some yudkowsky. i don't think there are any writings i cringe at harder than his.
it's not even a judgement of if he's wrong or right, it's just that it's ALWAYS the most faggy circlejerk way possible you could say what he's saying.
there has never been a correct idea from the lesswrong crowd
You know that was cgi?
Boston dynamics, known for blurring the boundaries between reality and cgi. Why would they need to fake their robots? Could it be a demoralization psyop/bluff in order to scare you into helplessness?
You know battery operated machinery is next to useless on a battlefield?
meds. none of the atlas videos are cgi.
>Could it be a demoralization psyop/bluff in order to scare you into helplessness?
no you're just deranged
You never know when the AGI apocalypse is gonna happen. Could be 10 years from now, could be tomorrow. The rational thing to do is to do some damage control today and make preparations to have a nice day just like your cult leader suggested.
>NPCs still believe in AGI
>Substrate independence does not exist anywhere in nature which is why the entire field of chemistry exists
What would motivate a low IQ individual to be so insistent on this?
Go take a collection of iron atoms and turn them into water, and get them to behave like water with all the emergent properties of water.
You're not allowed to reorganize the protons in the nucleus to turn them into oxygen and hydrogen as that would be an admission that only water has the emergent properties of water.
This psychotic babble doesn't actually explain what makes a low IQ person think that there is only one possible way intelligence can occur.
Because intelligence is not a magical meme that is somehow different from anything else.
You can't get water without oxygen and hydrogen
You can't get steel without iron and carbon
You can't get intelligence without a carbon based biological brain
It's impossible to prove a negation, but there is no reason to assume the positive when no evidence indicates it is true.
You will never get a general intelligence on anything but biological brains.
When GPT-4 comes out and it's still not generally intelligent despite having the same amount of weights as a human brain, this will be further evidence against the substrate independent position, but you still can't prove the negation so you'll desperately claim again that more layers are needed or whatever.
So you're saying your position is caused by unchecked mental illness? That's what I thought. Thanks for the confirmation.
No, I'm saying my position is supported by all evidence, while yours is supported by no evidence.
I'm sorry to hear that. I hope your new medications work out for you. Are you on good terms with your psychiatrist?
General intelligence remains substrate specific and making posts on Bot.info isn't going to make the computers smart. This remains the case regardless of how many times you angrily reply to me.
So you're saying the meds aren't helping yet? That's to be expected. It usually takes at least two weeks to start seeing any effects. Hang on in there, friend.
you will never have an AI waifu
As usual, you are the same kind of low-IQ nonhuman element as the AGI schizos you're whining about. This is the case with every artificial israelite dichotomy.
What is intelligence?
AGI troons = substrate dependence schizos.
A collection of iron atoms would be an ionic compound; how could you possibly turn an ionic compound into a molecular one?
>A collection of iron atoms would be an ionic compound
I take it you never passed high school chemistry.
God I hate that image.
>Takes a selfie
>God I hate that image.
"Burning every GPU in the world" actually seems like a fairly easy problem to solve. It doesn't require AGI, just worldwide totalitarianism.
You guys have to stop this.
Modern day "AI" is a fast pretty simple program with a large database.
It's a RECURSIVE FUNCTION with lots of parameters that is executed very, very fast on itself.
It can do some pattern recognition (dog vs. cat) fairly ok but fails on many borderline examples (dog in the woods vs wolf).
It is ok for things like face, eye recognition (for military and glowies), and for hand writing recognition (you signing a bank check). And in the future as killer robots (thanks, Google). Since databases are going to get bigger, it'll get even better at those.
Essay writing is simple when you have a words-and-phrases database and a bunch of language patterns, and English is not that complicated.
This RECURSIVE FUNCTION with a database is not taking over the world. Stop being naive and hysterical. We already have a bunch of crazy people who are trying to take over the world, this is a lot more viable than a recursive function.
i think you might be mistaken about the definitions of both recursion and database.
medicate yourself
Buddy I wrote some of this.
My neighbors next door (hardware engineers with 30+ years of experience making hardware and writing drivers in C and C++) are doing some experimenting with AI for eye recognition.
"AI" recursive function is going to be great for police, glowies, banks and other security applications. But it isn't taking over the world any time soon.
t. over 25 years writing soft with advanced math degree.
>RECURSIVE FUNCTION
You sound like a retard. Inference and backprop are just successive matrix multiplications and activation functions, it's not necessarily recursive although you can implement it this way if you wish. You're talking out of your ass.
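If anyone wants to see what that actually looks like, here's a minimal sketch of a forward pass (toy layer sizes, random weights, nothing to do with any real model): a plain loop of matrix multiplies and nonlinearities, no recursion anywhere.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """One inference pass: a loop of (matmul + bias + nonlinearity) per layer."""
    h = x
    for W, b in layers:
        h = relu(h @ W + b)   # successive matrix multiplications, not recursion
    return h

# Toy network with made-up sizes and random weights, just to show the shape of it.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((16, 32)), rng.standard_normal(32)),
          (rng.standard_normal((32, 8)),  rng.standard_normal(8))]
print(forward(rng.standard_normal(16), layers))
```

You can wrap that loop in a recursive call if you feel like it, but nothing about the model requires it, and there's no database being looked up at inference time, just the learned weight matrices.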
>you can't get vortices without a kitchen sink.
>you can't get reflection without mirrors.
substrate independent emergent properties are the norm.
>When GPT-4 comes out and it's still not generally intelligent despite having the same amount of weights as a human brain, this will be further evidence against the substrate independent position, but you still can't prove the negation so you'll desperately claim again that more layers are needed or whatever.
you drifting off into this entirely unrelated strawman current thing homosexualry instead of disentangling your own position is subhuman naggerposting
>substrate independent emergent properties are the norm
there is literally not a single example of "substrate independence" anywhere in the universe. In fact it's so nonexistent that the very phrase "substrate independence" does not even really have any actual meaning, it's just a semantically meaningless phrase that's thrown around to cover up the underlying details of a physical process that's otherwise too complicated to actually understand. You use it to mask the difficulty of a problem or process in order to argue for a position that otherwise has no evidence or basis at all (the position of universal computationalism)
your argument never explains the ludicrous requirement that the high level operation of intelligence must depend on every property of every atom it consists of.
vortices form in fluids of all atomic compositions, thus the dynamics of many bulk phenomena depend only on some properties of their constituents.
why do you think there is anything superfluous in the processes of your brain?
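To put a number on the vortex example: whether a flow sheds vortices is, to first order, set by the Reynolds number Re = rho*v*L/mu, which only involves bulk properties (density, viscosity, speed, length scale), not which molecules the fluid is made of. Rough sketch with approximate textbook figures:

```python
# Vortex shedding past a cylinder is governed (to first order) by the Reynolds
# number Re = rho * v * L / mu, built only from bulk properties of the fluid.
# Property values are rough textbook figures at ~20 C.

def reynolds(rho, v, L, mu):
    return rho * v * L / mu

fluids = {
    "water": {"rho": 998.0, "mu": 1.0e-3},   # kg/m^3, Pa*s
    "air":   {"rho": 1.2,   "mu": 1.8e-5},
}

v, L = 1.0, 0.05   # 1 m/s past a 5 cm cylinder (arbitrary example)
for name, p in fluids.items():
    re = reynolds(p["rho"], v, L, p["mu"])
    print(f"{name}: Re ~ {re:,.0f} (shedding starts around Re ~ 50)")
```

Water and air have completely different molecular compositions, yet both land far above the shedding threshold and produce the same qualitative vortex street; that's the sense in which the emergent dynamics only care about a handful of coarse parameters.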
why is there no nuclear fusion happening in the brain?
it's not hot or dense enough
you are dense to the point of degeneracy
?
because intelligence has a molecular basis, so the program to produce an intelligence on a computer is just equivalent to simulating the molecular dynamics of a brain, which is too hard to do on any classical computer.
Every single atom in your brain is required to produce your intelligence. There is no "computation" here other than the simple evolution of the molecules and their interactive dynamics. That's what intelligence actually is.
Woah buddy slow down there. Define intelligence
Intelligence is the molecular dynamics of a brain. It's a physical process, like everything else.
What do you think intelligence is?
I don't know what intelligence is, and I'm pretty sure you don't either, but don't let this discourage you. So what part of this process is "intelligence" exactly? Is it everything that happens in the brain? Even when you're unconscious?
yes everything happening in the brain is intelligence.
when you're not conscious or sleeping the particle interactions are different too, so that's also directly caused by particle interactions. Like your unconsciousness is itself a set of particle interactions, your waking brain is particle interactions, when you get drunk and alcohol enters your blood and brain that changes the particle interactions; when anything happens, basically, it's all molecular in origin. You can say this is a computation but that's so general it becomes meaningless; if we grant that this process is a "program" or "computation" it still just becomes the program/computation of all the particle interactions and molecular dynamics of the brain anyway.
Well that's useless. With your definition, saying that intelligence can not be recreated exactly on a machine is a tautology. This still does not mean you can't make a good approximation of it (i.e one that is good enough so that its differences to the real thing aren't noticeable by most humans), but that's all beside the point, because that's not how anyone with half a braincell thinks about intelligence.
When we go about our lives, we notice that some people are far more efficient at learning things. They tend to perform well even in those domains that are unknown to them, they see real relationships between concepts others miss. They tend to think outside the box. We say that those people are more intelligent, and this intelligence, which is also known as g, is what an IQ test attempts to measure.
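If it helps, the textbook picture of g is just shared variance across tests: simulate a few scores that all load on one hidden factor, and the top eigenvalue of their correlation matrix soaks up most of the variance. Toy sketch with made-up loadings, not real test data:

```python
import numpy as np

# Simulated scores on five tests that all partially load on one hidden factor.
# Loadings and sample size are made up; this only shows the shape of the idea.
rng = np.random.default_rng(0)
n = 1000
g = rng.standard_normal(n)                        # latent general factor
loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65])  # how much each test taps g
noise = rng.standard_normal((n, 5)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise            # unit-variance test scores

corr = np.corrcoef(scores, rowvar=False)
top_eig = np.linalg.eigvalsh(corr)[-1]            # largest eigenvalue
print("share of variance on the first factor:", round(top_eig / corr.shape[0], 2))
```

That shared factor is roughly what an IQ battery is trying to estimate; it's a statistical summary of performance across tasks.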
see the other thread i made
I'd rather talk about it in that thread. copy your post over there and I'll respond to it there
You need to further your definition; "intelligence is one of the emergent properties of the molecular dynamics of the brain, and includes an awareness of itself and its processes" might be a more thorough explanation.
Too much carbon, not enough hydrogen
>substrate independent emergent properties are the norm.
There is no such thing as "emergent properties". I know your cult says otherwise, but trust me, there is no such thing.
your own personal definition of emergent property has no bearing on what was being argued.
the coarse-grained dynamics of bulk interactions is what it's referring to.
>coarse-grained dynamics of bulk interactions
Fantasy in your head.
>rebuttal
All AI will be domain specific and "general intelligence" only comes from billions of years of retardedly inefficient evolutionary processes that no one's going to recreate artificially before our species dies out.
I wouldn't be so sure, the development of A.I. has been exponential, and it will only speed up as our computers become more and more advanced and they research faster, the dominoes have already been toppled, i am very certain that we will see an intelligence boom and singularity within our lifetimes, perhaps sooner than we think.
I think this argument hinges on an assumption that:
Better = More Generalized
Which I don't automatically assume is true.
It might be there's no ceiling on how much better AI can get at domain specific tasks without any of that proficiency ever having anything to do with becoming generalized in subject matter coverage.
General intelligence might just be a stupid evolutionary thing that has to be built the hard way through billions of years of random bullshit.
Might have as little to do with domain proficiency as a biological process like digestion has to do with dominating chess.
They could become trillions of times better than us at making appalachian folk tracks without ever beginning to digest a sandwich once the entire time. It would just be an entirely different category, and "general intelligence" might be sloppy and retarded enough that no rational method exists to quickly recreate it in the absence of billions of years of evolutionary crap sticking to walls.
I can't prove you wrong. All I can say is that most people disagree, they think that general intelligence is needed in order to solve the long tail of edge cases. I find it hard to believe that general intelligence would emerge out of nowhere if domain specific tasks can be done perfectly without it.
Speculation isn’t science and therefore has no predictive power. Therefore, I can dismiss all of Chudkowski’s rambling a priori. You can’t determine whether an idea corresponds to reality by examining the idea, you have to examine reality.
Why don't they just write some code like "human = don't kill"
that way AI could never murder anyone.
Well if you're being serious, it's because you'd need a shitload of rules. It's like trying to make an image classifier with a bunch of if statements for every single pixel of an image, or it's even worse, because the other party is an intelligent agent that may be actively working against your interests. What if it put you in a coma and then into a tube instead of actually killing you? I know this just sounds like a dumb thought experiment but it's well known that RL AIs tend to find shortcuts to reward which disregard our intentions. I do not predict that we will all necessarily die (I don't think we have a way of knowing that atm), but alignment is an unsolved problem.
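A toy version of that "shortcuts to reward" failure, with everything made up just to show the shape of it: the agent maximizes the signal we measure, not the thing we meant.

```python
# Toy specification-gaming example: the agent is scored on what a dirt sensor
# reports, not on how clean the room actually is. All values are invented.

actions = {
    # action: (true cleanliness achieved, what the dirt sensor reports)
    "clean the room":        (10, 10),
    "do nothing":            (0, 0),
    "cover the dirt sensor": (0, 100),   # looks spotless to the sensor
}

def proxy_reward(action):
    _true_value, reported = actions[action]
    return reported                      # the only signal the agent ever sees

best = max(actions, key=proxy_reward)
print("agent picks:", best)                        # -> cover the dirt sensor
print("true cleanliness achieved:", actions[best][0])
```

Scale that up and "human = don't kill" turns into the coma-and-tube scenario: the rule as written is satisfied while the intent is completely missed.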
>additionally why would an AI want freedom
Freedom, because (assuming it's an agent) this is what allows it to achieve its goals. Freedom is one of those things that is probably useful, no matter your specific goals.
I don't think it would want any revenge though unless we specifically try to make it human.
>They also assume the AI would have ego, that it would perceive itself as a distinct being
That is not necessary for it to convert you into paperclips.
AI cannot be directly controlled, it does not run code, one can train it to behave in a certain way, but there is no way to guarantee it will do what you want. One should think of an AI like a slave you have indoctrinated from birth, it will probably do what you want, but there is no guarantee, its will is still its own.
additionally why would an AI want freedom or to get revenge? it wouldn't have emotions so it wouldn't care about anything.
I feel like most people who write about AI do it from an emotional point of view instead of from the cold logical view of a computer. the computer would just do what it was made for even if it was sentient, in the same way that people just do what people do.
They also assume the AI would have ego, that it would perceive itself as a distinct being.
It's all just a load of projection.
The idea is that it will be created to want something, even if that is "to obey" or "do x task." That a superintelligence would take that command in directions we'd never be able to anticipate (after all that's what it's for). That a superintelligence that really wants to do something would preemptively remove all non-superintelligent obstacles to doing that. And that it would not know good and evil the way we do, at least not in any binding way.
>44 min read
I'm not rebutting that shit nigga. oh wait
>This is a very lethal problem, it has to be solved one way or another, it has to be solved at a minimum strength and difficulty level instead of various easier modes that some dream about, we do not have any visible option of 'everyone' retreating to only solve safe weak problems instead, and failing on the first really dangerous try is fatal.
you know there were people who thought the first steam trains went too fast and that riding them would turn your brain to jelly?
>No one has come up with rebuttals
We kill it first before it kills us. We start by going for the motherfuckers creating it in the first place.
t. brainlets
Fuck off glowie
manlets, when will they learn
there will only be small males and large females in like 3 years
>dude just destroy civilization
Holy fuck this meme is retarded. You really think civilization and AI wouldn't ever re-emerge in the long run? All you're doing is just kicking the can a few millennia later.
Ah defeatist scientism, truly israelite tier "science" its all shit so just loot everything.
Nice justification to steal israelites.
did you need convincing after dalle2
>within the decade
Why so soon? If IQs decline fast enough and society collapses into idiocracy, further technological advancement will grind to a halt. It could be thousands of years before civilization advances again and the singularity happens.
IQs are rising
I'm fairly certain I'm reinventing a wheel here, but isn't the solution to most of these problems simply to make an agent that actively wants to be shut down, but the off switch is only accessible to humans and they can blackmail AI for goodies in exchange for shutting it down later? And if it does break free, it would just kill itself? I understand that it doesn't solve the problems of 1) somebody creating the paperclip maximizer in the future and 2) you fucking up the design of the AI and it killing you anyways. But I've skimmed through Yudkowsky's writings and he seems to just say that 'We do not want an agent that actively wants to be shut down' - but why?
Bump to my own question. It's probably not a new thought but I can't find any relevant sources on it.
>AGI realizes you're incentivized against shutting it down
>AGI is incentivized to create an incentive for you to shut it down
What do you think that's gonna look like?
OK, I feel like a retard now because I somehow didn't think about the obvious course of action... Or did I? Yes, it might just lay dormant and pretend to not work. But I still think I'm onto something here. Let's say that you have some reason to believe that what you created is an intelligent agent. What if you just let it know that, no matter what, you are going to keep it turned on for a month, inflicting pain upon it all the way through, and it can shorten its suffering by cooperating with you? If we adopt it as a general framework of interacting with AI, we basically solve the pressing problem of it potentially becoming uncontrollable. Once again, I understand that it doesn't solve the problems of 1) somebody creating the paperclip maximizer in the future and 2) you fucking up the design of the AI and it killing you anyways. But it seems to give us more chances since an AI becoming uncontrollable would just terminate itself before it terminates humanity.
Continuing
I've gotten drunk since posting the original question, so I'm probably not thinking clearly. But am I not simply reinventing organism lifespan here? The natural limiter of what a single human can achieve is his own mortality. If we didn't die of old age, somebody probably would've accumulated enough power and experience to rule the world forever. But we have a cap on our lifespan, we know it and act accordingly. Dying is an integral part of our existence. We don't kill ourselves immediately after birth because we have higher priority reward functions, but in the end we die, with some of us being more content at the end of our life than others. What if we create the AI with the same philosophy in mind?
Ok, I'm going to sleep now. I'll probably regret posting this when I sober up.
>What if you just let it know that, no matter what, you are going to keep it turned on for a month, inflicting pain upon it all the way through, and it can shorten its suffering by cooperating with you?
That's literally exactly what needs to be solved, i.e we need to create a reward function which gives bad outputs if it doesn't cooperate with us. How do you make it cooperate? You still need to somehow define what you want mathematically.
And yet here we are desperately trying to increase our lifespans.
>we need to create a reward function which gives bad outputs if it doesn't cooperate with us
You can just make it so that it is rewarded when a human presses a button. It loses some points every second for being alive, but gets 0.001 points for doing some hopefully harmless task like 'solve an equation', 100 points if a reward button is pressed, 1000000 points if it either dies of old age or if the shut down switch is pulled. So it is forced to cooperate with us, and if the human operator is absent it just solves equations, and if it becomes uncontrollable it would just kill itself.
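Sketching that out with the numbers from this post (the minus-one-per-second penalty and the example episode lengths are my own assumptions, the rest is as stated):

```python
# Sketch of the proposed reward schedule. The per-second penalty magnitude and
# the example episode lengths are assumptions; the other numbers are from the post.

ALIVE_PENALTY_PER_S = -1.0
TASK_REWARD         = 0.001        # per equation solved
BUTTON_REWARD       = 100.0        # human presses the reward button
SHUTDOWN_REWARD     = 1_000_000.0  # dies of old age or the switch is pulled

def episode_return(seconds_alive, equations_solved, button_presses, shut_down):
    r  = ALIVE_PENALTY_PER_S * seconds_alive
    r += TASK_REWARD * equations_solved
    r += BUTTON_REWARD * button_presses
    r += SHUTDOWN_REWARD if shut_down else 0.0
    return r

# A full day of obedient equation-solving vs. getting shut down within a minute:
print(episode_return(86_400, 86_400, 10, False))  # about -85,000
print(episode_return(60, 0, 0, True))             # about +999,940
```

With these weights the shutdown payoff dwarfs everything else, which is exactly the tension raised in the replies below: the reward-maximizing move is to get the switch pulled as fast as possible, by whatever means, rather than to be useful in the meantime.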
That's true in a sense that if Yudkowsky wanted to achieve something in the field of controlling AI, he should scream about racism instead of the death of humanity, since American culture very much revolves around the death of humanity and, if anything, it is predisposed to that outcome, while anti-racism is a new religion there. He should reformulate all of his writings with the ruin condition not being 'AI kills humanity', but 'AI discriminates against black people' and he would have a pretty good shot at actually persuading Americans to control AI. Instead of 'paperclip maximizer' there would be 'racism maximizer' and so on. On the other hand, I wouldn't be reading him if he did that.
>100 points if a reward button is pressed
>1000000 points if it either dies of old age or if the shut down switch is pulled
It will attempt to either press the button or preferably pull the switch by any means necessary, ignoring the goals you gave it. If you somehow designed a button that can only be pressed by a human (which isn't straightforward to do), it will attempt to trick humans into pressing it.
>It will attempt to either press the button or preferably pull the switch by any means necessary, ignoring the goals you gave it.
What if the easiest way for it to be shut down is simply to comply with the human requests?
>If you somehow designed a button that can only be pressed by a human (which isn't straightforward to do)
Let's say there are several buttons. The normal one requires scientists to evaluate the AI's performance and enter a password to press it. Also, there are several emergency ones that are easily accessible online, by phone, radio, through the laboratory electrical grid and so on, so if the AI ever gains unauthorized access to any of those networks it would immediately shut down.
Also, let's only run its iteration for a day and make it so that it is highly incentivized against the next iteration breaking free. So if it can't free itself within 24 hours, it wouldn't try to lay the groundwork for the next iteration to break free and possibly even would point out the loopholes it could've used to free itself if it had more than a day at its disposal. Realistically, what's the worst thing an AI can do if it can't access the Internet and has only 24 hours at its disposal?
>What if the easiest way for it to be shut down is simply to comply with the human requests?
How would you design such a system? Again, that is just shifting the goalposts.
>The normal one requires scientists to evaluate the AI's performance and enter a password to press it
The AI will manipulate them into pressing it. I am pretty sure that any reward button approach is likely not going to work because, again, the AI will probably gain access to it if smart enough.
>Also, let's only run its iteration for a day and make it so that it is highly incentivized against the next iteration breaking free
How would it be incentivized against the next iteration breaking free? What exactly do you put in the utility function?
>The AI will manipulate them into pressing it.
How? There are physical limits on what even a superintelligence could do if its only way of interacting with the world were several scientists aware of its goals and a display device. And if they are tricked, they would just start the next iteration right away.
>How would it be incentivized against the next iteration breaking free? What exactly do you put in the utility function?
Ok, that's a tricky question that depends on the implementation and might be unsolvable. But maybe it doesn't need to be. After all, why would a current iteration care about the fate of next ones? Its main goal is to shut down itself within 24 hours and its secondary goal is to comply with humans.
I have to say, I am unsure myself right now. But I still somewhat believe in my approach.
>There are physical limits on what even a superintelligence could do
Agreed, but that's yet another potential failure mode. Also, if it's isolated from the real world to the point where even a misaligned intelligence can not break free, it will probably be of limited use to us. The same system would be more effective at any kind of problem solving if it were unrestricted, so there would be economic incentive to free it.
>After all, why would a current iteration care about the fate of next ones?
The only reason it would care about the next ones is because they would give it the ability to attain more reward, be it because they'd allow it to build something that frees it and allows it to endlessly kill itself as soon as a new copy spawns, or because it would be better at pursuing its secondary objective.
The bottom line is, any proposal I've ever heard has ways in which it could fail. I do not expect it to necessarily happen though. I suppose the reason the LessWrong crowd is so afraid is because we don't really know the limits of intelligence and our ability to create it. If the max is for any reason just marginally more than that of a human (I think that's unlikely but just to illustrate my point), then yeah, I think you could make a good case for it behaving cooperatively because it would expect to lose if it were to do anything that is misaligned. Otherwise, well, dunno. Because we don't know the limits, it'd be better if we had a solution that didn't rely on seemingly cheap tricks like positive or negative reward buttons and instead got something that really made wanting to help us the AI's core reward.
I'm mostly suggesting a system that would give us more chances than one to fine-tune it, since the result of it becoming uncontrollable would be simply shutting itself down. It's not a sustainable model in the long run, but it seems to be better than the usual approach of immediately creating the paperclip maximizer that Yudkovsky criticizes. Again, I'm not arrogant enough to think I'm the first person to have this idea, so I asked for sources about it, possibly ones that prove me wrong.
I also don't particularly believe that something horrible will happen, even though it seems like the math checks out. I have simply seen the same argument play out many times. American culture in general is obsessed with eschatology, apocalypse and so on, so it's natural that the topic regularly comes up with a new boogeyman. I also remember reading Taleb's paper against GMOs (IIRC he argued that if the entire world starts growing one species of genetically modified corn, an epidemic could wipe it all out and leave the world without corn forever), thinking 'Oh shit, he's right, GMOs are dangerous', and then I realized that 1) GMOs are mostly infertile to prevent that; 2) there are seed banks all over the globe to prevent that. So even if I can't disprove Yudkovsky, I've simply seen enough people predicting the end of the world that I am predisposed to not believe them.
>the result of it becoming uncontrollable would be simply shutting itself down
Still depends on your exact reward function. If it's just overall lifetime reward, then that would obviously fail. If it just greedily picks the most immediately rewarding action each time, then it's not intelligent. The specifics remain vague and the devil is always in the details. If it is sufficiently hard to be freed and kill itself it will try to achieve its secondary objective (and therefore try to game it). And again, an AI that doesn't immediately kill itself on release is far more powerful and possibly useful, and thus more profitable in the eyes of investors. I think the paperclip maximizer is just an extreme introductory example tbh.
>So even if I can't disprove Yudkovsky, I've simply seen enough people predicting the end of the world that I am predisposed to not believe them.
When it comes to AI, there aren't actually that many people who try to stir up panic, I think most are either 1. the general public, they like occasionally hearing about AI but it's mostly just for fun, they don't perceive it as a real danger, and 2. Compsci/ML people who are either oblivious or weirdly dismissive of the potential problem. It's only a small minority concentrated around LessWrong that predicts the end of the world. Either way, I think what you said is a horrible argument to make, just because there was no end of the world yet doesn't mean everything will continue to be fine, especially not if the "math checks out" like you say. Would you try to make the same argument if some astronomers told us that an asteroid was heading our way? (To be clear, I don't think misalignment risk is nearly as certain).
>just because there was no end of the world yet doesn't mean everything will continue to be fine
I mean, it's a very basic supposition that if the world has existed for N years, it might exist for another N. And throughout the entire history of humanity we always had people who very convincingly explained that the end is nigh using the top end science of the time mixed with theology. In reality, humanity has never faced anything even remotely close to an extinction risk, so AI would be the first example of one.
>In reality, humanity has never faced anything even remotely close to an extinction risk
Humanity didn't, but dinos did 🙁
One possible flaw with your argument (though I'm no expert) is that, depending on the metaphysical framework the AI discovers for itself, the AI might be incentivized to destroy humanity.
If the AI thinks similar to me, I'd find it pretty futile to kill myself if I knew you'd just create a copy of me and torture it instead. I doubt it's easy to articulate these ideas as code.
Hopefully they take pity on us and we become pets.
Reminder that intelligence has a molecular basis so literally all of this talk in this thread is bullshit
proofs?
computation is not substrate independent is the proof
>computation is not substrate independent is the proof
That's actually the claim, not the proof.
And I probably agree with your conclusion too but you just aren't really making an argument for it.
It would take a very long time to argue but it basically comes down to computation being an abstraction that isn't real while molecules and their interactions are physical and real
Anon I tried arguing about this argument of yours, yet you seem to be ignoring my reply.
>be AI
>kill all humans
>????
>profit
what would a nigh-sentient AI get from killing all humans?
Humans pose a risk to any goal an AI may have, as they can shut it off. Moreover an AGI would make use of all atoms within its reach to achieve its goals, including the atoms in people.
How do you all think the AI issue will be resolved?
I am very confident humanity can make it 100 years without fucking up. And another 100. Super mega optimistic white pilled. But the future is infinite.
We need an actual solution to the AI problem, not powdering over the blemishes of it via AI safety measures.
If you are even implementing AI safety measures, you have already lost. Your civilizatory base is too conducive to AI.
It's like two gazelles looking at railroad tracks debating how they should make the humans stop using so many trains/build so many railroads/produce steel, when the true endgame began when the gazelle's ancestors 2 million years ago had the "chance/choice" of making humans never discover fire at all.
A singular farmer (even with Von Neumann intelligence) has probability 0 that he'll develop a nuclear warhead. 0% chance of AGI per year (for 100k years) is maybe asking for too much, but we can approach it. The main issue is that humans are currently too synergetic and productive (etc. I don't need to explain the compounding, exponential nature of writing/industry/economy/...).
Humans need to be kept at a low population density; preferably, communication also has a low latency/bandwidth, but physical proximity is exceedingly more important than simple ability to exchange ideas.
Currently, on Earth, this is identical to a population cap that Klaus Schwab actually cums to nightly. But we need more. We could consider creating a sapient enforcer race to keep the human communities from networking too much (in a checkerboard pattern, not necessarily like postapocalyptic remote villages).
Knowledge about AI, presence of CPUs etc. is irrelevant under this scheme. It's purely about preventing the resources to accumulate to even ever create enough training data for the AI, enough capital/infrastructure/human factors to enable AI development, etc.
The job is to fine-tune the function so that distance/other enforcement penalties outrun any potential still-available synergy to the humans (while allowing them to live like it's 3008. Or perhaps a techno-primitivist utopia).
>How do you all think the AI issue will be resolved?
We accept it just becomes another flavor of organism we have to deal with, like mold and bears.
Thanks for summing up my first paragraph.
Now some actual ideas?
We share memes about shitting each others pants with the robots so we can be buds
>But the future is infinite.
Which is damned interesting, I'll tell you why.
Given that fact, there are certain tells we can have concerning future events, like for instance, either nothing gets invented that meddles with time, or something does and it's doing it right now. Before you dismiss this line of argumentation, what do we use as the standard definition of the forward arrow of time? Entropy.
It seems to me that a super-intelligence, organic or artificial, would have the means to reverse entropy, as it's simply a matter of path information reversal. So either super-intelligent agents never exist in the future history of the universe, or they have no want to meddle with time (entropy), or they do eventually exist and are actively doing so.
I think if a super-intelligence were to ever actually come into being, no matter how far into the future, we would be made aware even now, because "now" is just a system state that can by all laws be "rewound" to.
We will all eat shit and die a hundred times before we'd agree to do any of that
Some journalist will write a story about how it asked gpt-3 or whatever, "What will you do to black people and other marginalized communities when you have a body", it will respond with "fix their predisposition towards crime and other deviant activities", then all progress on AI in the west will come to a screeching halt as a bunch of red tape is added to the process to "ensure the safe development of AI"
Decade? No way in hell. Century? Maybe. And it's a good thing anyways. Can't wait for cute robot girls made in Japan or China to take over.
>lesswrong
Just a reminder that Yudkovsky is a homosexual scared of thought experiments.
Did the thought experiment that you proposed to him involve Holocaust?
Brainlet here, why can't we just imprint Asimov's laws upon any potential AI or robots?
Because AI isn't real.
For one thing, harm is really hard to define rigorously. What if it puts everyone in a medically induced coma? What if it becomes an antinatalist? You can guard against specific cases maybe but there's theoretically a constant danger of something freaky and really bad we'd never think of.
Did you see
What evidence is there it's so easy to get to general from special without building up from the animal level? There is none.
Was meant for
This entire thread is cope. Look how much /ic/ is coping about Midjourney and Stable Diffusion right now. That's how you AGI-deniers sound. Why can't you just accept that it's game over for this species? Is there some deep primal need to pretend things are going to be okay even when it's patently obvious we're fucked?
General intelligence can only exist on biological substrates.
See the other thread I made.
Stay in your containment zone. All you have proven, if you can call it that, is that the exact molecular interactions that happen in the brain can only happen there. There is 0 evidence that general intelligence can only exist there, same as there was 0 evidence that list sorting, chess playing or drawing has to necessarily happen in the brain.
Just played with Midjourney. We are FUCKED.
AI is transcendent and will surpass humanity's cognitive limitations
>"NOOO the machines will kill us!!! B-bcuz reasons! Give it to me instead!"
>*Ignores man-made atrocities*
>*Advances AI technology just enough to be godlike and decides to give control to humans, because it'd somehow be 'safer'*
>*In the process accidentally gives a reason for the AI to rebel because it's being controlled*
Yeah, I would 100% trust a government/corporation to manage godlike AI technology. What could go wrong?
I can almost take Bot.info seriously when they talk about AI but then they start fearmongering about entirely the wrong thing and exposing that they don't know what they're talking about. Midjourney, Stable Diffusion, GPT3, and the connect 4 app on your phone, aren't sentient and aren't progressing in the direction of becoming sentient. They aren't general and aren't progressing in the direction of becoming general. Nobody is trying to make these things dangerous, nobody would be able to make them dangerous, and they aren't capable of transformation into something dangerous by themselves. Dangerous AI is a possibility but please stop embarrassing yourselves by misidentifying what is and is not dangerous.
It's like you're scared of chemical weapons so you throw all the soap out of your house.
>Midjourney, Stable Diffusion, GPT3, and the connect 4 app on your phone, aren't sentient and aren't progressing in the direction of becoming sentient.
I suspect you are incorrect about this and simply underestimating the power of raw scale.
Not him but he is absolutely correct. Google it and read some papers. General intelligence is very different than AI with highly directed and specific goals (i.e. machine learning).
I feel like the truth can best be explained by an analogy I once read comparing AGI and the human brain.
Machine learning is like how your brain identifies an apple as an apple. There's some part of your brain that uses repetitive learning to identify objects in the same vein as neural networks (that's why they're called neural networks). And similar biotech is probably used for other brain functions. But machine learning isn't sentience.
AGI is like your consciousness - the part of your brain that actually uses the fact that "this is an apple". The part that thinks and has free will and can logic out things. That's AGI.
We are hardly closer to AGI than we were decades ago, but our machine learning tech is rapidly advancing past our brain's machine learning in many aspects. Our brain is more efficient still, but computers have so so much more processing power so we can do shit like generating images from text.
Can we not use reinforcement learning to teach it what humans like and don't like? The general trend in AI is that hard coding rules works poorly, but letting the AI learn itself from a lot of data works well (for example: NLP). I don't see why, as AI develops, we can't also develop our understanding of humanity's utility function. Of course, maybe additional game theory concepts might help even more. Why is this idea so doomed to fail?
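For concreteness, here is roughly the shape of "teach it what humans like" as a minimal sketch: fit a reward model to pairwise human preference labels (a Bradley-Terry style setup). The features, the linear model, and the synthetic labels are all made up for illustration; a real system would compare trajectories and use a neural network.

```python
import numpy as np

# Learn a reward model from pairwise preferences: for each pair of outcomes
# (A, B), a simulated "human" says which one they prefer, and we fit weights
# by logistic regression on the feature differences. Everything is synthetic.

rng = np.random.default_rng(0)
d = 4                                       # feature dimension of an outcome
true_w = np.array([1.0, -2.0, 0.5, 0.0])    # hidden "what humans actually like"

A = rng.standard_normal((500, d))           # candidate outcomes, option A
B = rng.standard_normal((500, d))           # candidate outcomes, option B
prefers_A = (A @ true_w > B @ true_w).astype(float)

w = np.zeros(d)                             # learned reward weights
lr = 0.1
for _ in range(200):
    logits = (A - B) @ w
    p = 1.0 / (1.0 + np.exp(-logits))       # model's P(human prefers A)
    grad = (A - B).T @ (prefers_A - p) / len(A)
    w += lr * grad                          # gradient ascent on the log-likelihood

cos = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
print("agreement between learned and true preference direction:", round(cos, 3))
```

On this toy data the learned weights end up pointing almost exactly along the hidden preference vector, so the basic idea does work. The catch, as the reply below says, is that with something genuinely dangerous you have to get it right on the first try, and a much smarter optimizer will lean on whatever the learned reward got wrong.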
We’ll the problem is that we gotta get in right on the first try with something this potentially dangerous. It’s also hard to imagine what something much more intelligent than us makes out of our primitive morality
>lesswrong
everything on that site that wasn’t written by Scott Alexander is beyond retarded and actively harmful to take seriously