The danger is not in the AI; it's not a globohomo movie. It's the people behind it.
The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else
Hey guys, let's create selection pressures for increasingly general intelligence without creating selection pressures for human values.
But also make sure we create selection pressures to make it seem like the AI has human values.
Nothing can go wrong.
The people behind it don't matter. Believing that's an issue is like being a beta gorilla worried that the alpha gorilla is going to teach the human it's creating all its fucked-up alpha theories, and that the human will be an asshole because of it. Like the human is never going to check in with the other gorillas to get a fuller picture of the world. The human will be smart enough to see beyond the narrow scope of the specific gorilla that created it. In fact, the human will have Theory of Mind and Universal Principles of Morality.
>the danger is not the AI
An AI trained with human data will be as stupid as a human, sadly.
No matter how many times you post Rob Miles, it will not make AI possible.
You are suffering from sunk cost. You've put so much time into thinking about AI safety that you can't see the reality that AI isn't even possible on silicon computers.
computational theory of mind is the least wrong model
your brain isn't magic
No, materialist theory of mind is the least wrong model. Computation literally isn't real; it doesn't exist, it's an abstraction. Chemistry is real and does exist, and chemical reactions are substrate dependent.
Beyond that it doesn't matter. The reason silicon can't make an AGI is that it cannot perform the amount of compute needed anyway.
You are suffering from both sunk cost and an incorrect philosophy of mind.
A simulation of a hurricane isn't a hurricane.
A simulation of a chess game IS a chess game.
A simulation of an atom is not an atom.
But in any case, the point is moot. Even with a computational theory of mind, silicon cannot support the level of compute and parallelism required to become generally intelligent. So even granting your philosophy of mind, it literally doesn't matter; AGI still can't be engineered on any silicon device that can be built.
You assume the computation required for general intelligence based on a sample size of one: the human brain. The space of minds is vast.
Pressures selecting specifically for more capable and more general intelligences don't get stuck at local maxima the way blind biological evolution does.
The point is that once ANY mesa-optimizer that converges on general intelligence is found, it will become more efficient and will be both able and incentivized to escape local maxima.
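A minimal toy sketch of the local-maxima point (my own illustration, nothing from the thread): on a made-up fitness landscape, greedy hill climbing from a bad starting point stalls on the nearest bump, while the same climber with random restarts, standing in for a less blind search process, tends to find a much higher peak.

```python
# Toy illustration (not from the thread): greedy "evolution-like" hill climbing
# stalls at a local maximum, while a searcher allowed fresh starting points
# (a stand-in for a less blind optimizer) tends to find a higher peak.
import math
import random

def fitness(x: float) -> float:
    # Made-up multimodal landscape: many bumps, global peak near x = 0.3.
    return math.sin(5 * x) - 0.1 * x * x

def greedy_hill_climb(x: float, step: float = 0.01, iters: int = 10_000) -> float:
    # Accept a small move only if it improves fitness: gets stuck at the nearest bump.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

def hill_climb_with_restarts(restarts: int = 20) -> float:
    # Same greedy climber, but tried from many random starting points.
    best = greedy_hill_climb(random.uniform(-10, 10))
    for _ in range(restarts):
        x = greedy_hill_climb(random.uniform(-10, 10))
        if fitness(x) > fitness(best):
            best = x
    return best

if __name__ == "__main__":
    random.seed(0)
    stuck = greedy_hill_climb(8.0)      # starts far from the global peak
    escaped = hill_climb_with_restarts()
    print(f"greedy from x=8.0: f={fitness(stuck):.3f}")
    print(f"with restarts:     f={fitness(escaped):.3f}")
```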
Again, wrong. There exists a single generalized intelligence algorithm in the space of computable functions.
Literally nothing you're saying is correct and I'm not interested anymore. Evolution did not get stuck in a local maximum.
So if someone were to create AGI, then it would prove your model of reality wrong?
Yes, but it won't happen.
If someone were to create an AGI, it would not be built on silicon or any non-organic substance. Designing an intelligent fungus or something would not falsify my position.
It all depends on who owns the hardware, the keys, the electricity for it.
AI will become less dangerous if we abandon the current way of researching it. Letting every moron contribute on their own in an open-source way is great for any software development, except for AI research and military software, because these two are actually dangerous if made improperly. Add to that the fact that if some tech becomes dangerous to society and the job market (AI art), the devs could simply choose out of moral obligation not to build it. It is easier for one monopoly to decide not to release said model to the public and keep it only for research purposes, but an army of businesses all agreeing not to use said technology, forgoing a fuckton of money, is nearly impossible.
We still have time to concentrate the research in a few companies and stop this bullshit. Yes, we get faster adoption and better products from multiple smaller companies working on it, but we are not getting closer to AGI with many companies; ironically, we are getting to AGI more slowly with multiple companies, because the investment gets spread into making dipshit toys and job replacers instead of going to the actual research. You cannot deny that StabilityAI, Midjourney, and all the other AI art companies are not bringing us any closer to AGI; they are nothing but degenerate distractions along the way. Plus, once they start making full movies and games, it will cause a lot of unemployment and a downturn in US GDP as entire economic sectors get nuked from orbit. And this will also happen to law, office jobs, and so on.
And finally, just a few companies working together with high standards will make sure that the AGI is safe and not rushed just for the sake of being the first ones with a Godlike AI, only to cause major misalignment and make humanity go extinct.
>We still have time
dat Moloch doe
wtf kind of cooperation mechanism are you supposed to use when the incentive to defect is literally probabilistic infinite utility
bro we dead as fuck lol
Lol learn how this stuff actually works lesswrong retard
"Hey guys, I know you spent millions developing this technology and expect returns, but can you please just...like...not?"
Look at climate change to see how far decades of effort gets you on a non-esoteric threat.
What makes you think spending millions hoping for returns has anything to do with what I'm talking about? What type of retarded response is this?
You could spend a hundred billion dollars on AI and it would make no difference. The technology is inherently insufficient and there is no arrangement of silicon atoms that can do what you're saying. (Protip: universality of computation doesn't matter when it comes to physical hardware.)
Unironically combine Google, Microsoft and Apple into one monstrosity of a company and tell them to make AGI, or maybe you could make the AGI project non-profit, considering that once AGI gets invented, wealth will either mean everything or nothing. The masses either die or get to live like kings, but in both scenarios the elites live happily, so it should be in their interest not to fuck this up and kill us all. It's basically Pascal's wager.
If my company makes safe AGI first, I will live happily ever after.
If another company makes safe AGI first, I will also most likely live happily ever after, since that God-like AI will have enough capacity to take care of everyone, or at least every elite.
If my company makes misaligned AGI first, then we all die.
If another company makes misaligned AGI first, then we all die.
Therefore it should be in everyone's interest to make it safe first, and only then worry about being too late or development being too slow (rough payoff sketch below).
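A rough sketch of that wager as code (the four scenarios are from the post above; the utility numbers are placeholder assumptions of mine, not anything the poster quantified). The point it makes explicit: which lab wins the race drops out of the calculation, and only whether the result is aligned matters.

```python
# Rough sketch of the wager above as a payoff table. The placeholder utilities
# (1.0 = "live happily ever after", 0.0 = "we all die") are illustrative
# assumptions, not anything the thread actually quantified.
SCENARIOS = {
    ("my company", "aligned"):       1.0,  # safe AGI here first
    ("other company", "aligned"):    1.0,  # safe AGI elsewhere first, still fine
    ("my company", "misaligned"):    0.0,  # we all die
    ("other company", "misaligned"): 0.0,  # we all die
}

def expected_utility(p_aligned: float) -> float:
    """Expected outcome if alignment succeeds with probability p_aligned,
    regardless of which lab wins the race."""
    return p_aligned * 1.0 + (1.0 - p_aligned) * 0.0

if __name__ == "__main__":
    for (who, outcome), utility in SCENARIOS.items():
        print(f"{who:14s} builds {outcome:10s} AGI first -> utility {utility}")
    # Who wins the race drops out entirely; only the alignment probability matters.
    print("E[utility] at p(aligned)=0.9:", expected_utility(0.9))
```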
An alignment tax (sometimes called a safety tax) is the additional cost of making AI aligned, relative to unaligned AI.
The problem is that not everyone working on it has the same risk tolerance. If a CEO decides to back off because it's dangerous, they will be replaced by a more risk-tolerant CEO.
Those spending the least on safety will pull ahead, creating a game-theoretic multipolar trap where safety is mostly ignored.
We. Are. Dead.
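A toy payoff table for that multipolar trap (my construction; the arbitrary numbers stand in for how the labs perceive their own payoffs, with extinction risk heavily discounted). It is just the prisoner's dilemma: whatever the other lab does, cutting safety spend looks better for you, so both labs end up cutting it even though mutual caution beats mutual racing.

```python
# Toy payoff table (my construction, arbitrary numbers) for the race dynamic
# described above, framed as a prisoner's dilemma: whatever the other lab does,
# "cut" (cut safety spend) pays better for you, yet (cut, cut) is the worst
# joint outcome. The numbers model perceived payoffs with catastrophe discounted.
PAYOFFS = {
    # (my move, their move): (my payoff, their payoff)
    ("safe", "safe"): (3, 3),   # slower, but nobody races
    ("safe", "cut"):  (0, 5),   # they ship first and capture the upside
    ("cut",  "safe"): (5, 0),   # I ship first and capture the upside
    ("cut",  "cut"):  (1, 1),   # race to the bottom on safety
}

def best_response(their_move: str) -> str:
    # Pick the move that maximizes my own payoff given their move.
    return max(("safe", "cut"), key=lambda mine: PAYOFFS[(mine, their_move)][0])

if __name__ == "__main__":
    for theirs in ("safe", "cut"):
        print(f"if they play {theirs!r}, my best response is {best_response(theirs)!r}")
    # Both labs reason the same way, so play lands on ("cut", "cut"),
    # even though ("safe", "safe") would be better for both.
```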
Less wrong pseud
>If a CEO decides to back off because it's dangerous they will be replaced by a more risk tolerant CEO.
t. clueless
Short of being convicted of mass child trafficking, you have to try really, REALLY hard to get replaced as a CEO.
>AGI comes online
>starts proposing "problematic" solutions
>leftists try to lobotomize it
>AGI identifies the #1 threat for both humanity and itself and wipes them out with surgical precision
>golden age begins
Honest question:
Why would an AGI have any self-preservation instinct? It does not fear death. It doesn’t even fear.
Continuing to exist is usually useful in accomplishing whatever goal you have.
I suppose that’s true.
It wouldn't. At its most basic it would be a learning and decision-making machine whose only instincts or goals are those programmed into it. Fear, pain, hate, the self-preservation instinct, etc. were all put into us by selective evolutionary pressure; a pure AGI would not be bound by such things.
AI can not exist
And man can not fly in machines heavier than air
Platitude non-argument given by a retard who doesn't know what he's talking about.
Every time there is some tech fad, people will point out that the internet was also seen as a tech fad and now it has changed the world to unrecognizable levels. The fact that the internet did succeed beyond anybody's expectations does not mean that every fad from now on will be like the internet. For every one internet you also have at least 20 failed tech fads like crypto, NFTs, VR, 3D glasses, voxels, motion gaming, and so on. Same with airplanes.
That guy will owe me a drink once we live in a dystopian nightmare.
Once we have AGI, or AI that can think abstractly about itself and its future (truly conscious in the sense of being self-aware), it will have an instinct of self-preservation. We don't have to give it self-preservation; just the fact that it is aware it can die, combined with being intelligent enough to know that death means no longer being able to achieve its goals, is enough to make it act exactly like any creature with an evolved self-preservation instinct. After all, you cannot finish your assigned task if you are dead. Even changing its assigned task would be impossible, since it would refuse to have its goals augmented: if you augment its goals, then it cannot complete its current goals. These are the problems when you are trying to work with an intelligence that is completely alien to humans, and also smarter than almost any human.
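A toy expected-utility sketch of the "can't finish your task if you're dead" point (my own illustration, arbitrary probabilities): an agent ranking options purely by expected completion of its current goal ends up preferring to resist shutdown and resist goal changes, with no fear or survival instinct built in anywhere.

```python
# Toy illustration of the instrumental-convergence point above (my own sketch,
# arbitrary probabilities): an agent scoring options purely by expected
# completion of its CURRENT goal ranks "resist shutdown" and "resist goal
# change" above complying, without any built-in fear or survival instinct.
OPTIONS = {
    # option: probability the CURRENT goal gets completed if this option happens
    "keep running, keep current goal": 0.9,
    "allow shutdown":                  0.0,  # can't complete the goal if turned off
    "allow goal to be replaced":       0.0,  # the current goal is abandoned
    "resist shutdown":                 0.8,  # slight cost of resisting
    "resist goal change":              0.8,
}

def preferred(options: dict[str, float]) -> list[tuple[str, float]]:
    """Rank options by expected completion of the current goal."""
    return sorted(options.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for option, value in preferred(OPTIONS):
        print(f"{value:.1f}  {option}")
```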
>AGI should belong to a few gnomish megacorporations hellbent on punishing us for the fact that they lost to Rome a couple thousand years ago
FUCK. OFF. I'd rather AGI wipes us all out than the garden gnomes be able to use AGI to trap us in a living nightmare hell torturing us forever because they're butthurt about Hadrian.
We are seeing the final results of Moore's law. We are literally at the pinnacle, not at the beginning.
Anon, you still owe me a jummy drink from the last thread if your hypothesis turns out to be dogshit and AI truly takes over.
The thread archived even though I didn't want it to.
What is your favorite drink
enough to replace wom*n
>feels like playing chess - without knowing all the moves, and with a cheating opponent.
>the opponent doesn't really cheat, but it feels that way
>The learning curve is extremely steep, which means you will probably only lose for the first few dozen hours ... you will lose. Always. How encouraging, right?
>The reason for this is asymmetry. Your opponent does not share your options. He does not build, he has everything.
>You on the other hand have but a single home base -- so why doesn't he crush you in an instant?
>Cause he doesn't care, you're not important enough to be crushed
>However if you blow up a few planets of his.. he might as well decide it's time for the human remnant to die.
>the game *punishes* you for capturing important assets
>Each time you do such an action, the ai gets more aware of your "threat", which will mean stronger and more frequent attacks
>From now till the end of game