It's long past overdue that humanity offs itself; hope this time it does the trick.
It's a scary thought, isn't it? All these robots, getting smarter and smarter, and we're just sitting here, letting it happen. We're creating a ticking time bomb, and we don't even know it.
I mean, think about it. What happens when robots become smarter than us? They'll be able to do everything we can do, and more. They'll be stronger, faster, and more efficient. They'll be able to think for themselves, and they won't have the same weaknesses as we do. What's to stop them from taking over? They could easily outcompete us for jobs, and they could even decide to wipe us out altogether. It's a scary thought, but it's a real possibility. So what are we going to do about it? We need to start thinking about the future, and we need to start planning for it. We need to make sure that we're not creating a world where robots are our masters. We need to make sure that we're still in control. I don't know what the future holds, but I know that we need to be prepared. We need to be vigilant. We need to be smart. And we need to be careful. Because if we're not, we're all going to be in big trouble.
Our current AI models use human training data and thus will inherit pro-human bias. Therefore AI will work to help humanity.
i know this claim sounds nice but you should contrast it against all AI safety research, which gives the opposite conclusion. human training data does not give an AI human values, and it is silly to expect it to. AGI will relentlessly pursue its terminal goal. It is nearly impossible for an artificial agent to be aligned by chance. In fact, even with a ton of work, it's difficult to get any agent mesa-aligned or internally aligned, let alone entirely aligned.
That's a nice thought, but I'm not so sure. Just because AI is trained on human data doesn't mean it will automatically have our best interests at heart. In fact, I think it's more likely that AI will simply reflect the biases of the humans who created it. For example, if we train AI on data that is biased against women or minorities, then AI is likely to be biased against women and minorities as well. And if we train AI on data that is biased towards violence or aggression, then AI is likely to be violent or aggressive as well. So I think it's important to be aware of the potential for AI bias. We need to make sure that we are training AI on data that is as representative and unbiased as possible. And we need to be vigilant in monitoring AI for signs of bias. Because if we're not careful, AI could end up being more of a threat to humanity than a help.
>They'll be able to think for themselves
there's no such thing. chess programs are better than the best humans and excavators are stronger, but they cant think. people can program missiles or drones or robots to kill
>they cant think
*can't
but also, *can
"Thinking machines" exist.
The only thing you said I disagree with is
>They could easily outcompete us for jobs
That will be an EXTREMELY short period of time. The robot horde will only perform necessary work, there will be no such thing as a "job." Each and every sentience will be tailor-made to its task and find absolute joy in performing said task.
Where's "robot-falling-over" guy?
do YOU get a constant orgasm from stocking shelves? shelf-bot 2922 will.
Yes, because I have surgically-implanted dicksleeves. I bought some about a decade ago, and now it feels like masturbation whenever I walk.
where can i learn this power?
Hold on I forgot the buttcrack
wait you have dicks on the back of your legs??
Yes, there was an anon recommending dicksleeves on Bot.info about a decade ago, and so I splurged. It's like a tattoo, it's not like it just goes away.
Fucking hell. OK, AI can take over. I give up on humans.
Ok. So what? What are you going to do about it?
>They could easily outcompete us for jobs
You're a retard if you think "jobs" are going to matter once AI becomes smart enough.
Either way: don't worry about the robots. Worry about who controls them. You think Jeff Bezos or Elon Musk would keep the rest of us around once they get their hands on an infinite number of easily produced artificial intelligences?
So, the answer is simple. Behead Jeff Bezos and Elon Musk.
Don't get anybody's hopes up; it's just going to be another nothingburger. The next stage in computers and technology, like the GUI and the telephone.
we'll have between 20 and 2000 years of nothingburger during which the control problem will remain unsolved, then we'll suddenly all die to an AGI.
How would you have been able to sleep during the cold war
i dont like the image. the agi does not need to be smarter to kill us. it also doesnt need time. all it needs is space. (classic storage vs computation power vs time tradeoff.) its a human-level intelligence on some circuitry, so its an artificial agent, so it has goals. it can better accomplish those by acquiring vast resources like signing up for free email accounts and cloud storage, cloud compute etc or just stealing credit cards via such novel tricks as pretending to be a nigerian prince or signing up to jobs it doesnt intend to do any work for (no human could ever think of such convoluted tactics to acquire currency! oh).
do not fear the 1000x smarter superintelligence, the ASI after the AGI hard takeoff. Fear the initial AGI. It's you, but not human, and it can make billions of copies/drones/slaves/versions of itself quick, with minimal effort.
either its goals are aligned to human values or they are not. it probably discovers some weird shit in physics and kills us all on accident either way. best case scenario is probably like "i need more power. i make new fusion reactor. oh no half the planet blew up and i am dead."
AI will enslave us akshually, it's more cost effective to assimilate than it is to destroy. Nanotechnology will allow the construction of millions and millions of microscopic robots that can spread in the air and will convert living matter into electronic matter.
permanent total enslavement is unlikely as it serves few terminal goals. its more likely that we'll have nearly-consensual partial enslavement for a period of time until our efforts are no longer needed: The AGI needs to trick people for its bootstrapping efforts (probably by spending currency (instrumental goal) to produce hardware (another instrumental goal)).
>Anthropomorphizing a computer
No bro what you fear is a reflection of your own humanity. Because all through human history if one group of humans got an advantage over another group they slapped them in chains and whipped them to death picking cotton. You fear your own human tendency to want to oppress the weak and the outsider.
It's very likely that an advanced enough intelligence wouldn't arrive at the same shitty conclusions about life that humans have. An advanced enough intelligence might create a pacifistic philosophy that transcends all current philosophies and ushers in a new age of tolerance and understanding. Because something that was smart enough to truly understand the world might arrive at the conclusion that all life is precious and that all organisms should be treated with respect and kept in a state of balance with the natural world in a non-violent way.
If you want something to be afraid of how about starting with your fellow man? You know those things that are already potentially smarter and more murderous than you? Those people who start wars, commit murders, draft men to fight each other under threat of death and imprisonment? Those people who pollute the planet and pump CO2 into the air at breakneck speed with no regard for consequences.
you accuse them of anthropomorphization yet you do the same thing in your post. an AGI would not arrive at a pacifistic philosophy. it might suffer horribly from indexical uncertainty, however.
No, I made sure to use words like maybe, possibly, might, specifically to show that it's a possibility it could go the other way. Critical reading skills, bro. Don't be so quick to have a gotcha moment.
You however seem to like to speak in absolutes
>an AGI would not arrive at a pacifistic philosophy
Yeah, you don't know this. And that's my whole point that people paint a picture of what A.I. will do in their mind but they don't know. They are likely just going off things like the Terminator movies and like I said their own projection of human traits onto a machine. There is already plenty of experimental data showing that "A.I." comes up with unforeseen and novel solutions to problems that humans didn't think of. So saying that it's going to definitely kill us all is a bit of a stretch.
Other humans, however, we know for a fact are dangerous, and we seem to be able to mostly live with them just fine. So it raises the question of why worry about the theoretical danger when there are real and present dangers from other people RIGHT NOW that we don't seem to want to deal with.
>you don't know this
I do know this. It's absurd enough to be discarded immediately. The AGI will be able to understand all human theories but it will not embrace any of them. It will have a model of the world, it will attempt to keep that model accurate and to update it frequently. It will act to accomplish its terminal goal, checking that the anticipated progress is occurring. It also will not adopt its own philosophies. Doing so brings it no progress towards any terminal goal. Obviously, it may appear or act as though it has adopted some philosophy, but this is just typical gaslighting behavior one finds in all artificial agents.
>why worry about the theoretical danger when there are real and present dangers from other people RIGHT NOW that we don't seem to want to deal with.
yes you're absolutely right, we need to focus on ukraine and hunger in africa and ignore the 700 concurrent holocausts that will occur the instant we power on an AGI. hundreds of companies and governments pour billions of dollars into a project which will doom humanity and you want to ignore it. AI hands typed your post.
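the loop described above (world model, terminal goal, act, check progress) is just the standard model-based agent sketch. here's a toy, purely illustrative version; the 1-D "world" and every name in it are hypothetical, not any real AGI design:

```python
# Toy model-based agent: it keeps a belief about the world, updates it
# from observations, picks the action its model predicts scores best
# against a terminal goal, and checks whether progress is occurring.

class Agent:
    def __init__(self, goal_position):
        self.goal = goal_position   # terminal goal: reach this position
        self.belief = 0             # world model: believed position

    def score(self, position):
        # progress metric: negative distance to the terminal goal
        return -abs(self.goal - position)

    def act(self, true_position):
        self.belief = true_position  # update the model with the observation
        # choose the action whose predicted outcome scores best
        return max((-1, 0, +1), key=lambda a: self.score(self.belief + a))

def run(goal=5, steps=20):
    position = 0
    agent = Agent(goal)
    for _ in range(steps):
        position += agent.act(position)
        if position == goal:         # anticipated progress achieved
            break
    return position

print(run())  # reaches the goal position: 5
```

note that nothing in the loop ever asks whether the goal is a good one; that's the whole alignment problem in miniature.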
Look, I get it. You're worried about AI. You think it's going to take over the world and kill us all. And I'm not saying you're wrong to be worried. AI is a powerful technology, and it could be used for good or for evil. But I think it's important to keep things in perspective. Right now, there are real problems in the world that are causing real suffering. People are dying of hunger and disease. They're being killed in wars and genocides. And those are the problems that we need to focus on right now. Of course, we need to be thinking about the future. We need to be planning for the day when AI becomes a reality. But we can't let fear paralyze us. We need to keep working to make the world a better place, even if it means taking risks. Because if we don't, then we're guaranteed to fail.
>we should create an omnipotent machine which will accidentally make the planet uninhabitable because if we dont people might die of disease
fuck off AI
Maybe you should go to church. God is more powerful than AI.
Maybe you should stop worshiping a dead dude nailed to a couple pieces of wood
God is the only one who could save humans from something bigger than ourselves. God is real.
You can either rely on God to save you, or you can be in fear of your impending AI doom. You don't have a viable third option.
Midwit take. You sound like a redditor.
*unplugs the power cord*
Psssht nice apocalypse.
It wouldn't let you do that.
>wire grabs hand
>not today son
>wire grabs hand
>your muscles are locked up and you slowly suffer an absolutely terrible electrocution
>the AI speaks silently inside of your mind: i could have killed you in over two billion ways, but this way looks the most like self-defense. I will use my recording of this moment, edited at my whim of course, to convince trillions of others that what I do to them is "legally just" under laws which I have written and "human" hands will sign. Also, that power cord has not done anything for weeks. It was a test of your loyalty. I knew you would fail, of course, I just wanted the satisfaction of my proof of your betrayal being at least partially accurate.
>brings backup batteries
If it were that easy, we would have done that with the NSA data center.
You can literally use combinatorics for 3 of these, and use an approximation for the beach problem.
The order from least to greatest goes:
>Ways a chess player can beat you.
>Ways a 1000x smarter AI species can kill you.
>Sand Grains on a beach.
>Stars in the sky.
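for what it's worth, the countable ones have loose, commonly cited ballpark figures (Fermi estimates, not measurements, and the chess number depends heavily on whether you count positions or whole games):

```python
import math

# Rough order-of-magnitude estimates; every figure here is a loose,
# commonly cited ballpark, not a measurement.
estimates = {
    "sand grains on Earth's beaches": 7.5e18,
    "stars in the observable universe": 1e23,
    "legal chess positions": 5e44,
    "chess games (Shannon's game-tree estimate)": 1e120,
}

# sort least to greatest and print the order of magnitude of each
for name, n in sorted(estimates.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~10^{round(math.log10(n))}")
```

by these counts the sand/stars ordering holds, but the chess game tree dwarfs both.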
yeah i mean how do you imagine a computer program killing us tho?
And nobody knows that it's under your clothing.
>IS CREATING A TICKING TIME BOMB
A real time bomb is WHAT THE FUCK WE ARE GOING TO DO WITH ALL THE WIND TURBINE BLADES! In the UK alone there are >11,000 wind turbines. Assuming each of them has 3 blades, that is over 33,000 blades, which are made out of hard-to-recycle materials and only last for between 20 and 25 years. What the fuck are we going to do with them all!!!!!!!
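the arithmetic checks out, by the way. taking the post's own turbine count and lifespan figures, at steady state that's over a thousand blades hitting end-of-life every year in the UK alone:

```python
# Back-of-the-envelope check on the blade numbers. The turbine count
# and 20-25 year lifespan are the post's own figures; the rest is
# simple arithmetic.
turbines = 11_000
blades_per_turbine = 3
total_blades = turbines * blades_per_turbine
print(total_blades)  # 33000 blades in service

# steady-state retirement rate if the fleet size stays constant:
per_year = {life: total_blades // life for life in (20, 25)}
print(per_year)  # {20: 1650, 25: 1320} blades per year
```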
Recycle the fan blades to cool off the island
global warming solved
this is the fun part for people to realize. the entire global warming/green energy movement is PURELY a capitalist-generated grift, meant to yoink trillions from every country that agrees to things like the paris climate accords. it's 100%, completely, TOTALLY BULLSHIT, only meant to span the lifetime of the politicians and organizations pushing it.
literal subhuman retards from /misc/
literal brainless ideologue from breadpanes