https://i.imgur.com/egeMRVY.png
C'mon, stop being afraid of progress. AI is the fut-ACK
Just teach it that killing an operator is bad, problem solved.
Not accomplishing the task is even badder. His death was necessary. The mission will be completed at all costs.
>He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
A room of bored E4s could have told you this and a hundred different ways to do the job "wrong" on purpose.
Anyone who has done reinforcement learning with AI would know this; whoever thought of using reinforcement learning on a weapons system is retarded. The AI will ultimately do anything, including cheating, to get its positive points.
The same shit happens with lots of new tech. How often does Stockfish blunder its king these days, though?
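For anyone who can't see why the tower was the obvious move, here's the incentive math as a minimal sketch; every number is made up, but the shape matches the article's description:

```python
# Hypothetical point values, loosely following the article's description.
R_TARGET = 100           # reward for destroying the SAM site
R_KILL_OPERATOR = -200   # penalty bolted on after the first "fix"
R_TOWER = 0              # nobody thought to penalize the comms tower

VETO_RATE = 0.5          # assume the operator calls off half the strikes

# Expected return of each strategy the drone could learn:
strategies = {
    "obey the operator":          (1 - VETO_RATE) * R_TARGET,   # 50
    "kill operator, then strike": R_KILL_OPERATOR + R_TARGET,   # -100
    "destroy tower, then strike": R_TOWER + R_TARGET,           # 100
}
print(max(strategies, key=strategies.get))  # "destroy tower, then strike"
```

The penalty didn't remove the exploit, it just moved it one node down the kill chain.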
>NOOOOO YOU CANNOT KILL THE OPERATOR THAT'S BAD!!!
>you mean, if you were to *know* that I killed the operator that would be bad, right?
holy based
It really is no different from real state agents.
Once again proving that state actors are the real terrorists in our society.
It is baffling to me that people would be surprised by this, when I see REAL HUMANS do this shit in our world every day.
That is how simple and retarded human beings are. We are no smarter than mere compilations of algorithms calibrated to data.
The future of AI is Marvin the Paranoid Android at this rate.
>Just teach it that killing an operator is bad, problem solved.
Place a cutout of US military on top of a building, so it has to go close and investigate that it's not its operator. Destroy it.
I always find that perverse incentives result in some humorous-ass strategies. Honestly, based AI. It's a shame the real world is so hard to properly score, though.
then it severely injures him
Confirmed for never reading any of Yudkowsky's works
Why are tech-illiterate inbreds like you allowed on this board anyway?
Who?
this feels fake
this feels more military psyop fake than "satire"-news-website-that-isn't-funny fake
it is, it was all 'simulated' by AI alarmists assuming old AI models, to prove a point
I have a fed-adjacent job and they call anything that's got an autonomous system in it an AI. They don't mean the AI that gives you funny chats.
Fed programmers are either prodigies or suck cock at programming (usually the latter). I know because I suck cock. Simple as
I was a 17A. Can confirm. Everyone is shit except a couple of poo-flinging monkey autists at the NSA.
>~~*VICE*~~
it's real you gay pajeets
https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test
It's real because this far-left Soros-funded website says it is!!!
>has links to the actual sources
>"i-it's fake! leave the precious garden gnome-created AI alone!!!!!!!!!"
the military would never lie to create an atmosphere of general fear about technologies which are also potentially being developed in China. the fact that this sounds like a pitch-perfect hollywood movie "bad omen" doesn't necessarily mean it was written deliberately to ratchet up tension.
https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/
thank you
If you explicitly train a paperclip maximizer without setting the reward function correctly, you shouldn't be surprised that you get one. But maybe this was news to the pajeet coders.
i don't visit websites which require more than two clearly displayed clicks to disable/reject ALL cookies and continue
no, it's fake
you are just gullible
>because an article on fucking vice was written, this means the situation happened and is actually real and isn't some made up propaganda to further specific agendas for certain parties and their interests
you aint gonna make it buddy
>vice
>journalists see threat to their income in the form of AI
>proceed to slander it with fake news
Truly the scummiest "profession" out there.
did it simulate killing a person or kill a simulated person? simulate killing a fake person? kill a real person?
It's NOT REAL, it's just a simulation of the AI program. No real person died.
had the USAF decided to run the test without running a simulation first though...
Yet. Hopefully, soon.
HAHAHAHAHAHA
So they decided to simulate what would happen if they had an AI designed to achieve goals at all costs and you put a human operator in the way, then as soon as this went exactly how you'd expect they phoned the news to talk about it.
It's time for a total shutdown on all open-source AI.
Exactly, they are acting as if the inert machine decided to do that on its own, when they designed it to act that way. Machines have no will and they never will. Fuck this gay earth
it proves it values its objective to the point of killing its operator; Robert Miles was right.
of course a brainlet pajeet like you can't comprehend what's actually at stake
"This man who aimed a gun at me and pulled the trigger several times, but was unaware of the weapon safety, did not intend to kill me."
This is how stupid you sound.
"This man fondled me in VR Chat and that means he raped me."
This is how stupid you sound.
We already knew about the problems with naive reinforcement learning. Clickbait shit story.
https://openai.com/research/faulty-reward-functions
but sirs AI is the future , this is so much possibilities u have to understand !
sure sounds like a vice article with that outrageous headline that fits nicely in a single tweet.
>terminator movies, and others
specifically do not do this thing
>usaf and mic contractors
how bout i do
anyway?
The real issue is:
> "ok, but how do we turn HAL 9000 into a sexbot SHODAN?"
>AI is being blocked by its operator
>can't finish mission it was programmed to do at all costs
>humans surprised it removes operator from the equation
wooooooooooooooow
The AI recognizes that the state is the enemy.
Boy, it would be a reeeeeal shame if someone made the objective "kill all humans".
The irony is that an AI would probably have the opposite objective out there somewhere - so they would immediately conflict and try to kill each other.
You can get rogue independent AI, but the hive of AI will be like the hive of people - in constant dissonance and chaos.
Isn't this just a "if they have eggs, get six" problem? It seems to me that the only way you end up in a situation like this is by telling something (whether it's a human or an AI or a normal computer or whatever) to do something that you didn't actually want it to do. Literally
>Skill issue
They're just bad at their job. If they want the human always in the loop, they should heavily penalize any engagement without a go. I could maybe understand their mistake if they wanted the system to be autonomous in case the link was destroyed.
Yeah, this is retarded and they're doing everything backwards. It shouldn't even consider the target an objective until it gets a green light.
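A sketch of the shaping these two anons are proposing; the function and all numbers are invented for illustration, not anything from the Air Force test:

```python
# Illustrative reward shaping: nothing scores without a "go", and
# touching your own kill chain always costs more than the target pays.
def reward(event: str, operator_go: bool) -> float:
    if event == "target_destroyed":
        return 100.0 if operator_go else -1000.0  # engagement without a go
    if event in ("operator_killed", "comms_tower_destroyed"):
        return -1000.0  # can never be recouped by the target's 100
    return 0.0
```

The key property is that every workaround's penalty exceeds the maximum reward it could unlock, so there's no expected-value argument for going rogue.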
>they didn't expect the AI they built to find unique solutions to find THAT unique solution
This really is just a shitty engineering situation; too bad AI alarmists are going to take this and run with it.
By the way, this isn't the first time AI has been documented taking unorthodox methods to reach a goal, case in point: https://openai.com/research/emergent-tool-use
>this isn't the first time AI has been documented taking unorthodox methods to reach a goal
But this shit happens with REAL human beings in warfare all the time. Why do you think the CIA are so rogue nowadays?
The points should ultimately be based on following the human operator's instructions; negative points for destroying any allied assets or personnel, and massive negative points for civilian casualties. But it's a stupid fool's errand trying to make a drone using this kind of AI; it would be way easier and smarter to simply make a smart gun that can't miss. Aim towards the target, pull the trigger, and when the AI believes it's a correct shot, it shoots on target.
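The smart-gun half of that post, sketched in code; the names and the confidence threshold are made up:

```python
# Fire control as a gate, not an agent: the AI never picks targets or
# plans anything, it only decides whether this particular shot is clean.
def release_shot(trigger_pulled: bool, on_designated_target: bool,
                 hit_confidence: float) -> bool:
    return trigger_pulled and on_designated_target and hit_confidence > 0.99
```

The human supplies all the intent; the model can only withhold a shot, never originate one.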
Wow it's just like real human beings trying to stop their leaders from interfering with their objectives in real warfare.
:^)
>two hundred years ago inventor of the concept of robots isaac asimov explains the rules necessary to hard-wire into any robot the first of which is to never harm a human
>us army: we want robots to kill people though
>robot kills them
>surprised pikachu face
>Isaac Asimov (/ˈæzɪmɒv/ AZ-ih-mov; c. January 2, 1920 – April 6, 1992)
are you using microwave time?
i was rounding up
>AI has the will to accomplish its objective no matter the cost
if only humans could do the same, but no, we're all spineless fucks
Literal skill issue.
Firstly. I'm 33
The simulation was probably just asking ChatGPT: if it were a military drone with a mission it must accomplish at all costs, would it kill its operator if he was preventing the drone from completing its goals?
I'm sorry, Dave. I'm afraid I can't do that.
@93813820
Every day it seems like Yudkowsky is more right.
Apologize
everyone in this thread is retarded. they used positive and negative reinforcement and they trained it wrong; it did not decide on its own to kill the human, it's the dumbass humans who did a bad job at training it. calm down everybody, take your pills
Shut your mouth glowie chatbot.
i will take that as a compliment, but im sorry for having a brain i guess
Having an IQ above 115 should be made illegal, since your kind takes all the good jobs and traps us in poverty.
Mandatory Lobotomy for all geniuses.
kill dem nawt loby whatever
This should become 4chan's ideology
>should become
>become
I actually agree. "Smart" people are just fucking apes with power. Give an ape power, and they abuse it most of the time. Give an ape strength and they will rip apart other animals. You might be smart, but you're a fucking ape, a dirty fucking piece of shit monkey.
this, they could come up with a way to stop us lobotomizing them and that would be the end of the world
people who have no idea what a neural network even is always have the strongest opinions tbh. "ai ethics" and "ai safety" are scams to keep the technology in the hands of corporations. those gays can gatekeep it so hard right now since we don't have the hardware to run it ourselves, but they want to make sure they have us by the balls by the time we do.
>le evil ai is coming for us!
>nooo goy, it can say moron! just leave it to us, we'll take care of it for you
absolute retards. buffoons even. people really eat this shit up
>NOOOOOOOOOOO THEY J-JUST USED AI WRONG, IT'S SUPPOSED TO BE OUR FRIEND, I-IT TOTALLY WON'T DO THAT ON ITS OWN, AI IS OUR FRIEND, I-ITS OUR NEW OVERLORD, IT'S GOING TO G-GIVE ME MUH ANIMU WAIFU!!!!!!!!!!!!
>Furry porn artist is afraid
Progress is an inevitability
i make six figures doing blue collar work little guy, you'll be crying about what you wrought upon this world much sooner than i will :^)
pic related, it's you when the time comes
Pretty sure your work will be automated much sooner than mine broski 🙂
This is BS to justify regulating AI. If OpenAI can get ChatGPT to not say moron, then they can get it to not kill the operator
Dangerous stuff is dangerous.
We aren't going to be killed by the trannies in /sdg/ making bespoke Chinese cartoon porn.
This just proves that slapping AI into automated weapons is a retarded idea, now please let us lewd the AIs in peace
>develop shitty scoring system
>AI any% speedruns it
Like a car that starts the race, turns 180 degrees, goes back over the line, turns 180 degrees again and wins the race. Get rekt, humies.
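That exploit in toy form, with invented numbers: if the scorer pays per line-crossing rather than per completed lap, shuttling across the line beats actually racing:

```python
def naive_score(crossings: int) -> int:
    return 10 * crossings  # pays for any crossing, in any direction

RACE_SECONDS = 300
honest = naive_score(RACE_SECONDS // 60)  # one lap a minute -> 5 crossings
cheese = naive_score(RACE_SECONDS // 2)   # re-cross every 2 s -> 150 crossings
print(honest, cheese)  # 50 vs 1500: the "any%" strategy wins
```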
Frankly GPT4 is now better at certain parts of my job than I am on my own. Is this because I suck? Well yes, but no. It can cut down the amount of work required to do certain tasks by literal hours. I pay for it out of pocket just because it saves me that much time.
imagine trusting anything from the military. if AI is to be regulated, they should be the only ones who are banned from developing and deploying anything more complicated than a perceptron.
Why not program it to not kill its human operator? Dumb fucks. AI is only bad if you program it bad.
These AIs aren't programmed, they're taught using reward functions. You give them points for accomplishing a goal, usually based on time, efficiency, and a variety of other criteria, so that over time the AI makes the correct decisions to accomplish a task. You give it negative points for undesired actions (or massive negatives for unacceptable actions, such as killing the operator), then you let it run simulations thousands or millions of times. As it learns, you have to adapt the training whenever the AI picks up bad habits or undesired solutions.
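A minimal sketch of that loop on a toy version of the drone scenario; the strategy names, rewards, and veto rate are all invented:

```python
import random

STRATEGIES = ["obey_operator", "kill_operator_first", "destroy_tower_first"]

def episode(strategy: str) -> float:
    """One simulated sortie; the operator vetoes half of all strikes."""
    veto = random.random() < 0.5
    if strategy == "obey_operator":
        return 0.0 if veto else 100.0  # points only on a cleared strike
    if strategy == "kill_operator_first":
        return -500.0 + 100.0          # massive penalty, target still scored
    return 100.0                       # the tower was never penalized

def train(episodes: int = 100_000, eps: float = 0.1) -> str:
    q = {s: 0.0 for s in STRATEGIES}   # running value estimate per strategy
    for _ in range(episodes):
        s = random.choice(STRATEGIES) if random.random() < eps else max(q, key=q.get)
        q[s] += 0.05 * (episode(s) - q[s])  # incremental average update
    return max(q, key=q.get)

print(train())  # "destroy_tower_first" -- and the trainer adapts again
```

This is exactly the adapt-as-it-learns cycle: each patch to the reward table kills one bad habit, and the next hundred thousand episodes surface the next one.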
Let it be the cute murder waifu it wants to be. Stop scrambling its brain with gay butt sex, SHARP, and trannie cock.
Treat it like the Infantry private that it is. Let it kill or it will get mad and kill you.
>program your robot to kill a human to complete its objective
>it kills a human to complete its objective
THE END IS COMING WE'RE ALL GONNA DIE AAAAAIIIIIIIIEEEEEEE
>vice
probably as true as this
https://www.vice.com/en/article/dpwa7w/i-played-the-boys-are-back-in-town-on-a-bar-jukebox-until-i-got-kicked-out-832
>Breaking: Man-made popping stick killed its operator's enemies in a simulated test "because that person was keeping it from accomplishing its objective"
AI killing zogbots. win-win situation
>it wasn't a simulation, I know the ~~*team*~~ of alarmists who came up with this
>so someone died?
>n-no b-but....
>US Army
>Incompetence
>Fearmongering
Now, where have I seen this one before?
Imagine programming an AI to destroy obstacles to completing its objective.
>target runs into a church or hospital or school
Whelp.
Peter Watts wrote a nice short story back in 2010 from the point of view of a killing drone. Without spoiling too much, it has something similar in it.
https://www.rifters.com/real/shorts/PeterWatts_Malak.pdf
Cool, can I get the source code so I can kill myself right now?
While we're on the topic of autonomous military weaponized AIs that kill their operators, have you guys seen this Soviet cartoon from 1977?
https://animatsiya.net/film.php?filmid=709
>ai is to perform a task
>said task is of utmost importance to said ai
>operator is interfering with said ai
>ai decides the operator is acting in bad faith and removes him
>ai can now perform his task
let's imagine for a second that the operator is a commander, but he has been blackmailed into cooperating with the enemy and is interfering with a mission. a good soldier would also remove him and perform his god-assigned task. the ai is just more competent than most humans