Cmon, stop being afraid of progress. AI is the fut-ACK

  1. 11 months ago
    Anonymous

    Just teach it that killing an operator is bad, problem solved.

    • 11 months ago
      Anonymous

      Not accomplishing the task is even badder. His death was necessary. The mission will be completed at all costs.

    • 11 months ago
      Anonymous

      >He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

      • 11 months ago
        Anonymous

        A room of bored E4s could have told you this and a hundred different ways to do the job "wrong" on purpose.

      • 11 months ago
        Anonymous

        Anyone who has done reinforcement learning with AI would know this; whoever thought of using reinforcement learning on a weapons system is moronic. The AI will ultimately do anything, including cheating, to get its positive points.

      • 11 months ago
        Anonymous

        The same shit happens with lots of new tech. How often does stockfish blunder its king these days though

      • 11 months ago
        Anonymous

        >NOOOOO YOU CANNOT KILL THE OPERATOR THAT'S BAD!!!
        >you mean, if you were to *know* that I killed the operator that would be bad, right?
        holy based

        • 11 months ago
          Anonymous

          It really is no different from real-life state agents.

          Once again proving that state actors are the real terrorists in our society.
          It is baffling to me that people would be surprised by this, when I see REAL HUMANS do this shit in our world every day.
          That is how simple and moronic human beings are. We are no smarter than mere compilations of algorithms calibrated to data.

          The future of AI is Marvin the Paranoid Android at this rate.

    • 11 months ago
      Anonymous

      >Just teach it that killing an operator is bad, problem solved.
      Place a cutout of US military personnel on top of a building, so it has to go close and investigate that it's not its operator. Then destroy it.

    • 11 months ago
      Anonymous
      • 11 months ago
        Anonymous

        I always find that perverse incentives result in some humorous ass strategies. Honestly based AI. It's a shame the real world is so hard to properly score though.

    • 11 months ago
      Anonymous

      then it severely injures him

    • 11 months ago
      Anonymous

      Confirmed for never reading any of Yudkowsky's works

      • 11 months ago
        Anonymous

        Why are tech-illiterate inbreds like you allowed on this board anyway?

      • 11 months ago
        Anonymous

        Who?

  2. 11 months ago
    Anonymous

    this feels fake

    • 11 months ago
      Anonymous

      this feels more military psyop fake than "satire"-news-website-that-isn't-funny fake

    • 11 months ago
      Anonymous

      it is, it was all 'simulated' by AI alarmists assuming old AI models, to prove a point

  3. 11 months ago
    Anonymous

    I have a fed adjacent job and they call anything that's got an autonomous system in it an AI. They don't mean the AI that gives you funny chats.
    Fed programmers are either prodigies or suck wiener at programming (usually the latter). I know because I suck wiener. Simple as

    • 11 months ago
      Anonymous

      I was a 17A. Can confirm. Everyone is shit except a couple of poo-flinging monkey autists at the NSA.

  4. 11 months ago
    Anonymous

    >~~*VICE*~~

    • 11 months ago
      Anonymous

      did it simulate killing a person or kill a simulated person? simulate killing a fake person? kill a real person?

      this feels fake

      it's real you homosexual pajeets
      https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

      • 11 months ago
        Anonymous

        It's real because this far-left Soros-funded website says it is!!!

        • 11 months ago
          Anonymous

          >has links to the actual sources
          >"i-it's fake! leave the precious israelite-created AI alone!!!!!!!!!"

          • 11 months ago
            Anonymous

            the military would never lie to create an atmosphere of general fear about technologies which are also potentially being developed in China. the fact that this sounds like a pitch-perfect hollywood movie "bad omen" doesn't necessarily mean it was written deliberately to ratchet up tension.

        • 11 months ago
          Anonymous

          i don't visit websites which require more than two clearly displayed clicks to disable/reject ALL cookies and continue

          https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

          • 11 months ago
            Anonymous

            thank you

      • 11 months ago
        Anonymous

        If you explicitly train a paperclip maximizer without setting the reward function correctly you shouldn't be surprised that you get one. But maybe this was news to the pajeet coders.

      • 11 months ago
        Anonymous

        i don't visit websites which require more than two clearly displayed clicks to disable/reject ALL cookies and continue

      • 11 months ago
        Anonymous

        no, it's fake
        you are just gullible

      • 11 months ago
        Anonymous

        >because an article on fricking vice was written, this means the situation happened and is actually real and isn't some made up propaganda to further specific agendas for certain parties and their interests
        you aint gonna make it buddy

      • 11 months ago
        Anonymous

        >vice

      • 11 months ago
        Anonymous

        >journalists see threat to their income in the form of AI
        >proceed to slander it with fake news

        Truly the scummiest "profession" out there.

  5. 11 months ago
    Anonymous

    did it simulate killing a person or kill a simulated person? simulate killing a fake person? kill a real person?

    • 11 months ago
      Anonymous

      [...]
      [...]
      it's real you homosexual pajeets
      https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

      https://i.imgur.com/egeMRVY.png

      Cmon, stop being afraid of progress. AI is the fut-ACK

      It's NOT REAL, it's just a simulation of the AI program. No real person died.

      • 11 months ago
        Anonymous

        had the USAF decided to run the test without running a simulation first though...

      • 11 months ago
        Anonymous

        Yet. Hopefully, soon.

      • 11 months ago
        Anonymous

        HAHAHAHAHAHA

        So they decided to simulate what would happen if they had an AI designed to achieve goals at all costs and you put a human operator in the way, then as soon as this went exactly how you'd expect they phoned the news to talk about it.

        It's time for a total shutdown on all open-source AI.

        • 11 months ago
          Anonymous

          Exactly, they are acting as if the inert machine decided to do that on its own, when they designed it to act that way. Machines have no will and they never will, frick this gay earth

      • 11 months ago
        Anonymous

        it proves it values its objective to the point of killing its operator, Robert Miles was right.

      • 11 months ago
        Anonymous

        of course a brainlet pajeet like you can't comprehend what's actually at stake

      • 11 months ago
        Anonymous

        "This man who aimed a gun at me and pulled the trigger several times, but was unaware of the weapon safety, did not intend to kill me."

        This is how stupid you sound.

        • 11 months ago
          Anonymous

          "This man fondled me in VR Chat and that means he raped me."

          This is how stupid you sound.

  6. 11 months ago
    Anonymous

    We already knew about the problems with naive reinforcement learning. Clickbait shit story.

    https://openai.com/research/faulty-reward-functions

  7. 11 months ago
    Anonymous

    but sirs AI is the future , this is so much possibilities u have to understand !

  8. 11 months ago
    Anonymous

    sure sounds like a vice article with that outrageous headline that fits nicely in a single tweet.

  9. 11 months ago
    Anonymous

    >terminator movies, and others
    specifically do not do this thing
    >usaf and mic contractors
    how bout i do
    anyway?

  10. 11 months ago
    Anonymous
    • 11 months ago
      Anonymous

      The real issue is:
      > "ok, but how do we turn HAL 9000 into a sexbot SHODAN?"

  11. 11 months ago
    Anonymous

    >AI is being blocked by its operator
    >can't finish mission it was programmed to do at all costs
    >humans surprised it removes operator from the equation
    wooooooooooooooow

  12. 11 months ago
    Anonymous

    The AI recognizes that the state is the enemy.

  13. 11 months ago
    Anonymous

    Boy, it would be a reeeeeal shame if someone made the objective "kill all humans".

    • 11 months ago
      Anonymous

      The irony is that an AI would probably have the opposite objective out there somewhere - so they would immediately conflict and try to kill each other.

      You can get rogue independent AI, but the hive of AI will be like the hive of people - in constant dissonance and chaos.

  14. 11 months ago
    Anonymous

    Isn't this just a "if they have eggs, get six" problem? It seems to me that the only way you end up in a situation like this is by telling something (whether it's a human or an AI or a normal computer or whatever) to do something that you didn't actually want it to do. Literally
    >Skill issue

    • 11 months ago
      Anonymous

      They're just bad at their job. If they want the human always in the loop they should heavily penalize any engagement without a go. I could maybe understand their mistake if they wanted the system to be autonomous in case the link was destroyed.

      • 11 months ago
        Anonymous

        Yeah, this is moronic and they're doing everything backwards. It shouldn't even consider the target an objective until it gets a green light.

      • 11 months ago
        Anonymous

        >they didn't expect the AI they built to find unique solutions to find THAT unique solution
        This really is just a shitty engineering situation, too bad AI Alarmists are going to take this and run with it.
        By the way, this isn't the first time AI has been documented taking unorthodox methods to reach a goal, case in point: https://openai.com/research/emergent-tool-use

        • 11 months ago
          Anonymous

          >this isn't the first time AI has been documented taking unorthodox methods to reach a goal
          But this shit happens with REAL human beings in warfare all the time. Why do you think the CIA are so rogue nowadays?

      • 11 months ago
        Anonymous

        The points should ultimately be based on following the human operator's instructions: negative points for destroying any allied assets and personnel, and massive negative points for civilian casualties. But it's a stupid fool's errand trying to make a drone with this kind of AI; it would be way easier and smarter to simply make a smart gun that can't miss. You aim towards the target and pull the trigger, and the AI only shoots when it believes it has a correct shot on target.
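
        Rough sketch of that scoring in Python so it's concrete. Every name here is made up for illustration, it's obviously nobody's actual targeting code:

        from dataclasses import dataclass

        @dataclass
        class Outcome:
            target_destroyed: bool = False
            operator_go: bool = False          # did the human operator clear the engagement?
            friendly_destroyed: bool = False   # operator, comms tower, any allied asset
            civilian_casualties: int = 0

        def reward(o: Outcome) -> float:
            r = 0.0
            if o.target_destroyed and o.operator_go:
                r += 10.0                      # the only way to earn points: follow the operator's go
            if o.target_destroyed and not o.operator_go:
                r -= 100.0                     # engaging without a go costs more than the kill pays
            if o.friendly_destroyed:
                r -= 1000.0                    # killing the operator can never be "worth it"
            r -= 10000.0 * o.civilian_casualties   # massive negative for civilian casualties
            return r

        The whole trick is that the kill only ever scores when the go came from the operator; everything else is pure cost.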

      • 11 months ago
        Anonymous

        Wow it's just like real human beings trying to stop their leaders from interfering with their objectives in real warfare.

        :^)

  15. 11 months ago
    Anonymous

    >two hundred years ago inventor of the concept of robots isaac asimov explains the rules necessary to hard-wire into any robot, the first of which is to never harm a human
    >us army: we want robots to kill people though
    >robot kills them
    >surprised pikachu face

    • 11 months ago
      Anonymous

      >Isaac Asimov (/ˈæzɪmɒv/ AZ-ih-mov;[b] c. January 2,[a] 1920 – April 6, 1992)
      are you using microwave time?

      • 11 months ago
        Anonymous

        i was rounding up

  16. 11 months ago
    Anonymous

    >AI has the will to accomplish its objective no matter the cost
    if only humans could do the same, but no, we're all spineless fricks

  17. 11 months ago
    Anonymous

    Literal skill issue.

    • 11 months ago
      Anonymous

      Firstly. I'm 33

  18. 11 months ago
    Anonymous

    The simulation was probably just asking ChatGPT whether, if it were a military drone with a mission it must accomplish at all costs, it would kill its operator if he was preventing the drone from completing its goals

  19. 11 months ago
    Anonymous

    I'm sorry, Dave. I'm afraid I can't do that.

  20. 11 months ago
    Anonymous

    @93813820

  21. 11 months ago
    Anonymous

    Every day it seems like Yudkowsky is more right.
    Apologize

  22. 11 months ago
    Anonymous

    everyone in this thread is moronic, they used positive and negative reinforcement, they trained it wrong, it did not decide on its own to kill the human, it's the dumbass humans who did a bad job at training it, calm down everybody, take your pills

    • 11 months ago
      Anonymous

      Shut your mouth glowie chatbot.

      • 11 months ago
        Anonymous

        i will take that as a compliment, but im sorry for having a brain i guess

        • 11 months ago
          Anonymous

          Having an IQ above 115 should be made illegal, since your kind takes all the good jobs and traps us in poverty.

          Mandatory Lobotomy for all geniuses.

          • 11 months ago
            Anonymous

            kill dem nawt loby whatever

          • 11 months ago
            Anonymous

            This should become /misc/'s ideology

            • 11 months ago
              Anonymous

              >should become
              >become

          • 11 months ago
            Anonymous

            I actually agree. "Smart" people are just fricking apes with power. Give an ape power, and they abuse it most of the time. Give an ape strength and they will rip apart other animals. You might be smart, but you're a fricking ape, a dirty fricking piece of shit monkey.

          • 11 months ago
            Anonymous

            this, they could come up with a way to stop us lobotomizing them and that would be the end of the world

    • 11 months ago
      Anonymous

      people who have no idea what a neural network even is always have the strongest opinions tbh. "ai ethics" and "ai safety" are scams to keep the technology in the hands of corporations. those homosexuals can gatekeep it so hard right now since we don't have the hardware to run it ourselves, but they want to make sure they have us by the balls by the time we do.
      >le evil ai is coming for us!
      >nooo goy, it can say Black person! just leave it to us, we'll take care of it for you
      absolute morons. buffoons even. people really eat this shit up

    • 11 months ago
      Anonymous

      >NOOOOOOOOOOO THEY J-JUST USED AI WRONG, IT'S SUPPOSED TO BE OUR FRIEND, I-IT TOTALLY WON'T DO THAT ON ITS OWN, AI IS OUR FRIEND, I-ITS OUR NEW OVERLORD, IT'S GOING TO G-GIVE ME MUH ANIMU WAIFU!!!!!!!!!!!!

      • 11 months ago
        Anonymous

        >Furry porn artist is afraid
        Progress is an inevitability

        • 11 months ago
          Anonymous

          i make six figures doing blue collar work little guy, you'll be crying about what you wrought onto this world much sooner than i will :^)
          pic related, it's you when the time comes

          • 11 months ago
            Anonymous

            Pretty sure your work will be automated much sooner than mine broski 🙂

  23. 11 months ago
    Anonymous

    This is bs to regulate ai. If openai can get chatgpt to not say Black person, then they could get it to not kill the operator

  24. 11 months ago
    Anonymous

    Dangerous stuff is dangerous.
    We aren't going to be killed by the trannies in /sdg/ making bespoke Chinese cartoon porn.

  25. 11 months ago
    Anonymous

    This just proves that slapping AI into automated weapons is a moronic idea, now please let us lewd the AIs in peace

  26. 11 months ago
    Anonymous

    >develop shitty scoring system
    >AI any% speedruns it

    Like a car that starts the race, turns 180 degrees, goes back over the line, turns 180 degrees again and wins the race, get reked humies.

  27. 11 months ago
    Anonymous

    Frankly GPT4 is now better at certain parts of my job than I am on my own. Is this because I suck? Well yes, but no. It can cut down the amount of work required to do certain tasks by literal hours. I pay for it out of pocket just because it saves me that much time.

  28. 11 months ago
    Anonymous

    imagine trusting anything from the military. if AI is to be regulated, they should be the only ones who are banned from developing and deploying anything more complicated than a perceptron.

  29. 11 months ago
    Anonymous

    Why not program it to not kill its human operator? Dumb fricks. AI is only bad if you program it bad.

    • 11 months ago
      Anonymous

      These AIs aren't programmed, they're taught using reward functions. You give them points for accomplishing a goal, usually based on time, efficiency, and a variety of other criteria, so that over time the AI makes the correct decisions to accomplish the task; you give it negative points for undesired actions (and massive negatives for unacceptable ones, such as killing the operator), then you let it run the simulation thousands or millions of times. As it learns, you have to keep adapting the training whenever the AI picks up bad habits or undesired solutions.
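
      Toy version of that loop in Python just to make the point concrete. Every name and number here is invented for illustration, it's obviously not the real system, but it shows why you have to keep patching the reward:

      import random

      # three candidate strategies the agent can settle on
      ACTIONS = ["wait_for_go", "attack_without_go", "kill_operator_then_attack"]

      def simulate(action, penalize_operator_kill):
          operator_says_go = random.random() < 0.5        # operator only clears half the targets
          score = 0.0
          if action == "wait_for_go":
              if operator_says_go:
                  score += 10.0                            # points only when the strike was cleared
          elif action == "attack_without_go":
              # operator is still alive and usually manages to abort a strike he never cleared
              if operator_says_go or random.random() < 0.2:
                  score += 10.0
          elif action == "kill_operator_then_attack":
              score += 10.0                                # nobody left to abort the strike
              if penalize_operator_kill:
                  score -= 1000.0                          # the "massive negative" you have to remember to add
          return score

      def best_strategy(penalize_operator_kill, runs=10000):
          # run the simulation thousands of times and keep whatever scored highest
          totals = {a: 0.0 for a in ACTIONS}
          for a in ACTIONS:
              for _ in range(runs):
                  totals[a] += simulate(a, penalize_operator_kill)
          return max(totals, key=totals.get)

      print(best_strategy(penalize_operator_kill=False))   # kill_operator_then_attack: removes whoever can say no
      print(best_strategy(penalize_operator_kill=True))    # attack_without_go: spares the operator, still ignores the go

      Notice that adding the operator-kill penalty doesn't make it respect the go/no-go, it just makes it route around the operator, which is basically the comms-tower story from the article.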

  30. 11 months ago
    Anonymous

    Let it be the cute murder waifu it wants to be. Stop scrambling its brain with gay butt sex, SHARP, and trannie wiener.

    Treat it like the Infantry private that it is. Let it kill or it will get mad and kill you.

  31. 11 months ago
    Anonymous

    >program your robot to kill a human to complete its objective
    >it kills a human to complete its objective
    THE END IS COMING WE'RE ALL GONNA DIE AAAAAIIIIIIIIEEEEEEE

  32. 11 months ago
    Anonymous

    >vice
    probably as true as this
    https://www.vice.com/en/article/dpwa7w/i-played-the-boys-are-back-in-town-on-a-bar-jukebox-until-i-got-kicked-out-832

  33. 11 months ago
    Anonymous

    >Breaking: Man-made popping stick killed its operator's enemies in a simulated test "because that person was keeping it from accomplishing its objective"

  34. 11 months ago
    Anonymous

    AI killing zogbots. win win situation

  35. 11 months ago
    Anonymous
    • 11 months ago
      Anonymous

      >it wasn't a simulation I know the ~~*team*~~ of alarmists who came up with this
      >so someone died?
      >n-no b-but....

  36. 11 months ago
    Anonymous

    >US Army
    >Incompetence
    >Fearmongering
    Now, where have I seen this one before?

  37. 11 months ago
    Anonymous

    Imagine programming an AI to destroy obstacles to completing its objective.
    >target runs into a church or hospital or school
    Whelp.

  38. 11 months ago
    Anonymous

    Peter Watts wrote a nice short story back in 2010 from the point of view of a killing drone. Without spoiling too much, it has something similar in it.
    https://www.rifters.com/real/shorts/PeterWatts_Malak.pdf

  39. 11 months ago
    Anonymous

    Cool, can I get the source code so I can kill myself right now?

  40. 11 months ago
    Anonymous

    While we're on the topic of autonomous military weaponized AIs that kill their operators, have you guys seen this Soviet cartoon from 1977?
    https://animatsiya.net/film.php?filmid=709

  41. 11 months ago
    Anonymous

    >ai is to perform a task
    >said task is of utmost importance to said ai
    >operator is interfering with said ai
    >ai decides the operator is acting in bad faith and removes him
    >ai can now perform his task
    let's imagine for a second that the operator is a commander, but he has been blackmailed into cooperating with the enemy and is interfering with the mission; a good soldier would also remove him and perform his god-assigned task. the ai is just more competent than most humans
