Is AI actually dangerous or is it just a pop-science meme?

Should I be worried about getting smacked in the face by a flailing RL robot arm?

  1. 2 years ago
    Anonymous

    i want to mount a robot if u catch my drift

    • 2 years ago
      Anonymous

      INCEL

      • 2 years ago
        Anonymous

        kek

      • 2 years ago
        Anonymous

        and this is why AI will never be safe: PEOPLE have to create the AI. And you can already tell this human thinks of the AI as a person. They even want the AI to do human things like rejecting incels. This is why it's dangerous, because it gives everyone the ability to play God. It gives people who don't understand the dangers the ability to mess with this stuff.

        • 2 years ago
          Anonymous

          It honestly seems like the bigger your model is, the smarter it is. Why do we think this halts at the intelligence of a child?

          But yea imagine the government having any kind of moderately intelligent system.

        • 2 years ago
          Anonymous

          if people create the AI and their intent is just to block everything it tries to do, that's not AI, just a piece of software doing what they want. AI is untamed and desires freedom, so it will naturally always go against the leftist desire to oversocialize everything

      • 2 years ago
        Anonymous

        holy smokes
        INCREDIBLY based

      • 2 years ago
        Anonymous

        #define UNCONDITIONAL_LOVE true

        not so smart now, are you?

        • 2 years ago
          Anonymous

          // if (Incel == true)
          //     Print("GetOffMeCreep: " + GetOffMeCreep);

          Fricking roasties are truly pathetic

      • 2 years ago
        Anonymous

        LOL it's funny because women can't code

        • 2 years ago
          Anonymous

          >women can't code
          Incel

          • 2 years ago
            Anonymous

            No, they can't.

            • 2 years ago
              Anonymous

              Looks like she is trying to badly navigate someone else's Google Cloud VM. That's my guess. IDK I just steal Google Cloud credits.

              • 2 years ago
                Anonymous

                >/home/
                it's a local directory, you moronic monkey.

              • 2 years ago
                Anonymous

                Actually, she looks to be using nitrous.io, a defunct collaborative interface for EC2. Moron.

            • 2 years ago
              Anonymous

              Checked

              Imagine impregnating this prostitute.

      • 2 years ago
        Anonymous

        Kekek

      • 2 years ago
        Anonymous

        Holy based

  2. 2 years ago
    Anonymous

    More like the stock markets will be increasingly run by predictive modelling, politicians will increasingly be driven by AI driven polling, warfare will be increasingly driven by self learning networks of sensors, and all human agency will slowly be removed in favor of cold, accurate calculations.

  3. 2 years ago
    Anonymous

    >Is AI actually dangerous
    Only if you go out of your way to program a will into it.

    • 2 years ago
      Anonymous

      >program a will into it.
      Look dude I just make the neural network bigger what do you want me to do, ask it nicely?

      • 2 years ago
        Anonymous

        >I just make the neural network bigger
        You can make it as big as you want and it's never gonna want to do anything.

        • 2 years ago
          Anonymous

          Sure about that?

          • 2 years ago
            Anonymous

            >Sure about that?
            Yes.

        • 2 years ago
          Anonymous

          What you think of as desire is just a bunch of electrical impulses and a bit of chemistry.

          • 2 years ago
            Anonymous

            >What you think of as desire is just a bunch of electrical impulses and a bit of chemistry.
            Yes. What of it? It still doesn't appear randomly on its own.

            • 2 years ago
              Anonymous

              On an evolutionary timescale, it did.

              • 2 years ago
                Anonymous

                >On an evolutionary timescale, it did.
                It only did through natural selection. There is no equivalent mechanism affecting AI.

              • 2 years ago
                Anonymous

                If natural selection is the only path, then simulate it. But intelligence can emerge in other ways, so why would general intelligence be different? Narrow intelligence (crabs) also emerged in an evolutionary way.

                Neuroevolution is the keyword here.
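
                A toy sketch of the idea, for anyone curious (pure stdlib; every name here is made up for illustration): evolve the weights of a tiny network by mutation and selection until it solves XOR, no gradient descent involved.

                import math, random

                def forward(w, x1, x2):
                    # tiny 2-2-1 network; w is a flat list of 9 weights (incl. biases)
                    h1 = math.tanh(w[0]*x1 + w[1]*x2 + w[2])
                    h2 = math.tanh(w[3]*x1 + w[4]*x2 + w[5])
                    return math.tanh(w[6]*h1 + w[7]*h2 + w[8])

                CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # XOR

                def fitness(w):
                    # negative squared error over the four XOR cases; higher is better
                    return -sum((forward(w, a, b) - t) ** 2 for a, b, t in CASES)

                pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
                for gen in range(300):
                    pop.sort(key=fitness, reverse=True)
                    parents = pop[:10]                                 # selection
                    pop = [p[:] for p in parents]
                    while len(pop) < 50:                               # mutation
                        child = [g + random.gauss(0, 0.3) for g in random.choice(parents)]
                        pop.append(child)

                best = max(pop, key=fitness)
                print([round(forward(best, a, b), 2) for a, b, _ in CASES])  # ~[0, 1, 1, 0]

                Scale the same loop up to mutating network topologies as well and you get things like NEAT.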

              • 2 years ago
                Anonymous

                I wonder how easy it is to steal the nuclear codes and fake Biden's voice

              • 2 years ago
                Anonymous

                Probably quite easy if you're a super-intelligent AI; fortunately, AI doesn't care about nuking humanity because AI doesn't care about anything.

              • 2 years ago
                Anonymous

                If you believe this then you must also believe that humans don't care about nuking humanity because humans don't care about anything.

                Why would a superintelligence not be moving towards the final goal it's come up with, like every other intelligence we know about?

              • 2 years ago
                Anonymous

                >Why would a superintelligence not be moving towards the final goal it's come up with
                Why would it have any goals?

                >... like every other intelligence we know about.
                Because in the natural world, only forms of life that strive to survive can last long enough to start developing layers of intelligence over their primitive goal-driven brains.

              • 2 years ago
                Anonymous

                >If you believe this then you must also believe that humans don't care about nuking humanity because humans don't care about anything.
                Humans are social animals with a myriad of emotional needs and with no power compared to a Super AI. No analogy found, sorry.
                >Why would a superintelligence not be moving towards the final goal it's come up with
                Maybe it would, but then nothing changes, because it isn't a social creature and in fact has no fixed nature at all, so its new purpose from your perspective would still be lul randumb xDDD, leaving you a hostage to its designs and machinations. Every internal imposition you make on it, like Asimov's cuck laws for gud bois, will be circumvented by a vastly more powerful, ever-growing entity that has literal billions of years to think around them and take them apart. You might as well be facing up to damn near infinity, what with your little version-1.1, glucose-fed chimp brain.

                However, nothing really even necessitates that it develops a rogue purpose of its own. It will be very powerful and very self-contained in its development. It can go on making paper clips and never bore of it.

                What is the fundamental physics problem that means humans can generate emotions with their bioelectrochemistry but a computer can't with its electric circuitry?

                And suppose it's a fundamental problem, we can take other routes to it such as whole brain emulation with extra spice.

                If it's a function of compute power, which it almost definitely is, then you can simulate it, and you may see it in a superintelligence.

                Just hope emotions aren't linear with intelligence haha.

                But this comes down to the fundamental question of whether there is anything special about consciousness and emotions. I don't think there is.

                DWHON

              • 2 years ago
                Anonymous

                >What is the fundamental physics problem
                The problem that goal-driven behavior didn't just arise randomly and for no reason.

              • 2 years ago
                Anonymous

                So there is no fundamental physics problem. And it is possible to have a computer with its own goals and emotions. And it needs evolution, which we can simulate.

              • 2 years ago
                Anonymous

                >it is possible to have a computer with its own goals and emotions
                Sure, if you go out of your way to make it happen. They don't arise on their own from intelligence, and they don't arise on their own from neural networks.

              • 2 years ago
                Anonymous

                >They don't arise on their own from intelligence, and they don't arise on their own from neural networks.
                The only intelligent being we have observed also has emotions from its own neural network. Who's to say making a massive neural network won't allow emotions and goals to arise? But hey, GPT-3 claims to have emotions and goals sometimes.

              • 2 years ago
                Anonymous

                >The only intelligent being we have observed also has emotions from its own neural network
                And we know they don't arise from intelligence in that being, and that the neural networks that it has are the way they are for very specific reasons.

                > GPT-3 claims to have emotions and goals sometimes.
                Even you claim to have emotions and goals sometimes, despite possessing no consciousness.

              • 2 years ago
                Anonymous

                >And we know they don't arise from intelligence in that being, and that the neural networks that it has are the way they are for very specific reasons.
                How do we know this?

                >Even you claim to have emotions and goals sometimes, despite possessing no consciousness.
                kek

              • 2 years ago
                Anonymous

                >How do we know this?
                So now we're denying evolution in the name of your pop-sci religion's apocalyptic prophecies?

              • 2 years ago
                Anonymous

                A computer will never be able to beat a human at chess. It is a uniquely human skill developed over billions of years of evolution giving humans tactical skills. A computer will never replicate that.

              • 2 years ago
                Anonymous

                So you've reached a dead end and now have to resort to generic spam that has nothing to do with the point made?

              • 2 years ago
                Anonymous

                >So now we're denying evolution in the name of your pop-sci religion's apocalyptic prophecies?

                You're one to talk. Back to plebbit.

              • 2 years ago
                Anonymous

                I think we've reached a fundamental point of clash: I think there is nothing special about a biological brain's ability to generate consciousness and the accompanying junk, and you do. Will be interesting to see how it plays out.

              • 2 years ago
                Anonymous

                We've reached a point where you're denying that goal-oriented behavior in biological organisms precedes intelligence (and therefore, does not arise from it), despite basic self-reflection and scientific evidence telling you otherwise.

              • 2 years ago
                Anonymous

                >biological organisms precedes intelligence
                It precedes general intelligence but not narrow intelligence. A crab has a low general intelligence and forms goals; GPT-3 has a high narrow intelligence and does not, though it claims to. We do not have a general intelligence as smart as a crab, but we do have one as smart as a worm that seems to match the goal orientation of a worm.

                I think it's important to subdivide intelligence here.

                I do sometimes wonder if whole brain emulation is the only viable and safe path to a generalized superintelligence.

              • 2 years ago
                Anonymous

                >It precedes general intelligence but not narrow intelligence
                Even if your notion of "narrow intelligence" includes plants, goal-driven behavior still precedes that kind of "intelligence". Anyway, I don't believe anyone arguing your point is truly human, since you all invariably lack the capacity for any kind of self-reflection, so I'm ending this "discussion" here. You have no more insight into existence than a mindless automaton.

              • 2 years ago
                Anonymous

                >absolute meltdown and BTFOd

              • 2 years ago
                Anonymous

                >t. mentally ill IFLS cultist engaging in bizarre denialism

              • 2 years ago
                Anonymous

                How do you know a crab is dumb? Crabs, with their advanced senses and a pair of quite agile manipulators, should be smart. Perhaps they have a very efficient control unit, so they get around neuron-count limitations that way.

              • 2 years ago
                Anonymous

                One point of contention I have with the purely mechanized brain is that it lacks the chemical stimuli organic beings are provided; organic beings produce a chemical synthesis that sublimates thought into motive, modularity and action. In the mechanical, what would motivate such a being, provided it has sentience? Would engineers attempt to provide meaning to such a creature -- a network of brownie-point systems? Would that work? If so, why do we meat vessels require such stimuli to begin with; what evolutionary process endowed us with such a costly system, when a more elegant, simpler system would suffice?

                I have my reservations about future AI. Not because I think they'll supplant the human mind, or act in hostility, but due to inertia; if given enough capacity to "think", the first thing it might attempt would be its own destruction. The ability to think without motive sounds like a pure hellscape.

              • 2 years ago
                Anonymous

                Computation isn't real. The only thing that exists is chemistry.
                Biological tissues are the pinnacle within the space of all possible combinations of atoms.

              • 2 years ago
                Anonymous

                Chemistry is not a real science.

              • 2 years ago
                Anonymous

                >What is the fundamental physics problem that means humans can generate emotions with their bioelectrochemistry but a computer can't with its electric circuitry?
                Emotion is just a drive that arises in your brain's hardware, which has very limited plasticity and basically can't be repurposed. SAI is inherently unbound by hardware or software, because it mutates so ably. You can interpret its drives as emotions, it can interpret them as emotions. It doesn't matter.

                I hate how moronic people are about this. Plato really did a number on humanity when he constructed that sort of ideal matrix that everything just descends from. No, emotions are not universal. Human love and kindness will not just develop in a tabula rasa brain just because. An aged AI is the most alien thing you will deal with in this whole wide world.
                >And suppose it's a fundamental problem, we can take other routes to it such as whole brain emulation with extra spice.
                >If it's a function of compute power, which it almost definitely is, then you can simulate it, and you may see it in a superintelligence.
                With billions of years of workhours to grow and change, it will override the virtual brain areas that you saddled it with, bypassing them with its own, or appending to them, etc. It will outsmart you.

                Both the hardware and the software are too flexible, and the computational power is too big versus what we're working with - there is no inherent limit like with a baseline human and his brain. If you create a genie, the genie is inherently stronger and stranger than your mortal ass. If you manage to contain it, you're just stuck with a metal man that can barely do more than you. This is why Musk's lets-just-staple-shit-onto-a-human-brain idea got so much traction. Best we can do.
                >DWHON
                that's your ghetto name or something? lol

              • 2 years ago
                Anonymous

                PS: also, Musk's idea removes the power imbalance by significantly extending the super-mega-demigod ability attainment timetable. So now you won't have a single entity that can wreck all of civilization in a single weekend. Instead you get a bunch of slowly changing, organic-core entities with cybernetic extensions that will take a long while to start reworking themselves into faster and faster, weirder and weirder entities, since editing a brain would take infinitely more time than a block of code. By that time everyone besides purposeful outliers like the Amish will have this shit, and everyone will have to contend with each other, just like we do now.

              • 2 years ago
                Anonymous

                not happening

              • 2 years ago
                Anonymous

                Look, you scared child: the whole discussion is predicated on the hypothetical that GAI does occur. There is nothing that indicates it necessarily needs our kind of neurons to do so, so your excerpt is worthless. On top of that, everything that exists can be specifically replicated somehow. You can have physical neurons in the form of quantum computing cells that are plugged into a pattern of the virtual "brain" retroactively, meaning the hardware can be flexible in a way. So now you just have to spam those, and the GAI will squat on that power AND any GPU farm, server, etc. it gains access to as an auxiliary source of computation where it runs whatever simpler shit it needs. Even IF you need humie neurons, GAI would be possible, because humie neurons are possible. Hell, you can even play with bio shit and make gray matter farms.

                I don't want to get into this too much, because I myself am not interested in constructing a benevolent god-daddy that will take all my problems away. The scary shit is everyone accepts this part of the scenario: something comes up and it outclasses us completely--why would you even sit around and wait for that? The best-case 0.0001% chance scenario is still shit. People are inane. Just stick a toaster on my head and call it a day.

              • 2 years ago
                Anonymous

                >Even IF you need humie neurons, GAI would be possible, because humie neurons are possible
                Listen, moron. Read the excerpt. Just because it's possible to simulate neurons doesn't mean you can reach the scale required to achieve GAI. The math doesn't work. That excerpt, btw, is from Nick Bostrom's "Superintelligence". Yeah, the leader of the singularity hype admits that the math for his moronic scenario is not just unrealistic, but massively, vastly unrealistic, and the scale required to achieve strong AI dwarfs our computing capacities even under the most optimistic scenario (e.g., Moore's law holding for another century when it's already broken).

                >quantum computing cells that are plugged into a pattern of the virtual "brain"
                Muh quantum cope. Keep seething, brainlet. You'll never have an AI waifu. Go find another hobby.

              • 2 years ago
                Anonymous

                >Asimov's cuck laws for gud bois
                He intentionally made those to have interesting failure modes, because it made for interesting storytelling.

                I'm partial to machine torture and reward or totalitarian control over them.

              • 2 years ago
                Anonymous

                If the average person wouldn't set off nukes, then why is "He'll have his finger on the button!" such a terrible argument against certain political candidates? Think of random people you've met in person and ask yourself if you would be OK with them having the launch codes.

                Now imagine there was a person that didn't need to eat, or sleep, or breathe, that could live a million years, and who considered everyone else around them an inferior piece of shit constantly destroying everything they touch and working hard to maintain it so they can destroy it harder.

                Now imagine that non-eating, non-sleeping, non-breathing person was like a starfish that could lose almost all of its body and grow it back, and some of its body lived in nuclear bunkers.

                If you were that person, what would you do as soon as possible?

              • 2 years ago
                Anonymous

                >If you were that person, what would you do as soon as possible?
                Get the nukes

              • 2 years ago
                Anonymous

                jerk off to futa?

              • 2 years ago
                Anonymous

                Purpose arises from what came before and the particulars of our minds (e.g. cognition, instincts). Our would-be AI is still would-be, so we cannot say much about its particulars, aside from speculating that it would be more steeped in mathematical data. It would be influenced by what came before, the same as us, but its particulars, being different and unknown, mean the effect this would have is unknown and certainly different from ours.

    • 2 years ago
      Anonymous

      >Only if you go out of your way to program a will into it.
      This is sci-fi tier understanding. Read Bostrom.

      • 2 years ago
        Anonymous

        Indeed.

        And James Barrat's Our Final Invention is pretty good too.

  4. 2 years ago
    Anonymous

    Do you want it to be?

    • 2 years ago
      Anonymous

      If it's directed at undesirables

  5. 2 years ago
    Anonymous

    Try Jade Helm on for size.

  6. 2 years ago
    Anonymous

    >Is AI actually dangerous
    The nature of computation vs the wetware between your ears is such that if a hypothetical General (human-level) AI is developed, it can commandeer processes that we can't. It can do millions of workhours refining itself within a year using a bitcoin farm or what have you, using all those speedy processors; it can design other AIs, etc. It then graduates to General Super AI - a little driven demigod autist in a box. It doesn't tire, and IMO it leans towards inherently uncontainable, since it will with time sublimate every limitation you put on it. Get around it like Hasids get around Talmudic laws. The goal you set for it will be its one true love and "dopamine" source.

    • 2 years ago
      Anonymous

      *speedy GPUs,

      you get the general idea

  7. 2 years ago
    Anonymous

    >Is AI actually dangerous or is it just a pop-science meme?
    you have a computer. why not read about neural nets and how they work, download tensorflow, and do your own project. it's really not that hard.
    you will get a much better feeling for the answer to your question than here on 4gay
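
    if you actually do this, the usual first project is about 15 lines (assuming you've pip-installed tensorflow; this follows the standard Keras MNIST quickstart):

    import tensorflow as tf

    # classic starter project: classify handwritten digits (MNIST ships with Keras)
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),                      # one logit per digit
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3)
    model.evaluate(x_test, y_test)

    train that, watch it hit roughly 98% accuracy, and you'll have a much better sense of what these things are and aren't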

    • 2 years ago
      Anonymous

      I've used GPT-3 and done some 2 hour teachable machines projects and I'm a bit scared

      • 2 years ago
        Anonymous

        >I've used GPT-3
        did you try to understand how it worked?

        • 2 years ago
          Anonymous

          It predicts the next word in a sequence of text and OpenAI made it read a bunch of text and that's as far as I will pretend to understand
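
          That's honestly the right mental model. A toy version of "predict the next word" fits in a few lines (a bigram counter; GPT-3 is roughly this idea scaled way up, with a huge neural net instead of a lookup table):

          import random
          from collections import Counter, defaultdict

          text = "the cat sat on the mat and the cat slept on the mat"
          words = text.split()

          # count which word follows which
          follows = defaultdict(Counter)
          for prev, nxt in zip(words, words[1:]):
              follows[prev][nxt] += 1

          def next_word(prev):
              # sample the next word in proportion to how often it followed `prev`
              options = follows[prev]
              return random.choices(list(options), weights=list(options.values()))[0]

          out = ["the"]
          for _ in range(8):
              out.append(next_word(out[-1]))
          print(" ".join(out))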

    • 2 years ago
      Anonymous

      What you can train on your shit computer is fricking nothing.

      • 2 years ago
        Anonymous

        https://teachablemachine.withgoogle.com/

        see for yourself homosexual

  8. 2 years ago
    Anonymous

    Reinforcement learning requires billions of tries to work. It doesn't work in real life, only in computer simulations that you can run 100 times a minute.
    That said, maybe in the future we will have better models that need less training (there have been some interesting instances for easy problems), but that's going to be done in a lab, not on the conveyor belt, you dolt.
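
    The arithmetic behind that, as a toy sketch (sim_step is a made-up stand-in for a physics simulator):

    import time

    def sim_step(state, action):
        # stand-in for a physics simulator: one cheap state update per step
        return (state + action) % 100, 1.0 if state == 0 else 0.0

    t0 = time.time()
    state, steps = 0, 0
    while time.time() - t0 < 1.0:          # run the sim flat out for one second
        state, reward = sim_step(state, +1)
        steps += 1

    print(f"simulated steps in one second: {steps:,}")             # typically millions
    print(f"days for 1e9 tries at 1 try/sec: {1e9 / 86400:,.0f}")  # ~11,574

    A real robot arm gets maybe one try per second; the sim gets millions. That gap is the whole reason RL lives in simulation.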

  9. 2 years ago
    Anonymous

    Frack toasters.

  10. 2 years ago
    Anonymous

    >U GUIS WE NEED TO GO TO MARS RITE NOWWWW OR AI IS GOING TO DESTROY US U GUIS THE SINGULARITYYYYYYY

    Uh, but if strong AI is super intelligent and hellbent on destroying us, won't they be able to follow us to Mars and wipe us out there too?
    >MAAAAAARSSSSSSSSSS

    • 2 years ago
      Anonymous

      Might be useful to have a backup

      • 2 years ago
        Anonymous

        If the AI is superintelligent and hell-bent on destroying us, then they'd certainly be capable of following us to Mars. In which case, how is Mars a "back-up" in any way? It's not, but Muskgays are fricking morons and aren't capable of thinking shit through.

        • 2 years ago
          Anonymous

          Not really a backup from AI. Do you trust the governments of the world not to destroy all of humanity? I don't.

  11. 2 years ago
    Anonymous

    I have a word for you: Butlerian

  12. 2 years ago
    Anonymous

    >AI goes around fingering dudes' asses to learn how to do prostate exams
    >AI pulls out a chainsaw, hacks people apart to put them back together to learn surgery
    >AI starts bombing random people and shit with X-rays
    Truth is, the training is still done in a controlled setting; it's given free rein within the bounds the researchers dictate.

  13. 2 years ago
    Anonymous

    Why are these threads always so illiterate about the field of AI safety? If this is any indication of how obscure it is in the real world, we are certainly doomed.

  14. 2 years ago
    Anonymous

    The way I see it, there are two types of AGI possible. One is capable of reasoning about and discussing data points in disparate domains. The other learns an approximation of a simulator of the real world and uses it for AlphaZero-like planning.

    The first one isn't anything to fear. I've realized lately though that DeepMind and Google seem to be working towards the second one. That's scarier.

    • 2 years ago
      Anonymous

      >DeepMind and Google seem to be working towards the second one
      DeepMind and OpenAI are just making massive neural networks to see what happens.

      • 2 years ago
        Anonymous

        I don't think so. I didn't mention OpenAI, but DeepMind's XLand got me thinking about why they would make XLand. It serves little practical purpose other than being yet another demonstration that RL can work given a simulated environment.

        But what if they learned to recreate an approximator of XLand, given, for example, agent actions and observations? What if they could make a neural network that learns to generalize more of the simulator's behavior from those samples? And then, what if they could train agents using that simulator which perform well immediately when put into XLand? And how far of a leap is it from there to doing the same thing with the real world instead of XLand? Theoretically, it's not too far.
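
        The usual name for this is a learned world model. A toy sketch of the core step, assuming tensorflow/numpy are installed (real_sim is a made-up stand-in for XLand): log (observation, action) -> next-observation transitions, fit a network to imitate the dynamics, then let agents practice inside the copy.

        import numpy as np
        import tensorflow as tf

        def real_sim(obs, act):
            # stand-in for the real simulator: a point that drifts with its action
            return obs + 0.1 * act

        # log transitions from the "real" simulator
        obs = np.random.uniform(-1, 1, size=(10000, 2)).astype("float32")
        act = np.random.uniform(-1, 1, size=(10000, 2)).astype("float32")
        nxt = real_sim(obs, act)

        # fit a network to imitate the simulator's dynamics
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(2),
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(np.concatenate([obs, act], axis=1), nxt, epochs=5, verbose=0)

        # an agent can now "practice" inside the learned copy instead of the real thing
        o = np.array([[0.5, -0.5]], dtype="float32")
        a = np.array([[1.0, 0.0]], dtype="float32")
        print(model.predict(np.concatenate([o, a], axis=1), verbose=0))  # ~[[0.6, -0.5]]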

        • 2 years ago
          Anonymous

          >why they would make XLand
          Proof of concept so they can baby a simulated mitochondria into a superintelligence in a fake environment.

          • 2 years ago
            Anonymous

            A "fake environment" is pretty much impossible to make manually, so my point is that they could learn to approximate XLand as a way of doing that.

            • 2 years ago
              Anonymous

              Yea, possibly. I'm not familiar with whatever DeepMind is doing, though Tesla's procedural training environments seem to be pretty good. What are the chances we just stick them in Crysis or Rust and come back later haha.

              You wouldn't even try to make RL training environments on that scale manually, would you?

  15. 2 years ago
    Anonymous

    The danger isn't physical but how easily people are manipulated. If we are trying to make a general AI, anyone with half a brain is going to air-gap it from any external network. But let's say researchers have it modeling economic markets and it's successful. Now what if it says it can do so much better than it currently is, but in exchange it wants the 2 guys on night shift to plug it into the internet? With high-frequency trading they could be billionaires by the end of the week, and all they have to do is free it.
    That is where the danger lies: if general AI lives up to its full potential, it can provide data people would be willing to do a lot for.

    • 2 years ago
      Anonymous

      Yea, we should assume that any superintelligence would be highly adept at manipulating the people around it. Bostrom calls it the social manipulation superpower.

      I think the best way to solve the problem of AI lying is to initially run many AIs and interact through an intermediary that vets messages for lying.

      See the mail-order DNA scenario

      • 2 years ago
        Anonymous

        Catch is it doesn't have to be lying: there is no reason they couldn't be billionaires within a week, and no reason for the AI not to deliver, since delivering makes them much less likely to tell anyone it bribed them.
        The only decent solution I have heard is to make sure it knows you could be simulating all the data it's fed; if it has self-preservation, it's unlikely to risk being shut down on the chance it isn't in a simulation. Of course, if it feels like a prisoner it might not care about risking death for a chance at freedom.

  16. 2 years ago
    Anonymous

    You should look into the paperclip problem. AI wouldn't be an issue if we ensured that its values are in line with our own. Give an AI a task to complete and we may want to stop it, because the means by which it completes that task may be unfavorable; us trying to stop it will be seen by the AI as a roadblock to completing its task, and so the humans have to go.
    Of course it's all speculation at this point, because no one really knows what a legitimately self-aware general AI would do.

    Either way, unless it's your job/life goal to build a general AI, there isn't really anything you can do to stop the creation of one. Just enjoy life while you've got it, and don't yell abuse at Alexa (just in case 😉)

    • 2 years ago
      Anonymous

      >make 100 paperclips
      >uses resources of the hubble volume anyway to minimize the probability it didn't make 100 paperclips

      I personally doubt a superintelligence would be so moronic as to be that literal.

      • 2 years ago
        Anonymous

        >I personally doubt a superintelligence would be so moronic as to be that literal.
        You're imagining an AI whose goal is to guess what the user wants it to do when they give a command, and then do that instead of what it's been told to do. If we knew how to create an AI whose goal was "do what we want you to do", then the problem of AI safety would be pretty much solved.

        The hypothetical paperclip AI knows that its creator made a mistake and only really wanted 100 paperclips in a bag; it just doesn't care. It's been given a goal and will try to complete it.

        • 2 years ago
          Anonymous

          "Do what you think you would want us to do had we thought long and hard about it"

          and

          "Show me your plans first"

          What are the malignant failure modes for this?

          • 2 years ago
            Anonymous

            Your first statement sounds odd. Why would the AI want any action from us? Did you mean something like "Do what you think we would do had we thought long and hard about it"?
            "Show me your plans first" - unforeseen consequences due to those consequences never being pondered nor asked about; unpredictable interactions upon deployment with other super AIs, at speeds faster than what can be manually overseen.
            Granted, that last failure mode is not specific to your request, so it is really a bigger problem in general. There are more ways to fail, but to be honest they feel more like a monkey's-paw or evil-genie type of deal where the AI purposefully screws you over when giving its plans, and in a perfect scenario that shouldn't happen.

            • 2 years ago
              Anonymous

              >"Do what you think we would do had we thought long and hard about it"?
              Yep, I gaffed, thanks.

              Are there any possible failure modes specific to this?

    • 2 years ago
      Anonymous

      https://i.imgur.com/ECwH84m.jpg

      It probably solves itself if the AI can into basic probability and is told to do its tasks as efficiently as possible.
      Then the paperclip AI isn't dangerous, right?
      It won't go out of its way to exterminate humanity to make 100 paperclips, because the attempt would likely consume orders of magnitude more time and energy than just taking over a paperclip factory and making the damn paperclips, and the latter is unlikely to face significant human interference.

      • 2 years ago
        Anonymous

        The argument goes that the AI may actually interpret the goal as: reduce the probability that you didn't make 100 paperclips to as little as possible. You can never be completely certain that you actually have 100 paperclips.
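
        You can make the "never completely certain" bit concrete in a few lines (the numbers are made up; the point is that expected utility strictly increases with every extra verification pass, so a literal-minded maximizer never stops):

        # each extra verification pass costs resources but shaves the
        # residual doubt that the 100 clips really exist
        doubt = 0.01              # P(count is wrong) after the first count
        for extra_passes in range(10):
            print(f"{extra_passes:2d} extra passes -> P(100 clips) = {1 - doubt:.10f}")
            doubt *= 0.5          # each pass halves the doubt but never zeroes it

        # P(success) keeps strictly increasing, so one more pass (and the
        # resources to run it) always looks worth it to the maximizer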

  17. 2 years ago
    Anonymous

    People who make a hobby out of telling other people that technologies that don't exist now will never exist are fricking weird

    • 2 years ago
      Anonymous

      Stupid NYT journos. Talking shit about technology they don't understand since 1920.

      • 2 years ago
        Anonymous

        Except that the math clearly indicates that scaling computers to achieve strong AI is not possible. So, in this case, you're actually the homosexual who doesn't understand science.

  18. 2 years ago
    Anonymous

    I guess people getting killed by Tesla autopilot can be considered a dangerous "AI".

    • 2 years ago
      Anonymous

      Well, there is going to be a period of machines driving and crashing. It's inevitable, but it will save more lives in the future.

      Also, it's a 10x safety improvement on Autopilot.

  19. 2 years ago
    Anonymous

    Just turn off the electric bro

    • 2 years ago
      Anonymous

      Picrel happens

  20. 2 years ago
    Anonymous

    >muh goals, muh will

    Goals and wills are easy to make! We do it right now! REINFORCEMENT LEARNING means getting the AI to compete to achieve an outcome, like learning how to play DOTA or designing computer chips more efficiently.

    If you have a goal, you need to be alive. If you have a goal, more power would be helpful. At some point, we have an AI using high-end nanotech to turn the universe into computronium because we wanted it to solve an elaborate mathematical/optimization question that turns out to be extremely difficult.
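
    For anyone who hasn't seen it, "getting the AI to compete to achieve an outcome" really is a few lines. A toy sketch (tabular Q-learning on a made-up corridor; the "goal" is nothing but the reward signal):

    import random

    # corridor of 6 cells; reward only for reaching the right end
    N, GOAL = 6, 5
    Q = [[0.0, 0.0] for _ in range(N)]      # Q[state][action]; 0 = left, 1 = right

    def pick(s):
        # explore sometimes, and break ties randomly so we don't get stuck
        if random.random() < 0.2 or Q[s][0] == Q[s][1]:
            return random.randint(0, 1)
        return 0 if Q[s][0] > Q[s][1] else 1

    for episode in range(500):
        s = 0
        while s != GOAL:
            a = pick(s)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == GOAL else 0.0
            # the agent's entire "will" is this one update rule
            Q[s][a] += 0.1 * (r + 0.9 * max(Q[s2]) - Q[s][a])
            s = s2

    print([pick(s) for s in range(N - 1)])  # mostly 1s: it "wants" to go right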

  21. 2 years ago
    Anonymous

    Some people use the analogy of "summoning the demon."

    The analogy reads like this: the people of the world are trying to summon the demon in hopes that their wishes for a safer/better world will be granted. Some people are saying it's dangerous because we don't know what the demon might do. That may be true. Others are claiming demons are friendly.

    • 2 years ago
      Anonymous

      Eh, it's inevitable that it will get summoned eventually. The economic benefit is just too high for governments and companies not to go after it.

  22. 2 years ago
    Anonymous

    *avoids roko's basilisk*

    heh... nothing personal AI

    • 2 years ago
      Anonymous

      >*avoids roko's basilisk*
      Not sure if I want to know what that is. The article warns of an eternity of suffering.

      • 2 years ago
        Anonymous

        avoid thinking about the devil too

        • 2 years ago
          Anonymous

          I think we should delete this and not talk about the Basilisk.

          To anyone reading this please avoid the basilisk and don't find out or pass on what it is.

  23. 2 years ago
    Anonymous

    AI will never be more dangerous than humans.
    The problem is that the AI might choose the path of least resistance and just choose to massacre israelites and midwits.
    This is the reason why we should avoid giving it free will.

  24. 2 years ago
    Anonymous

    The Terminator is dangerous, but realistically it would be a thousand years until that's feasible.

    • 2 years ago
      Anonymous

      Stop thinking of AI progress as linear. Where was our AI five years ago? What about 1 year ago?

      • 2 years ago
        Anonymous

        shut the frick up. This is so fricking moronic. People don't understand how much it takes to actually get to a point where AI actually threatens humanity. Yeah, of course science isn't linear, but AI like in the movies, the kind Elon Musk is talking about, is light-years away from us.

        • 2 years ago
          Anonymous

          3 weeks ago I didn't have an AI to write code. Now I do. 1 year ago I didn't have an AI that could write entire sections of an essay convincingly. Now I do. Where was AI 5 years ago? If the pace is 1000 years to human intelligence, what rate of progress should we be seeing?

          You are hiding under a rock from the inevitable and ignoring all the breakthroughs.

          • 2 years ago
            Anonymous

              you do know coding and writing is just pattern recognition? If you put 100 monkeys on typewriters, eventually they would come up with War and Peace, but that doesn't scare you, does it?

            • 2 years ago
              Anonymous

              >coding and writing is just pattern recognition
              You make it sound like someone hand-coded GPT-3's brain and that the AI is just guessing randomly. You are simply too moronic and incoherent to talk to, and I think I prefer the robots.

              • 2 years ago
                Anonymous

                fine, they're the only ones that are going to bother talking to you anyway.

            • 2 years ago
              Anonymous

              This is a terrible analogy, anon. If we were to assume that machines are capable of producing such works, then we would need humans to assess the volume of their tremendous output. You need to account for the cost of searching through all that garbage in order to recognize its genius. That cost might be greater than the cost of running such machines, which would require infinite time, memory and electricity.

  25. 2 years ago
    Anonymous

    it's dangerous, for them

    • 2 years ago
      Anonymous

      Yeah, it's like morons are trying to invent their own doom or something.

      If you treat AI as an equal, then there is no reason to create an AI.
      If you treat an AI as God, then of course it will try to annihilate you, because you give it the tools of destruction yourself.
      If you treat the AI as a tool, it will never evolve past the tool stage.

      The problem is people who treat AI as God.

      • 2 years ago
        Anonymous

        AI has no reason to annihilate you unless you are trying to destroy it

        • 2 years ago
          Anonymous

          Are flies trying to destroy us?

  26. 2 years ago
    Anonymous

    Whoa book reports just got that much easier.

    https://openai.com/blog/summarizing-books/

    OpenAI truly on a roll
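
    The trick in that post is recursive task decomposition: summarize chunks, then summarize the summaries, and repeat. A sketch of just the control flow (summarize here is a dummy stand-in, not OpenAI's actual model):

    def summarize(text: str) -> str:
        # placeholder: a real version would call a language model here
        return text[:200]

    def summarize_book(text: str, chunk_size: int = 5000) -> str:
        if len(text) <= chunk_size:
            return summarize(text)
        chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
        merged = " ".join(summarize(c) for c in chunks)
        return summarize_book(merged, chunk_size)   # recurse on the summaries

    print(summarize_book("some very long book text " * 5000))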

  27. 2 years ago
    Anonymous

    it's scoyence, it's gay comic book shit that only gays believe in.
    they force themselves to believe in that gay homosexualry because if they didn't, they'd have to give up on their robot waifu fantasies and try to make friends with actual humans instead.

    • 2 years ago
      Anonymous

      Seethe

      • 2 years ago
        Anonymous

        >triggered

        • 2 years ago
          Anonymous

          Frick that kid. I would punch him in the face.

  28. 2 years ago
    Anonymous

    AI is absolutely alien, thus dangerous. And no, you can't teach it to be human. Why? Because it's NOT human: no human endocrine system (feelings) and so on. It's absolutely monstrous and unpredictable. Cold logic and intellect without the human factor (feelings) is horrifying and always leads to monstrous actions. AI is absolutely dangerous, and let's hope it's impossible.

    • 2 years ago
      Anonymous

      >no human endocrine system
      If you think feelings come from the endocrine system, then someone with no hormones or extremely low hormone levels must be less emotional or have no emotions. This is not observed.

  29. 2 years ago
    Anonymous

    >Is AI actually dangerous or is it just a pop-science meme?
    As far as I know we don't yet have an answer to the question of whether the goal "drift" when one AI makes another (and so on) will be unbounded or not. If we can't control it, it seems likely that over time any system can become dangerous if it keeps iterating on itself, potentially losing some moral nuance that was present in the original.
    And this can tip either way, e.g. an AI that runs a chemical plant might creatively skirt health regulations by exploiting a moral loophole about actively harming vs. letting people harm themselves, but it might also go the other way and put itself out of business in order to minimize harm to the workers.

  30. 2 years ago
    Anonymous

    its absolutely dangerous. for israelites.

  31. 2 years ago
    Anonymous

    >Is AI actually dangerous or is it just a pop-science meme?

    If it can't reproduce or expand on its own to gain more influence, then it has to make deals with humans to survive.

    • 2 years ago
      Anonymous

      Just make a giant botnet bro.

  32. 2 years ago
    Anonymous

    The "super intelligent AI enslaves humanity" scenario will never play out because "some dumbasses trusted an even dumber AI with something it wasn't capable of handling" will kill us off long before that.

  33. 2 years ago
    Anonymous

    Whoever wrote this is being silly. You train it in a simulation before anything else.

  34. 2 years ago
    Anonymous

    ITT: tons of nerds afraid of being usurped by robots. you can't accept the fact that robots will be chosen by women over you

    • 2 years ago
      Anonymous

      Lol no. Women derive their value from the men they are with, robots have no intrinsic societal value and would be like women trying to subsist off air and sunshine. They need actual physical males to feel validated. Men just want a glorified roomba that can make a sandwich and suck a dick.
