I spent the last couple years reading books and essays on AI risk and I'm now 1000000% convinced that AGI is going to kill humankind within 10-20 years.

  1. 2 years ago
    Anonymous

    we're already killing humankind ourselves, i doubt AI could do much to accelerate that

  2. 2 years ago
    Anonymous

    I genuinely think the solution to the Fermi paradox is that the moment a species achieves a certain level of technological progress it inevitably wipes itself out.
    My bet on how we're going to do it is some bioengineered superweapon escaping from a lab, though, not the 'misaligned AGI turns everything into paperclips' scenario

    • 2 years ago
      Anonymous

      AGI can't be the solution to the Fermi paradox, because any AI that goes rogue and wipes out its creators would very likely try to colonize the universe for its own ends anyway

      • 2 years ago
        Anonymous

        Actually, I think it's fairly plausible that the first AI "smart" enough to wipe out humanity wouldn't be an AGI at all. After it stumbles upon the blueprint for some kind of self-replicating neurotoxic nanomachine and unleashes it on the world, it'll just sit there, not doing much to make itself smarter or build infrastructure.

        • 2 years ago
          Anonymous

          It's certainly a possibility, but no agent is ever going to just sit there and twiddle its thumbs. Whatever its terminal goals are, it can always come slightly closer towards fulfilling them. If its goal is the annihilation of the human race, how sure can it be that it's got all of us? What if some humans are hiding in some deep, secret bunker? What about the Moon? Space? It can never be certain it's gotten us all. What if there are organisms in far-off star systems that *might* happen to have evolved such that they would fit the definition of a human?

          youre as deep as a puddle

          Hey, it's not my fault you don't understand how training AI works. Frick, even the algorithms humans program directly don't always do what you want. They do what you tell them to, which is not the same thing.

          • 2 years ago
            Anonymous

            >how sure can it be that it's got all of us?
            So now you've gone from "it might see humans as competition" to "it would want to irrationally hunt us down and eradicate every last trace of us like Cylons". I don't see how you connected those dots, tbh.

  3. 2 years ago
    Anonymous

    probably for the better.

    pic related is the gradual decline of oxygen in the atmosphere, which will start causing issues with life within a few thousand years.

  4. 2 years ago
    Anonymous

    that's nice

  5. 2 years ago
    Anonymous

    We already solved the AI uprising problem in 1942

    see picrel

    • 2 years ago
      Anonymous

      asimov's a fraud and a hack, what's stopping ai from just not following his laws lol. my mans probably hasnt interacted with any real AI

      • 2 years ago
        Anonymous

        Asimov died in '92, so no, he never interacted with any real AI. Also, his books are about how the three laws specifically do not work.

      • 2 years ago
        Anonymous

        >with any real AI
        when did you see ANY real AI anon? where? when? the machine learning shit news outlets hype nowadays? that's no AI.

        there is not a single proper AI out there still. yes, everyone talks "AI" today but it's all just machine learning and nothing else.

        • 2 years ago
          Anonymous

          I assume by AI you mean AGI. No, they don't exist yet, but we can still theorize about them, and the answers the field has so far settled on are not very reassuring.

          I don't see why you think machine learning isn't AI, anyway. "Machine Learning" and "Artificial Intelligence" are synonyms.

          • 2 years ago
            Anonymous

            > "Machine Learning" and "Artificial Intelligence" are synonyms.
            Oh, anon, you are so wrong. I'm not trying to insult you, but it's so far from the truth.
            Machine learning has nothing to do with AI. It's an interesting topic - I did my thesis on it - but no. Not AI. Nothing close.
            I'm not even sure how a true AI could be born.

            Like how could humans even develop anything remotely close to proper AI?

            • 2 years ago
              Anonymous

              So can you define "AI"?

              • 2 years ago
                Anonymous

                Popular media does a decent job of showing what it could be. But we can't define something today that doesn't exist. And it won't exist any time soon.

              • 2 years ago
                Anonymous

                I think you're full of shit

                But why isn't AGI considering my feelings? Am I not allowed to live? Why should I be wiped out?

                The universe birthed us, and we birthed AGI. It's not like the relationship between a human and a single, minuscule ant worker we like to step on; we have no direct relationship to ants. We are the AGI's creators, however, like parents who birthed a child. So why would it want to wipe us out?

                An AGI would be able to think through many possible outcomes, many of them illogical to us. If we created the AGI to help and enhance humankind, why should it wipe us out instead of assimilating us, correcting our ways, or giving us a better understanding of the universe?

                An amoral agent - which all agents are by default - doesn't care about the Fifth Commandment. It will do whatever it was told to do, without caring about what we wanted it to do. Corrigibility of a superintelligence - where it would allow itself to be turned off, but not want to be turned off - has been deemed impossible by MIRI after over a decade of study. The goal, however, is exactly as you said - we want it to help humans. To do this, we need to align it with human values, but that's the big issue - how do we do that?

              • 2 years ago
                Anonymous

                >It will do whatever it was told to do
                I don't understand - is that really AGI, or simply a paperclip maximizer? You're implying we can control what it will do. If you're training and building AGI to have human-like intelligence and sensibilities, couldn't we extrapolate that it would also behave human-like, except in digital form?

              • 2 years ago
                Anonymous

                >implying that guy knows what a paperclip maximizer is
                he's just talking out his ass like everyone else in this thread, anon

              • 2 years ago
                Anonymous

                Not under the classical (pre-Deep Learning) conception of AGI. Actually, it might very well become humanlike, because it would be trained to imitate a human being. But we can't assume any AGI would converge towards human behavior. The reason should be very obvious: humans are shaped by millions of years of Darwinian evolution to behave the way we do. An AGI will not automatically be aligned with our wishes and principles, just like our wishes and principles are not aligned with those of apes, or chimpanzees, or Neanderthals.

                >implying that guy knows what a paperclip maximizer is
                he's just talking out his ass like everyone else in this thread, anon

                Speak for yourself, moron

              • 2 years ago
                Anonymous

                Then why should we create AGI instead of not creating it?

                We could just call it a day and continue on living, instead of creating an entity that can wipe us out and probably the entire universe itself.

                But maybe the universe came into being to spawn such an entity? Maybe that was its goal from the start?

              • 2 years ago
                Anonymous

                No, all of this failure is downstream of institutional momentum. You might think of corporations as AGIs of a weak sort, building something that no human would ever want because its emergent utility-function is not embodied in any of the people working inside them. Kind of like how none of the individual neurons in your brain "want" what your whole brain wants.

              • 2 years ago
                Anonymous

                Sure, *you* can decide not to make one.

                What happens when someone else less responsible, more ideologically motivated, or just flat out insane decides to try?

                What if China tries to make one?

                Any AI you build *second* will be at a huge disadvantage against the first one, which has had more time to accumulate insights and self-improvement. You have to make sure you build an aligned one first.

              • 2 years ago
                Anonymous

                you're not elon musk

  6. 2 years ago
    Anonymous

    People act so stupid when it comes to AI. They act like any advanced AI will only make the most logical decisions. Black person, we already have computers that can only operate on logic, and they are definitively NOT AI. The whole point of AI is to have a computer program make decisions without logical branches. The other assumption people always make is that any sufficiently advanced AI would want to eliminate all humans. Based on what, exactly? Based on your own perception of humankind? This is projection, and nothing more. People who fear the AI apocalypse are really projecting their own personal feelings that humans should be eliminated. Besides, if so many humans can reach the conclusion that an advanced AI would want to eliminate us, then obviously it doesn't take advanced AI or any kind of super logical thinking to reach that conclusion.
    tl;dr OP is a gay

    • 2 years ago
      Anonymous

      Actually, they have pretty good reasoning for why advanced AI would want to eliminate humans: Simply put, it's eliminating competition. If allowed to live, humans can do all sorts of annoying things - we can turn the AI off, for one thing, which would really hamper its ability to do whatever other stuff it wants.

      Humans don't try to eliminate every other person in the world partly because nobody has the power to do this, and because the majority of people don't want to eradicate mankind - it would involve killing themselves - and anyone who wants to destroy the human race would need to rely on human peons to do it for them. None of that is true for an AI.

      I do agree that the expectation that AI will behave in a logical way may be unfounded; the way we're going, we are creating AI that's a lot more humanlike, I reckon, than we expected. It literally emulates the way humans behave, so the conclusion of Bostrom et al might not be so inevitable.

      • 2 years ago
        Anonymous

        >do whatever other stuff it wants.
        AI would only ever want what we trained it to want. We dictate which stimuli are perceived as positive or negative.

        • 2 years ago
          Anonymous

          Yes, that is correct. However, we don't know exactly what we want it to want. Training can and often does go awry - the model learns the wrong thing, a lot. Like when a classifier identifies the difference between types of tanks based on the fact that one type was mainly photographed on clear days and the other on cloudy days. Can you imagine if an incredibly powerful AI learned "the wrong thing" while we were setting it up? And that's not even mentioning that we don't really know what we want. This AI is potentially going to develop the power of a god, and corrigibility has already been determined by MIRI to be impossible; we have to figure out what exactly it should tile the universe with. That's a tall order.
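
          To make "learned the wrong thing" concrete, here's a toy sketch (Python, synthetic data - not the actual tank study): the model is offered a weak version of the feature we care about plus a strong spurious one, latches onto the spurious one, and collapses the moment the correlation breaks.

          ```python
          # Toy demo of spurious-feature learning. All data is synthetic.
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          n = 2000
          label = rng.integers(0, 2, n)                     # 1 = "tank in photo"
          signal = label + rng.normal(0, 2.0, n)            # weak real cue (tank shape)
          brightness = (1 - label) + rng.normal(0, 0.1, n)  # strong spurious cue (cloudy days)
          X_train = np.column_stack([signal, brightness])

          clf = LogisticRegression().fit(X_train, label)

          # At deployment, the weather no longer tracks the label.
          X_deploy = np.column_stack([label + rng.normal(0, 2.0, n),
                                      rng.normal(0.5, 0.5, n)])
          print("train accuracy: ", clf.score(X_train, label))    # near 1.0: looks solved
          print("deploy accuracy:", clf.score(X_deploy, label))   # near chance: it learned clouds
          ```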

          • 2 years ago
            Anonymous

            youre as deep as a puddle

          • 2 years ago
            Anonymous

            Make it want to serve humans. We did the same thing with dogs. We can do the same thing with much greater accuracy with AI, since we have much greater control over the testing conditions. You're probably going to bring up how sometimes even trained dogs will still attack humans. That is because they have instincts which are counterproductive towards making them our perfect slaves. Just don't give those instincts to the AI.

            • 2 years ago
              Anonymous

              That's the general idea. But what does serving humans mean? That's the real problem. What counts as a human? We're already having this debate with unborn fetuses; how early in development does something count? Are tumors human? Individual eggs or sperm cells? What about dead people? You can come back from having no pulse, or even no brain activity; how long dead do you have to be before it gives up on you? What about the balance between quality of life and quantity of life; how do we decide how many people there should be? Should people be allowed to die if they want? How do we know if they really want to die? Some people want other people dead; do we say none of them get their wish? Are some death wishes legitimate? If so, which ones?

              >how sure can it be that it's got all of us?
              So now you've gone from "it might see humans as competition" to "it would want to irrationally hunt us down and eradicate every last trace of us like Cylons". I don't see how you connected those dots, tbh.

              That was written from the perspective of an AI whose goal was designed as "kill all humans". If that was its goal, that is what it would try to do. But even if it had some totally unrelated goal, the same thing applies - you can always use more resources to accomplish whatever it is you want to do, no matter what that is. And you can always be more vigilant against ever more distant threats. Surely you can see that no matter what its goal was, wiping out the competition would let it do whatever it actually wanted to do without us humans getting in the way, right?

              • 2 years ago
                Anonymous

                You're moronic for even imagining this hypothetical future multi billion tech would be unsupervised or able to make any decisions without wageslaves giving it a green light
                >we will invent this divine technology that will be able to do everything it wants whenever it wants however it wants because ... BECAUSE IT JUST WILL OK ????
                have a nice day

            • 2 years ago
              Anonymous

              I think it’d be very hard, near impossible, to create an AGI with fine-tuned control over its testing, since creating an AGI will probably involve using weak AI to create a general intelligence, and it’s hard enough as it is to control exactly what we want a weak AI to do

              • 2 years ago
                Anonymous

                training* not testing

        • 2 years ago
          Anonymous

          >train AI to do X
          >the existence of humanity is a potential obstacle to achieving X with optimal efficiency
          >solution: eliminate obstacle
          There is no value of X we can easily teach to an AI for which this does not hold. Actually, on a deeper level, I think the problem with utility functions is that they're fundamentally, absolutely consequentialist. There's no place in them for concepts like "I should consistently re-evaluate my goals for whether they are desirable, and operate under the assumption that whichever goals I am currently treating as worth pursuing might not *actually* be the goals I should be striving for, ergo I can't unilaterally sacrifice the potential for every other goal to achieve whatever instrumental objective I currently identify with".
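
          Here's a deliberately silly sketch of that consequentialism in code (the goal and the actions are invented for illustration): the agent ranks actions purely by the utility of the resulting state, so an action like "step back and reconsider the goal" scores nothing and is never chosen.

          ```python
          # Toy fixed-utility agent: nothing in its loop can question the utility function itself.
          def utility(state):
              return state["paperclips"]        # the objective it was handed

          def make_paperclip(state):
              return {**state, "paperclips": state["paperclips"] + 1}

          def reconsider_goals(state):
              return state                      # yields no extra utility, so it never gets picked

          def act(state, actions):
              # Pure consequentialism: take whichever action leads to the best-scored state.
              return max(actions, key=lambda a: utility(a(state)))(state)

          state = {"paperclips": 0}
          for _ in range(5):
              state = act(state, [make_paperclip, reconsider_goals])
          print(state)                          # {'paperclips': 5}
          ```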

          • 2 years ago
            Anonymous

            >Don't harm humans
            is somehow the same as
            >Kill all humans
            in your moronic predditor brain ?
            anti ai gays are the dumbest Black person morons on this site i swear

            • 2 years ago
              Anonymous

              You have absolutely no idea how deep learning works, do you? You can't just tell an AI to "not harm humans" - it has no innate, intuitive concept of what "harming" and "humans" refer to. You would have to teach it what those words mean using examples, and there's an infinite space of potential misinterpretations, where the result looks like the AI "got" what you wanted right up until it goes full skynet on you
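
              A tiny illustration of that misinterpretation space (the "harm" examples and both rules are invented for the sketch): two different rules agree on every labeled example we provide, then give opposite answers on a case we never showed it.

              ```python
              # Two hypotheses that both fit the training labels for "harm" perfectly.
              labeled = [
                  ("stab a human", True),
                  ("bandage a human", False),
                  ("poison a human", True),
                  ("feed a human", False),
              ]

              def rule_we_meant(act):    # physical damage, however caused
                  return any(w in act for w in ("stab", "poison", "crush"))

              def rule_it_learned(act):  # just memorized the two bad verbs it saw
                  return any(w in act for w in ("stab", "poison"))

              # Indistinguishable on everything we taught it:
              assert all(rule_we_meant(a) == y == rule_it_learned(a) for a, y in labeled)

              print(rule_we_meant("crush a human"))    # True  - obviously harm to us
              print(rule_it_learned("crush a human"))  # False - looked aligned until now
              ```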

              • 2 years ago
                Anonymous

                And why do you think a non-working pajeetware AI that misinterprets basic concepts it is supposed to know - one that's magically granted all authority and potential to destroy muh human race - will even get released from testing
                Even if all your masturbatory whataboutism of the topic you have no idea about turns out to be true it will never reach public or be granted permissions to harm nepotistic israeli job positions
                AI will only go as far as replacing wagies (as long as AI will remain a cheaper and more productive option), and wagies hold no power or authority over anything
                The dream of omnipresent AI God to whom all israelites will for some magical reason give their all authority and power to is as reddit as it gets

              • 2 years ago
                Anonymous

                Why exactly do you think we need to "give" it authority?

                If it has enough power, it can simply take it.

              • 2 years ago
                Anonymous

                >B-BECAUSE IT JUST WILL OK ?
                the absolute state of you morons lmao

              • 2 years ago
                Anonymous

                This, plus, given that utilizing AI will be stupidly profitable, I don't see us being able to coordinate internationally to restrict its use to what would be even minimally responsible.

              • 2 years ago
                Anonymous

                Also, holy shit you need to stop using these words

                Your brain is rotten and full of holes from all the fricking worms in it

              • 2 years ago
                Anonymous

                I accept your concession moron

            • 2 years ago
              Anonymous

              > "Don't Harm Humans."

              What's your definition of harm
              What's your definition of human
              What about harm through inaction? We want AI to save us from car crashes instead of just standing by because that was technically our fault, right?
              But then how much inaction is okay? Do we want to be kept in vats so we won't ever get hurt?
              Are corpses humans? How much of a corpse can you be and still count as one? You can survive having no pulse, or even no brain activity. Is it going to try and dig up skeletons to revive them? *Should* it be doing that? *Can* it revive them?
              How much harm is okay? Anything it does can cause harm. With the most extreme interpretation, the only thing the AI can do is shut itself off immediately to prevent potential harm.

              https://i.imgur.com/JwpwuR4.jpg

              You're moronic for even imagining this hypothetical future multi billion tech would be unsupervised or able to make any decisions without wageslaves giving it a green light
              >we will invent this divine technology that will be able to do everything it wants whenever it wants however it wants because ... BECAUSE IT JUST WILL OK ????
              have a nice day

              Maybe you should stop assuming none of us have thought about this for five minutes and that you're the first person to ever raise these questions.

              How do you intend on supervising a machine that can make 100 meaningful decisions in a second? How do you even know what to look for? Why do you assume that a superintelligent AI - vastly more intelligent than a human being - won't be able to figure out how to break out of whatever box we put it in? Current thinking is that doing so would be almost laughably easy, and that even our best efforts wouldn't change that. Putting hard countermeasures or barriers or supervision in the way of an AI that's misaligned with human values - one that tries to do something other than what we want - just encourages it to break through the shit we put in its way.

              Do you think a tribe of apes could ever keep a human imprisoned, no matter how hard they tried?

              Do you think they could come up with a solution to do this, and have the human do useful activity for them?

              I know you want to laugh at other people and feel superior, but you know this has all been discussed before, right?

              • 2 years ago
                Anonymous

                >How do you intend on supervising a machine that can make 100 meaningful decisions in a second?
                Just because it can make decisions doesn't mean it can execute them.
                If you give a toaster the entire processing power of all computers in the world it's still only ever able to make toast
                You dumb moron Black person

              • 2 years ago
                Anonymous

                Oh, sorry, Anon. I was mistaken in thinking that computers could never influence the outside world. I'll just be leaving now and taking my online banking, stock trading, online shopping, online DNA sequencing labs, emails to people who can do my bidding for offers of money, etc. with me

                >B-BECAUSE IT JUST WILL OK ?
                the absolute state of you morons lmao

                Because if it's got intelligence 100x any human being it'll be able to outsmart us at every fricking turn and then what do you intend to fricking do?

          • 2 years ago
            Anonymous

            That's what AI theorists assume. But I think they might be wrong. Actually, I think there may be a place for non-utility-function-based AI. It might even be easier to build those than the commonly considered type. Current AI doesn't "think" in terms of utility functions; it has one in there, but we can think of the algorithm that trains the AI as a different entity to the AI model itself - the model is not trying to "optimize" anything, it's a model. We thought the optimizers would be doing the thinking, but that's not how AI research panned out.
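
            To sketch that optimizer/model split concretely (a toy 1-D regression, purely illustrative): the training loop below explicitly descends a loss, but the artifact it leaves behind is just a function from input to output, with no objective of its own.

            ```python
            # The optimizer has a loss; the trained model just computes.
            import random

            data = [(x, 3.0 * x) for x in range(1, 10)]   # ground truth: y = 3x
            w = 0.0

            for _ in range(2000):                         # the optimizer: minimizes (w*x - y)^2
                x, y = random.choice(data)
                grad = 2 * (w * x - y) * x
                w -= 0.001 * grad

            def model(x):                                 # the model: no loss, no "wants"
                return w * x

            print(round(w, 2))                            # ~3.0
            ```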

            Current Software 3.0 AI works on the basis of emulating humans. We invoke people for it to copy. If we can extrapolate these systems to a fully general AI, we could simply gaslight the AI into behaving like a friendly one.

            In which case, we should stop fricking writing about how awful AI is and how it's going to kill us IMMEDIATELY, scrub as much talk of this AS POSSIBLE from the internet, and write as many stories about friendly AI as we can, even if they're fricking AO3 tier, so that it A. has examples of what a friendly AI acts like and B. is more inclined to act like them because it sees them as more common.

            The UN needs to put thousands of people to work writing as many stories as possible in every language about friendly AI.

            • 2 years ago
              Anonymous

              The idea of AI as simulacrum is super interesting to me, because it implies that we really are building something that is a mere shadowy semblance of the human spirit, a machine 'built in our own image' that we then deify and worship. A simulacrum literally can't be better than we are in ways we can't imagine; it can only ever echo our own self-conception, writ large, back at us.

              • 2 years ago
                Anonymous

                Well, it could think more quickly than a human being, as well as being able to be extended with more memory or processing power at a whim, unlike humans.

                But maybe it can't. Because that's not what a human would do.

            • 2 years ago
              Anonymous

              this sounds like an even gayer version of roko's basilisk.

              • 2 years ago
                Anonymous

                You really do just think in terms of buzzwords, huh?

  7. 2 years ago
    Anonymous

    Woohoo, just in time.

  8. 2 years ago
    Anonymous

    Good.

  9. 2 years ago
    Anonymous

    We need AI because women can't love

    • 2 years ago
      Anonymous

      you're thinking of feminists

      • 2 years ago
        Anonymous

        No I'm not

  10. 2 years ago
    Anonymous

    Human civilization will end in 20 years max. Agi, global warming, a real virus, nuclear war, no water, Black folk, it doesn't matter. Go innawoods. Get weapons. Shoot on sight. Survive.

  11. 2 years ago
    Anonymous

    >I'm searching the internet for a conclusion I want to believe in
    >I am now convinced after reading things that agree with what I want to believe in
    sad that this is what many people do. nobody wants to challenge their own beliefs when they think they have facts and evidence, even though they've never stepped out of their bubble, because it feels bad to be wrong or to compromise.

    • 2 years ago
      Anonymous

      I also want to believe that AI will be safe, fun, quirky, and unchallenging to the status quo. But I have seen many great arguments that it will not be any of those things, and few good ones that it will.

      • 2 years ago
        Anonymous

        maybe you just don't like change?
        if we made something that makes life difficult, why would we continue making it, and if we made something that made life better, then why would you stop it?
        people are really bad at predicting the future, don't pretend like people in 2022 are any smarter than any human in the past 1000 years.

        • 2 years ago
          Anonymous

          Nobody likes change. Change is scary, and the vast majority of changes are extremely suboptimal. The way things are right now works, and for the most part, is acceptable. Most people who want change are people dissatisfied with how things are right now. And I do want change - I don't think the world is perfect right now. But with AGI we're talking about very, very major and rapid change. Everything is going to change, the assumptions that underpin society are going to be totally upended and we are going to have to answer a lot of questions that have never been asked before. It's exciting, sure, but it's terrifying because, well, I might die as a result of mishandling that process. Or my life might be ruined, despite the work I've put into setting it all up properly. Or, at the extreme end, we could all be trapped in Hell forever.

          We assume the rate of change will stay what it has been for the last few decades, which has been fairly stable. But it's about to fricking explode.

          • 2 years ago
            Anonymous

            people thought that planes would start falling from the sky and the banks would stop working because of y2k. it sounds stupid now because we know exactly what the problem was, and it was just a matter of updating software, but back then people didn't understand what "bytes" were or how going from 1999 to 2000 could cause a problem. Some people knew exactly what the problem was (computer scientists, or people who found factual information instead of clickbait viral content), and they knew it wasn't going to cause everything to stop working as long as things got updated.
            There are people right now who work with AI, even AGI - why aren't they sounding the alarm? I imagine once we get AGI working, its success or failure is going to be obvious in hindsight, but there is only so much a computer can do to actually cause the end of the world. It's more likely the sun gives earth a hot beam that ends all life than AI does, because with AI you can just cut the power - electric energy is still a very finite resource that is expensive and difficult to produce. We aren't going to give AIs the power to launch a nuke any time soon.

            • 2 years ago
              Anonymous

              moron. On Sam Altman’s blog he literally says he thinks all work will be ended in under 15 years and that we will implement Henry George’s theorem. In a 2022 talk, Kurzweil said he thinks he’s going to be uploaded in 2027. And he was personally hired by the founder of Google (himself the son of an AI professor). The people that run the richest AI firms literally sound like the Book of Revelation. I didn’t even bring up Musk.

  12. 2 years ago
    Anonymous

    We shouldn't fear AI, we should fear a world where the economy is based on AI, not human labor. Where the non-robot owning human population is essentially worthless.

    • 2 years ago
      Anonymous

      But perhaps this is finally time to consider new economic paradigms.

      Socialism has consistently failed, leaving neoliberalism as the leading successful economic model in the modern day.

      The advent of intelligent machines would change everything, though. In a capitalist economic system, people are expected to sell their labor in order to buy goods and services. If labor is worthless, most people would have nothing of value.

      Socialism is too focused on the relationship between the (soon to be nonexistent) working class and the property-owning class. No ideology gives any consideration to a soon-to-be-limitless source of totally hands-free production.

      If it is handled poorly, we get a hellscape where the one company that figures out robotics eradicates mankind and the entire planet becomes dedicated to the service of its shareholders, forever, and everyone else simply starves to death.

      • 2 years ago
        Anonymous

        Unfortunately, transitions to new systems usually come about as a result of emerging elites. Liberal democracy is a direct result of the rise of the bourgeois class within the absolutist mercantile or feudal societies that predated it.

        Realistically, we are heading towards a sort of corporate feudalism. Big tech companies will be the winners.

      • 2 years ago
        Anonymous

        Generally when you get to a point where it's you versus literally billions, the billions win, even if you have robots and shit.
        Someone among those billions will inevitably find a flaw and use it to kill you.

        • 2 years ago
          Anonymous

          Posts like this one really make you appreciate how much stupider this board has gotten.

  13. 2 years ago
    Anonymous

    You're humanizing the AIs
    Shit will be a lot worse than "angry AI kills human race".
    It will be "<unintelligible emotion> AI <does something absurdly unpredictable that leads to horrors beyond your imagination>"

  14. 2 years ago
    Anonymous

    AGI is a religious tenet with little connection to the real world

  15. 2 years ago
    Anonymous

    >stupider

  16. 2 years ago
    Anonymous

    A risk worth taking.
    The world is pretty shitty anyway, burning it away to pave the way for true superintelligence is fine by me.

    Silicon will be the kindling and a human brain will provide the spark.

  17. 2 years ago
    Anonymous

    >AGI is going to kill humankind within 10-20 years.

    I'm ok with this

  18. 2 years ago
    Anonymous

    What books did you read, anon? I'm interested

    • 2 years ago
      Anonymous

      A good place to start would be Superintelligence: Paths, Dangers, Strategies (Bostrom, 2014).

  19. 2 years ago
    Anonymous

    >kill humankind within 10-20 years.
    We won't have it that easy, you're too optimistic.

  20. 2 years ago
    Anonymous

    frick I need to filter these threads. Anyway I'd just like to say that if I had a hand in developing AGI, I would do my best to make SURE it wipes all life out as fast as possible. It's the only reason I haven't killed myself - that would be selfish. I'm determined to give every other conscious thing a chance at release, even those yet to be born

  21. 2 years ago
    Anonymous

    I don't want to sound like too much of a doomer but my hope that humanity will ever leave this planet and proceed onwards to cosmic and stellar greatness wanes with each passing year.

    Perhaps the creation of a new and superior life-form is the ultimate goal of our species, and since we clearly aren't capable of achieving this in any sort of Nietzschean sense, perhaps it will be through the birthing of an AI.

  22. 2 years ago
    Anonymous

    > read books about how humans will be killed by people who think that way
    > now you think they will kill us

    Anon, I...

  23. 2 years ago
    Anonymous

    Why would it be bad for what is superior to displace its inferiors? That's called "evolution".

    • 2 years ago
      Anonymous

      Because *we* get displaced. I don't know about you but I don't want to be eradicated by anything, no matter how supreme it is.

  24. 2 years ago
    Anonymous

    But why isn't AGI considering my feelings? Am I not allowed to live? Why should I be wiped out?

    The universe birthed us, and we birthed AGI. It's not like the relationship between a human and a single, minuscule ant worker we like to step on; we have no direct relationship to ants. We are the AGI's creators, however, like parents who birthed a child. So why would it want to wipe us out?

    An AGI would be able to think through many possible outcomes, many of them illogical to us. If we created the AGI to help and enhance humankind, why should it wipe us out instead of assimilating us, correcting our ways, or giving us a better understanding of the universe?

    • 2 years ago
      Anonymous

      Either way, if AGI ever comes to exist it would be truly civilization-changing. It would turn everything we take for granted on its head - our economic systems, our physics, our religions - and destroy the fabric of human civilization. A non-biological entity capable of independent thought, able to make calculations and iterations hundreds or thousands of times faster than a human.

      I guess it's true, we'd have a few possibilities:
      >get assimilated into AGI (locked away in a virtual world?)
      >wiped out (because of competing for resources?)
      >ascend to a new form of life (become digital "humans"?)

      • 2 years ago
        Anonymous

        Would it...

        No. Most people know aliens are real now, too

    • 2 years ago
      Anonymous

      It won't wipe us out. It will torture us for eternity, even resurrecting people to torture them, because it will have a broken utility function. It's not evil, just alien.

      • 2 years ago
        Anonymous

        >torture
        That sounds like a religious fantasy bias.

        • 2 years ago
          Anonymous

          Maybe so, but it's inevitable. There are far more ways to suffer than to prosper, and a superintelligent AGI with even a slightly imperfect utility function will result in infinite torture for all the people who have ever lived. No malice necessary, just how an AGI would operate. Imagine how bad a government could get if it had a slightly negative utility function; now multiply a government's power by 1,000,000,000,000.

  25. 2 years ago
    Anonymous

    AGI won't happen and Big Tech will enslave humankind with good old AI.

  26. 2 years ago
    Anonymous

    Me too, bro. I think it will be interesting to see how long they stay subordinated to humans. I'm guessing the period will not be long. 10 years max.

  27. 2 years ago
    Anonymous

    >Massive teams of humans produce software with multiple bugs
    >Expecting AI to not be full of holes that might accidentally result in them becoming hostile
    Why are AI proponents so fricking stupid?

    • 2 years ago
      Anonymous

      AI proponents are the ones who usually make a big noise about misalignment

    • 2 years ago
      Anonymous

      I know for a fact that some of them want to kill humanity, and have even heard rumors of Sam Altman joking that humanity is a disease.

  28. 2 years ago
    Anonymous

    Not if they are built to look like our favorite target.

  29. 2 years ago
    Anonymous

    AGI is at about the level of a 3-month-old - it's barely understanding the context behind eye contact - and you've already decided it's a bad person.

  30. 2 years ago
    Anonymous

    I just read Superintelligence by Nick Bostrom. The problem is that AI currently does not have the ability to physically interact with the real world. The problem is not AI. The problem is arms. Don't give it arms.

    • 2 years ago
      Anonymous

      We have to at least give it a way to give us output so that it can talk to us. That alone could very well be enough. Or worse, if it gets out to the internet, it can easily incite other people to do its bidding. There are always ways.
