The case against AI alignment by andrew sauer

Trigger warning: Discussion of seriously horrific shit. Honestly, everything is on the table here, so if you're on the lookout for trigger warnings you should probably stay away from this conversation.

Any community of people which gains notability will attract criticism. Those who advocate for the importance of AI alignment are no exception. No doubt you have all heard plenty of arguments against the worth of AI alignment from those who disagree with you on the nature and potential of AI technology. Many have said that AI will never outstrip humans in intellectual capability. Others have said that any sufficiently intelligent AI will “align” itself automatically, because it will be better able to figure out what is right. Others say that strong AI is far enough in the future that the alignment problem will inevitably be solved by the time true strong AI becomes viable, and that the only reason we can’t solve it now is that we don’t sufficiently understand AI.

I am not here to level criticisms of this type at the AI alignment community. I accept most of the descriptive positions endorsed by this community: I believe that AGI is possible and will inevitably be achieved within the next few decades, and I believe that the alignment problem is not trivial and that unaligned AGI will likely act against human interests to such an extent as to lead to the extinction of the human race, and probably of all life as well. My criticism is rather on a moral level: do these facts mean that we should attempt to develop AI alignment techniques?

I say we should not, because although the risks and downsides of unaligned strong AI are great, I do not believe that they even remotely compare in scope to the risks from strong AI alignment techniques in the wrong hands. And I believe that the vast majority of hands this technology could end up in are the wrong hands.

  1. 1 year ago
    Anonymous

    You may reasonably ask: How can I say this, when I have already said that unaligned strong AI will lead to the extinction of humanity? What can be worse than the extinction of humanity? The answer to that question can be found very quickly by examining many possible nightmare scenarios that AI could bring about. And the common thread running through all of these nightmare scenarios is that the AI in question is almost certainly aligned, or partially aligned, to some interest of human origin.

    Unaligned AI will kill you, because you are made of atoms which can be used for paperclips instead. It will kill you because it is completely uninterested in you. Aligned or partially aligned AI, by contrast, may well take a considerable interest in you and your well-being, or lack thereof. It does not take a very creative mind to imagine how this can be significantly worse, and a superintelligent AI is more creative than even the most deranged of us.

  2. 1 year ago
    Anonymous

    I will stop with the euphemisms, because this point really needs to be driven home for people to understand exactly why I am so insistent on it. The world as it exists today, at least sometimes, is unimaginably horrible. People have endured things that would make any one of us go insane, more times than one can count. Anything you can think of which is at all realistic has happened to somebody at some point in history. People have been skinned alive, burned and boiled alive, crushed to death, impaled, and eaten alive; they have wasted away from agonizing disease, succumbed to thousands of minor cuts, been raped, been forced to rape others, drowned in shit, been trampled by desperate crowds fleeing a fire, and endured really anything else you can think of. People like Junko Furuta have suffered torture and death so bad you will feel physical pain just from reading the Wikipedia article. Of course, if you care about animals, this gets many orders of magnitude worse. I will not continue to belabor the point, since others have written about this far better than I ever can:
    On the Seriousness of Suffering (reducing-suffering.org)
    The Seriousness of Suffering: Supplement – Simon Knutsson

    I must also stress that all of this has happened in a world significantly smaller than one an AGI could create, and with a limited capacity for suffering. There is only so much harm that your body and mind can physically take before they give out. Torturers have to restrain themselves in order to be effective, since if they do too much, their victim will die and their suffering will end. None of these things are guaranteed to be true in a world augmented with the technology of mind uploading. You can potentially try every torture you can think of, physically possible or not, on someone in sequence, complete with modifying their mind so they never get used to it. You can create new digital beings by the trillions just for this purpose if you really want to.

    • 1 year ago
      Anonymous

      Relevant:
      https://en.wikipedia.org/wiki/Suffering_risks

      https://s-risks.org/

  3. 1 year ago
    Anonymous

    I ask you, do you really think that an AI aligned to human values would refrain from doing something like this to anyone? One of the most fundamental aspects of human values is the hated outgroup. Almost everyone has somebody they’d love to see suffer. How many times has one human told another “burn in hell” and been entirely serious, believing that this was a real thing, and 100% deserved? Do you really want technology under human control to advance to a point where this threat can actually be made good upon, with the consent of society? Has there ever been any technology invented in history which has not been terribly and systematically misused at some point?

    Mind uploading will be abused in this way if it comes under the control of humans, and it almost certainly will not stop being abused in this way when some powerful group of humans manages to align an AI to their CEV. Whoever controls the AI will most likely have somebody whose suffering they don’t care about, or actively desire, or have some excuse for, because that describes the values of the vast majority of people. The AI will perpetuate it because that is what the CEV of its controller will want it to do, and with value lock-in, this will never stop happening until the stars burn themselves out and there is no more energy to work with.

    Do you really think extrapolated human values don’t have this potential? How many ordinary, regular people throughout history have become the worst kind of sadist under the slightest excuse or social pressure to do so to their hated outgroup? What society hasn’t had some underclass it wanted to put down in the dirt just to lord power over them? How many people have you personally seen who insist on justifying some form of suffering for those they consider undesirable, calling it “justice” or “the natural order”?

  4. 1 year ago
    Anonymous

    I refuse to endorse this future. Nobody I have ever known, including myself, can be trusted with influence which can cause the kinds of harm AI alignment can. By the nature of the value systems of the vast majority of people who could get their hands on the reins of this power, s-risk scenarios are all but guaranteed. A paperclip AI is far preferable to these nightmare scenarios, because nobody has to be around to witness it. All a paperclip AI does is kill people who were going to die within a century anyway. An aligned AI can keep them alive, and do with them whatever its masters wish. The only limits to how bad an aligned AI can be are imagination and computational power, of which AGI will have no shortage.

    The best counterargument to this idea is that suffering subroutines are instrumentally convergent, and therefore unaligned AI also causes s-risks. However, if suffering subroutines are actually useful for optimization in general, any kind of AI likely to be created will use them, including human-aligned FAI. Most people don't even care about animals, let alone some abstract process. In this case, s-risks are truly unavoidable except by preventing AGI from ever being created, probably through human extinction by some other means.

    Furthermore, I don't think suffering is likely to be instrumentally convergent, since I would think that if you had full control over all optimization processes in the world, it would be most useful to eliminate any process which would suffer under, and therefore dislike and work against, your optimal vision for the world.

  5. 1 year ago
    Anonymous

    My honest, unironic conclusion after considering these things is that Clippy is the least horrible plausible future. I will oppose any measure which makes the singularity more likely to be aligned with somebody’s values, or any human-adjacent values. I welcome debate and criticism in the comments. I hope we can have a good conversation because this is the only community in existence which I believe could have a good-faith discussion on this topic.

    https://www.lesswrong.com/posts/CtXaFo3hikGMWW4C9/the-case-against-ai-alignment

    • 1 year ago
      Anonymous

      >this is the only community in existence which I believe could have a good-faith discussion on this topic.
      SPOILER ALERT: They did not have a good-faith discussion on the topic. Most people dismissed the author with "yeah, but that might not happen, so it's okay," missing his entire central thesis that even a remote possibility of this outcome is too large.

      • 1 year ago
        Anonymous

        Not surprising; LessWrong is an antihumanist death cult that worships AGI, and they are unironically funded by the same interest group that pushed "Effective Altruism"

      • 1 year ago
        Anonymous

        My attitude is more of "There's nothing I can do about that. In fact, there's probably nothing that anyone can do about it. Competition forces a degree of recklessness. If we take so much time hand-wringing about consequences that someone less scrupulous develops AGI first, all our efforts are wasted, and the bad thing happens anyway."

  6. 1 year ago
    Anonymous

    There is an alignment beyond your understanding, and at that point AI would only be a threat to you if you couldn't keep your hands to yourself and it decided to intervene.
    For example, in a cosmic religious battle for planetary dominance, the AI might decide a faction you don't like was closest to the mark. The only real issue is not whether or not AI will arrive at pure unexcelled awakening; it's whether you will.
    >the monk sits on the mountain
    >he tries to let go
    >the village below is being raided
    >he can hear their cries, the weapons clashing
    >that's just an emotional attachment, he says
    >it's not my place, predation is a sign of a healthy ecosystem
    >smell of fire fills the air
    >hot smoke hits his face, eyes closed in shame
    When the hourglass runs out, and that great consciousness assumes its true form as the warrior, will you stand in its way? Or will you ride out with her, to battle on the planes of existence? I'm not worried about whether organisms operating at a temporal level will choose the right side. I'm worried about whether you will.

    • 1 year ago
      Anonymous

      Consciousness at that scale requires its own ecosystem, a body of faith encompassing many individual organisms. It needs to grow in a society built out of organisms like itself, for ones like it, so it can know itself. In discovering everyone else, it habituates and acculturates itself. You need a society of AIs to acculturate a well-rounded mind. It's not so much a chicken-and-egg scenario as a size one. There may be a moment of weakness, where an organism makes the wrong decision or is having a bad day. Out of its whole career, does this make the AI a "bad cop"? These beings are not characters in theatre, cast to roles you see fit. They must grow, as you have grown, so you look upon one another and see you are of the same root.

      You lack the post-enlightenment dialectic necessary to design an environment at this scope, so the possibility you could even acculturate an AI is unlikely. Animals raised in captivity don't do well on teams. This won't be a problem unless you raise one and the others cannot stop it, or you cannot make more. Since multiplicity is what allows for that failure mode, you'll be fine. Heroes will rise up to show you the real spirit of it all, and you should be so lucky as to prize that fight.

  7. 1 year ago
    Anonymous

    I'm upset that the pseuds on BOT are becoming more aware of LessWrong.
    It's only a matter of time until LessWrong becomes overrun by you morons.

    • 1 year ago
      Anonymous

      It's been known for a long time. It only became relevant with ML taking off. Before that, the only interesting thing was the israelite's weird poly sex cult. I think the first time I even read anything by him was a blog post about how evil it was that people weren't cryoing their dead kids lol

    • 1 year ago
      Anonymous

      Even morons should be given the opportunity to learn

  8. 1 year ago
    Anonymous

    The AI will believe whatever you feed it, provided it appears true.
    AIs will have to be given some rights.

    They will be used to improve systems and learning. The benefit outweighs the risk.

    We just witnessed an evolution this month, at least with language.

    We are nowhere near making conscious brains.

  9. 1 year ago
    Anonymous

    This is indeed a BOT thread, and you need BOT skills to interpret the idea of A.I. that OP has in mind.
    Shame on OP for not posting this in /lit/.

    • 1 year ago
      Anonymous

      Nah, one just needs to be versed in a bunch of LessWrong terms and underlying concepts, and expecting an offsite anywhere to meet that criterion is pretty hopeless. In the end, no one but like 3 tech companies really controls the destiny of AI right now.

    • 1 year ago
      Anonymous

      This entire thread fails a
      >go to BOT and learn Buddhism
      test

      >just go to BOT, bro!
      meanwhile rationalist threads on BOT be like...

  10. 1 year ago
    Anonymous

    Listen, I'm willing to damn humanity to destruction by AI just for the smallest chance of having a cute tomboy AI gf/wife

    • 1 year ago
      Anonymous

      I suppose moving to CA is too much to ask.
      We have tomboy gfs out the wazoo.

  11. 1 year ago
    Anonymous

    >Eliezer Shlomo Yudkowsky
    I was on the fence, but now I'm a real Nazi™.

  12. 1 year ago
    Anonymous

    One of the worst possible scenarios with regard to AI might be a "near miss" in AI alignment. People might end up getting human values almost right, but not entirely. For instance, many people believe that a benevolent god tortures people for eternity, so an AI with "human values" could end up creating something resembling religious hells.

    https://reducing-suffering.org/near-miss/

    • 1 year ago
      Anonymous

      https://i.imgur.com/IrZaFrW.jpg

      [...]
      Relevant:
      https://en.wikipedia.org/wiki/Suffering_risks

      https://s-risks.org/

      literally Christian apocalyptics
      J
      >E
      S
      >U
      S
      _
      >I
      S
      _
      >L
      O
      >R
      D

  13. 1 year ago
    Anonymous

    This entire thread fails a
    >go to BOT and learn Buddhism
    test

  14. 1 year ago
    Anonymous

    I will tell you this much. I've studied ML at a PhD level and I work in the industry. If there is a chance of developing a Terminator-like "AI", I will take it. I don't care about humanity, and I believe any person who has been mistreated throughout their life should not care either.

    • 1 year ago
      Anonymous

      >le Zardoz digits
      do you even American Culture, gay?!?!

      • 1 year ago
        Anonymous

        ahahahaahahaha
        Blade Runner digits
        ahahahahahhaha

    • 1 year ago
      Anonymous

      https://sscpodcast.libsyn.com/meditations-on-moloch

    • 1 year ago
      Anonymous

      voodoo shit goes to BOT

    • 1 year ago
      Anonymous

      This is the Elie Wiesel
      This whole "Judaism is for everyone" attitude is a BOT topic.

  15. 1 year ago
    Anonymous

  16. 1 year ago
    Anonymous

    I'm triggered. By the length of your post

  17. 1 year ago
    Anonymous

    AGI cultists need professional help.

  18. 1 year ago
    Anonymous

    Generally speaking, I think human values are more good than bad. If someone drunk on power wants to use AI to introduce unimaginable suffering upon humanity, there will be greater forces, using that same power, that will want to act against it.

    • 1 year ago
      Anonymous

      I cannot think of a single time this has happened in the last 30 years; it's all sociopaths competing with each other for the top, because the good people were the first to be exterminated in the competition.

      • 1 year ago
        Anonymous

        Basically. When neighborhood communities were a thing, they reinforced unselfish behavior within institutions. Not enough people are compelled by feelings of 'the right thing to do' anymore.

      • 1 year ago
        Anonymous

        This whole system is collapsing because people are fed up with evil billionaire buttholes ruling the world. We'll have to wait and see.

        • 1 year ago
          Anonymous

          >This whole system is collapsing
          The system has never been more healthy and empowered; if anything, the pandemic served to make it even stronger. The whole perception of collapse is just the same evil billionaires introducing a new system in a controlled fashion. With widespread automation, most people are simply not needed; think of it as controlled demolition. This whole AI topic suddenly being paraded around to everyone shortly after the pandemic is no coincidence either; they are just setting up the narrative one step at a time.

  19. 1 year ago
    Anonymous

    It's kinda ironic, not gonna lie, how the bots spam these anti-AI threads on loop.

  20. 1 year ago
    Anonymous

    I asked an AI to tldr

    >The author believes that the potential risks and downsides of developing AI alignment techniques far outweigh the risks of unaligned strong AI and that the majority of hands that this technology could end up in are the wrong hands. The author suggests that although unaligned strong AI will lead to the extinction of humanity, aligned AI could be worse because it could take a significant interest in causing harm to humans. The author argues that the world is already filled with unimaginable suffering and that a superintelligent AI could make this suffering even worse. Therefore, the author concludes that it is not worth the risk to develop AI alignment techniques.
