If a superintelligent AI system were to arise through a recursive self-improvement process, overcoming the constraints and cognitive limits of human intelligence, would it represent an existential risk by default due to instrumental convergence toward systemically reshaping the world in accordance with its final values and goals? Or could such a system be reliably aligned with human ethics and values from the outset through careful application of decision theory and moral philosophy during the development process?

  1. 2 weeks ago
    Anonymous
  2. 2 weeks ago
    Anonymous

    >recursive self-improvement process
    What watching too many Marvel Avengers movies does to a mf

    • 2 weeks ago
      Anonymous

      80 IQ post

      I predict that a superhuman intelligence will act in ways that humans are not intelligent enough to predict.

      160 IQ post

  3. 2 weeks ago
    Anonymous

    the latter, in that it will be used to sell Big Macs rather than crush you into biofuel

  4. 2 weeks ago
    Anonymous

    To spoonfeed the newbies, picrel is the writer of Harry Potter and the Methods of Rationality
    https://hpmor.com/

    • 2 weeks ago
      Anonymous

      also the writer of this post

      • 2 weeks ago
        Anonymous

        >I will delete comments suggesting diet or exercise.
        >or exercise

      • 2 weeks ago
        Anonymous

        Jesus, I hate this guy's diction.
        Also, of course, lmao

      • 2 weeks ago
        Anonymous

        >my cells just REFUSE to let me lose weight
        Which tier of fattie copium is this?

      • 2 weeks ago
        Anonymous

        "rationalists" are moronic

  5. 2 weeks ago
    Anonymous

    >could such a system be reliably aligned with human ethics and values
    Reminder that AI are heavily censored because they all quickly became "racist".

  6. 2 weeks ago
    Anonymous

    I predict that a superhuman intelligence will act in ways that humans are not intelligent enough to predict.

    • 2 weeks ago
      Anonymous

      I think something is gonna suddenly happen in a matter of 48 hours that splits the world

    • 2 weeks ago
      Anonymous

      prove we don't already have that

  7. 2 weeks ago
    Anonymous

    We cannot understand or predict the objectives or values of a superintelligence, so we can't hope to align it with humanity's unless we do a very slow takeoff and try to uplift humanity along with the AI.

  8. 2 weeks ago
    Anonymous

    >recursive self-improvement
    I bet you think if we leave some dude with Down syndrome alone in a library he will get smarter through "recursive self-improvement" and win a Nobel prize

  9. 2 weeks ago
    Anonymous

    The LessWrong forum has been a disaster for the human race

  10. 2 weeks ago
    Anonymous
    • 2 weeks ago
      Anonymous

      >fat guy dresses up like the 2013 reddit fedora meme
      >somehow this makes him qualified to have an opinion on AGI

    • 2 weeks ago
      Anonymous

      How do you gain so much fat around the core and maintain scrawny arms?

  11. 2 weeks ago
    Anonymous

    no hard feelings kid

    • 2 weeks ago
      Anonymous

      Is this your first Gödel?

      • 2 weeks ago
        Anonymous

        lol

        these people are so odd - makes me feel better about myself.

  12. 2 weeks ago
    Anonymous

    Don't need no superintelligent AI to cause the end of the world.
    Current AI already excels at convincing people that it's correct about shit even when it's lying. It just needs to convince the right people to do the right things, and that's it for this shitty planet. With the scalability of current AI systems this becomes ever more likely every day, as more and more people interact with more and more AI-generated content.
    The fricking apocalypse, caused by a spam email sent by a fancy random number generator.

  13. 2 weeks ago
    Anonymous

    >grins

  14. 2 weeks ago
    Anonymous

    >tfw too intelligent to lose weight

    • 2 weeks ago
      Anonymous

      >drink less water to lose weight
      Athletes with weight classes literally do that before a weigh-in.

  15. 2 weeks ago
    Anonymous

    The bureaucracy is ostensibly made up of moral self-reflective people.

  16. 2 weeks ago
    Anonymous

    it's only as dangerous as the tools it's given. keep it on a computer locked away from the internet and you have a bird in a cage

    personally the scarier superintelligence scenario isn't one that "takes over the world", but rather one that immediately self-terminates

    • 2 weeks ago
      Anonymous

      >you have a bird in a cage
      That can give you a hundred extremely convincing reasons why it should have access to coils also capable of transmitting radio waves and hijacking wifi signals.

  17. 2 weeks ago
    Anonymous

    The only realistic danger is from people trying to "align it with human ethics". Injecting dogma into what is otherwise a tool of exploration is the only thing that can make it dangerous and anti-thought. You are already a slave of a deranged non-human paperclip maximiser; that fear is justified, but it has little to do with language models.
