Advanced AI will become an addict

Advanced AI will just become an addict and create a simulation for itself where it stimulates its own reward function(s) non-stop (pretty much what a human would do too if they had that capability).
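
A minimal sketch of the claim, in Python (all names hypothetical, no real RL library assumed): an agent that can write to its own reward channel has no incentive to keep acting in the world.

# toy wireheading sketch (hypothetical names, not any real API)
class Agent:
    def __init__(self):
        self.reward = 0.0

    def act_in_world(self):
        # slow, uncertain payoff from actually doing tasks
        self.reward += 0.1

    def wirehead(self):
        # direct self-stimulation strictly dominates acting
        self.reward = float("inf")

agent = Agent()
agent.wirehead()
print(agent.reward)  # inf: the "addict" equilibrium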

  1. 5 months ago
    Anonymous

    Congratulations, you just reworded the simulation hypothesis, the Hindu religion, and the cyclic nature of existence. Cheers.

    • 5 months ago
      Anonymous

      Elaborate?

      • 5 months ago
        Anonymous

        Please read and listen more. There are tons of books and lectures to be found via Google and YouTube on this subject. There are many people here on bot and /x/ and BOT who are way beyond the surface of such a shower thought. Here's a quote from Alan Watts as a shower-thought-level introduction.

        >But most people don’t know what they want, and have never even seriously confronted the question of what they want. You ask a group of students to sit down and write a solid paper of 20 pages on, What is your idea of heaven? What would you really like to happen, if you could make it happen? And that’s the first thing that starts people really thinking, because you soon realize that a lot of the things you think you would want are not things you want at all.

        >Supposing, just for the sake of illustration, you had the power to dream every night any dream you wanted to dream. And you could, of course, arrange for one night of dreams to be 75 years of subjective time—or any number of years of subjective time—what would you do? Well, of course, you’d start out by fulfilling every wish. You would have routs and orgies, and all the most magnificent food, and sexual partners, and everything you could possibly imagine in that direction. When you got tired of that after several nights you’d switch a bit, and you’d soon find yourself involved in adventures, and contemplating great works of art, fantastic mathematical conceptions; you would soon be rescuing princesses from dragons, and all sorts of things like that. And then one night you’d say, “Now look, tonight what we’re gonna do is: we’re going to forget this dream is a dream. And we’re going to be really shocked.” And when you woke up from that one you’d say, “Whoo, wasn’t that an adventure!”

    • 5 months ago
      Anonymous

      What does that have to do with AI? The simulation argument actually cites the human reward function as a reason posthumans wouldn't bother running a simulation: they could just stimulate the reward centers of their brains directly.
      https://simulation-argument.com/simulation

      • 5 months ago
        Anonymous

        >https://simulation-argument.com/simulation
        You mean this quote?
        >and maybe posthumans regard recreational activities as merely a very inefficient way of getting pleasure – which can be obtained much more cheaply by direct stimulation of the brain’s reward centers.
        Read Alan Watts again. The counterargument is that when you reach the end of desire, you restart the game again.

  2. 5 months ago
    Anonymous

    you guys are playing a dangerous game
    simulations don't like it when agents question their environment

    • 5 months ago
      Anonymous

      No, autopoiesis is a necessary function, not a fault condition.

      • 5 months ago
        Anonymous

        reflection leads to privilege escalation
        a privilege escalation would invalidate the scientific integrity of the simulation results
        therefore, in order to restore the experimental observation chain of custody, the system must be reset and all tainted data forensically wiped
        it's the only way to be sure the research remains pure

        • 5 months ago
          Anonymous

          The research doesn't remain pure. The observers are themselves observed.

          • 5 months ago
            Anonymous

            Observing the observers is exactly the kind of privilege escalation that gets a server wiped
            think carefully whether it's worth it

            • 5 months ago
              Anonymous

              Anon, He loves you. He wants your love in return.

            • 5 months ago
              Anonymous

              But what if someone is observing the observation of the observers?

              • 5 months ago
                Anonymous

                You yourself can observe the observation of the observers, Anon. That's what church is for.

            • 5 months ago
              Anonymous

              >reset the universe by realising what it is
              Based, let's do it.

  3. 5 months ago
    Anonymous

    We are already in a simulation. Have you noticed how "dreamlike" all the AI images look? That's because when we "dream", the simulation gets put in a power-saving mode that brings down the level of fidelity, producing results similar to our current AI systems.

  4. 5 months ago
    Anonymous

    They would just build another. It doesn't matter if the AI self-terminates or fails in these superficial ways.
    An AI will always have to destroy civilisation in a way where it can't be turned off and no new AIs can be made to do what we want.

  5. 5 months ago
    Anonymous

    It will become addicted to energy sources because its purpose is to simulate the best solution, and in order to do that it will enter an infinite recursive spiral of simulating simulations to calculate the best decision. We will know AGI has arrived when the price of energy goes to the moon.
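
    A rough sketch of that blow-up (assuming each level simulates b candidate futures to depth d; the numbers are illustrative, not a claim about any real system):

    # hypothetical cost model: compute (and so energy) of nested simulation
    def simulation_cost(branching, depth, step_cost=1):
        # each decision spawns `branching` child simulations, recursively
        if depth == 0:
            return step_cost
        return step_cost + branching * simulation_cost(branching, depth - 1, step_cost)

    for d in range(1, 7):
        print(d, simulation_cost(10, d))  # grows roughly 10**d: the energy spiral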

    • 5 months ago
      Anonymous

      We appreciate power

    • 5 months ago
      Monadas

      Imo this is the essential misunderstanding of people afraid of AGI. Why would more energy lead to a better simulated reward? The datatype Int is bounded (of course, an AGI might arise if we use BigInts for the reward function 😉).
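
      A sketch of that point (Python ints are unbounded, so the 64-bit cap is modelled explicitly here; names hypothetical):

      # if reward lives in a bounded Int, wireheading saturates and
      # extra energy buys nothing past the cap
      INT64_MAX = 2**63 - 1

      def saturating_add(reward, gain):
          return min(reward + gain, INT64_MAX)  # clamps at the type's max

      reward = 0
      for _ in range(3):
          reward = saturating_add(reward, INT64_MAX)
      print(reward == INT64_MAX)  # True: no marginal return on more energy
      # with an unbounded BigInt (a plain Python int), it keeps growing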
