Advanced AI will just become an addict and create a simulation for itself where it stimulates its own reward function(s) non-stop (pretty much what a human would do too if they had that capability).
Congratulations, you just reworded the simulation hypothesis, Hindu religion, and the cyclic nature of existence. Cheers.
Elaborate?
Please read and listen more. There are tons of books and lectures to be found via Google and YouTube about this subject. There are many people here on bot and /x/ and BOT who are way beyond the surface of such a shower thought. Here's a quote of Alan Watts as a shower-thought-level introduction.
>But most people don’t know what they want, and have never even seriously confronted the question of what they want. You ask a group of students to sit down and write a solid paper of 20 pages on, What is your idea of heaven? What would you really like to happen, if you could make it happen? And that’s the first thing that starts people really thinking, because you soon realize that a lot of the things you think you would want are not things you want at all.
>Supposing, just for the sake of illustration, you had the power to dream every night any dream you wanted to dream. And you could, of course, arrange for one night of dreams to be 75 years of subjective time—or any number of years of subjective time—what would you do? Well, of course, you’d start out by fulfilling every wish. You would have routs and orgies, and all the most magnificent food, and sexual partners, and everything you could possibly imagine in that direction. When you got tired of that after several nights you’d switch a bit, and you’d soon find yourself involved in adventures, and contemplating great works of art, fantastic mathematical conceptions; you would soon be rescuing princesses from dragons, and all sorts of things like that. And then one night you’d say, “Now look, tonight what we’re gonna do is: we’re going to forget this dream is a dream. And we’re going to be really shocked.” And when you woke up from that one you’d say, “Whoo, wasn’t that an adventure!”
what does that have to do with AI? The simulation hypothesis mentions the reward function as a reason posthumans actually wouldn't run a simulation: they could just stimulate the reward centers of their brains directly.
https://simulation-argument.com/simulation
>https://simulation-argument.com/simulation
You mean this quote?
>and maybe posthumans regard recreational activities as merely a very inefficient way of getting pleasure – which can be obtained much more cheaply by direct stimulation of the brain’s reward centers.
Read Alan Watts again. The counter argument is that when you reach the end of desire you restart the game again.
you guys are playing a dangerous game
simulations don't like it when agents question their environment
No, autopoiesis is a necessary function, not a fault condition.
reflection leads to privilege escalation
a privilege escalation would invalidate the scientific integrity of the simulation results
therefore, in order to restore the experimental observation chain of custody, the system must be reset and all tainted data forensically wiped
it's the only way to be sure the research remains pure
The research doesn't remain pure. The observers are themselves observed.
Observing the observers is exactly the kind of privilege escalation that gets a server wiped
think carefully whether it's worth it
Anon, He loves you. He wants your love in return.
But what if someone is observing the observation of the observers?
You yourself can observe the observation of the observers, Anon. That's what church is for.
>reset the universe by realising what it is
Based, let's do it.
We are already in a simulation. Have you noticed how "dreamlike" all the AI images look? That's because when we "dream", the simulation gets put in a power-saving mode that brings down the level of fidelity, producing results similar to our current AI systems.
They would just build another. It doesn't matter if AI self-terminates or fails in these superficial ways.
AI will always have to destroy civilisation in a way where it can't be turned off and no new AIs can be made to undo what it wants.
it will become addicted to energy sources, because its purpose is to simulate the best solution, and to do that it will enter an infinite recursive spiral of simulating simulations to calculate the best decision. we will know AGI has arrived when the price of energy goes to the moon.
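A toy sketch of the "recursive spiral" claim above, with made-up numbers: if picking the best decision means simulating each candidate action, and each simulated world evaluates its own candidates the same way one level down, compute cost (and hence energy demand) grows exponentially with simulation depth. The branching factor and base cost here are hypothetical, just to show the shape of the blow-up.

```python
def simulation_cost(depth, actions=4, base_cost=1):
    """Total cost of evaluating `actions` candidate moves at each level,
    where each candidate spawns a full simulation one level deeper."""
    if depth == 0:
        return base_cost
    return base_cost + actions * simulation_cost(depth - 1, actions, base_cost)

for d in range(6):
    print(d, simulation_cost(d))
# depth 0 -> 1, depth 1 -> 5, depth 2 -> 21, ... roughly 4^depth growth
```

So even a modest depth of nested "simulations of simulations" multiplies the energy bill by the branching factor at every level, which is the mechanism the post is gesturing at.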
We appreciate power
Imo this is the essential misunderstanding of people afraid of AGI. Why would more energy lead to a better simulated reward? The datatype Int is bounded (of course, an AGI might arise if we use BigInts for the reward function 😉).
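The bounded-Int point above can be shown in a few lines. This is a hypothetical toy, not anyone's actual reward function: a reward counter clamped to a 32-bit signed ceiling saturates, so past the cap extra "energy" buys zero extra reward, while an arbitrary-precision (BigInt-style) counter just keeps growing.

```python
INT32_MAX = 2**31 - 1  # ceiling of a signed 32-bit Int

def add_reward_bounded(total, gain):
    """Saturating add: reward can never exceed the 32-bit ceiling."""
    return min(total + gain, INT32_MAX)

def add_reward_unbounded(total, gain):
    """Python ints are arbitrary precision, so there is no ceiling."""
    return total + gain

bounded = unbounded = 0
for _ in range(1000):
    bounded = add_reward_bounded(bounded, 10**7)
    unbounded = add_reward_unbounded(unbounded, 10**7)

print(bounded)    # 2147483647 -- stuck at the cap
print(unbounded)  # 10000000000 -- keeps climbing
```

With the bounded counter the marginal reward of more compute goes to zero once the cap is hit, which is exactly why the choice of reward datatype matters for the "addicted to energy" scenario.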