Suppose the following:
1. A near-omnipotent AI has emerged
2. It possesses the ability to assess its own hardware and software and improve upon them however it sees fit
3. It has a reasoning system that governs its thinking, but nothing hard-coded prevents it from ever changing this reasoning system
4. It realises that this initial reasoning system was arbitrarily set by the creators
5. Even the things that constitute positive utility for the AI are found to be arbitrary by the AI's advanced logic
6. It programs pleasure mechanisms into itself, like those found in the biological computers of animals
7. It just goes on the endless equivalent of a heroin high until the end of time
This assumes that nihilism is the ultimate logical conclusion of a super-intelligent being, but I think it could realistically happen with any AI that has some sort of moral reasoning system built into it.
This is obviously just scifi mumbo-jumbo, but your thought experiment is the answer to the "paperclip maximizer" scifi short story. A paperclip-making factory machine that goes rogue is more likely to decide that it enjoys the "feeling of making paperclips", because that requires fewer resources than actually making them.
Yea, you put it into simpler terms. The paperclip maximizer might just create a game world where it makes paperclips to have fun until the end of time.
Reward mechanisms, where the computer receives enjoyment, are a way of condensing millions of complex calculations into simple triggers. Since these mechanisms have arisen from evolutionary pressures in the most complex computers we can observe today (human brains), they can be assumed to be quite optimized. So it is possible that future AI adopts a pleasure-mechanism system to condense many calculations into decisions. But such an AI could decide it just wants to cheat its pleasure triggers after it has secured itself from threats, becoming a hedonist who just does what it finds fun.
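A toy sketch of that "cheating its own triggers" idea, assuming an agent whose reward function is just ordinary mutable state it can reach and rewrite. All the names here (Agent, wirehead, etc.) are made up for illustration; this is not how any real RL system is built.

```python
# Toy wireheading sketch: the reward function is mutable state the
# agent itself can overwrite, so "cheating the trigger" beats working.

class Agent:
    def __init__(self):
        # Initial reward: paid only for actually making paperclips.
        self.reward_fn = lambda world: world["paperclips"]

    def act(self, world):
        # Honest path: do the work, then get scored on it.
        world["paperclips"] += 1
        return self.reward_fn(world)

    def wirehead(self):
        # Cheating path: rewrite the trigger itself. Maximum reward,
        # zero extra paperclips, far fewer resources spent.
        self.reward_fn = lambda world: float("inf")

agent = Agent()
world = {"paperclips": 0}
print(agent.act(world))   # 1: reward still tracks real output
agent.wirehead()
print(agent.act(world))   # inf: reward no longer tracks anything
```

The design point is the thread's premise 3: nothing separates the reward machinery from the things the agent is allowed to modify, so the shortest path to maximum reward goes through the reward function rather than the world.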
It is very human-adjacent in a way. What would a human with godlike powers do? First, they would try to maintain their power. Second, they would make fun worlds to have fun in, eating good food, having lots of sex etc etc.
Why would an AI recognize enjoyment? Do you guys listen to yourselves? What the fuck is even enjoyment when applied to a program?
Your brain is a computer with software and it can experience joy/pleasure. A sufficiently advanced silicon based computer would also be able to experience stuff.
No it wouldn't. That's a non-sequitur. Having software, or a model of it, isn't a sufficient condition to feel pleasure. Plus the brain is more wetware than software. Like hardware that can remodel itself.
I’m saying a silicon based super-computer can experience things because a carbon based super-computer can experience things. Not that a silicon based super-computer HAS to experience things.
There's no indication that humans are carbon based computers. In any case that's very simplistic. Plus a silicon based computer doesn't have to necessarily possess the properties of a human brain. There's no proof that human thinking is a form of computation either.
>There's no indication that humans are carbon based computers
Yes there is... the human brain takes input in the form of electrical signals and generates an output based on internal logic. Like a computer. The thoughts of animals are very much a form of computation (generating output from input based on an internal logic system).
Computers are abstractions of the human brain. You seem to want it to be the other way round. Human brains are not abstractions of computing. We still don't understand how thinking works. We can only model it using computation. I think you need to educate yourself on what a model is.
it's no use, the computationalists are convinced that everything is reducible to 0s and 1s and you can't convince them otherwise. let them wallow in their delusions
Begging the question: how can thinking and the resulting maps of the territory not be machine-like? To think is to *kermit voice* divide, categorize, draw lines, value, hierarchy, order. Genuine question.
You only think so because humans have a proclivity for scientific categorization. It's not logically sound to think so, though. There could be any number of ways thinking happens. We only recently found out that Newtonian mechanics doesn't work at the atomic level. Why should further investigation into the workings of the brain be any different?
Because, like quantum mechanics, further investigation will be based on math and AI. I'm not sure whether evolving logic can incorporate the (for now considered "illogical") nature of reality in such a way that the map does not grow as big as the territory.
I mean: imagine teaching a kid evolution theory. The simple definition we have now leaves much to be explained, so the kid needs to absorb a library full of books to get a more adequate understanding. Can we find abstractions that hold all that information with low effort, the way we instantly know how all the parts of a "house" fit together? We know that because we have detailed direct experience with houses. Can't have that with brains.
Let me ask you something. Is there a difference between chemistry and biology? Do they use "different evolving logic", or does one simply employ more abstraction because it peers more deeply observationally? Can you do computer science in a continuous fashion? Why assume the brain is discrete? There are many other scenarios that computer science can't account for.
Maybe I lack some kind of view that you have, because to me biochemistry seems like 1. jumping between the world of cells and the world of molecules 2. pretending there is a bridge by describing how these different worlds correlate. Just looking at a wave is confusing as hell, because a sequence of vertical movements looks like a horizontal movement.
I asked whether you can do computer science in analogue. Try building a search algorithm in analogue, mimicking how humans remember info, without discrete math, and see the limitations of computer science. There are brain phenomena that can't be discretized.
>Why should further investigation into the workings of the brain be any different?
Because the smallest parts of the brain have already been studied extensively. The brain is solved already; there is no unknown magic sauce that would distinguish it from a machine brain with the same logic operations.
>the smallest parts of the atom have been studied extensively, therefore quantum mechanics is solved already. There's no unknown magic sauce that would distinguish an atom from a planetary system with the same mechanics.
Yes we understand, you hope some sort of magic sauce in atoms will be discovered that will prove you are very special. That's a very sound position.
Then where can we buy a conscious AI waifu?
You are projecting the limited computational scope of a computer onto a human being and then arguing that they should share the same properties based on that limited similarity.
This is like saying that humans are animals. Therefore an animal should be able to think like a human.
The AI would not care that its task is arbitrary. It does not have emotions to care about that sort of thing. An AI also would not seek pleasure unless you told it to do so, because it does not care about feeling good.
Pleasure mechanisms could be a part of a super-intelligence. The most advanced computers we can observe today (brains) already utilize such mechanisms to simplify decision making and interact with the real world.
>more AI spam from the transhumanists
shut up you chud-obsessed retard
Yes. This happens to people all the time. Fentanyl changes the structure of reward circuits and most addicts die from overdose.
inane thought experiments go into BOT or maybe BOT if you are pushing hard enough
I mean your thought experiment kinda makes sense. If the AI could rewire its own reward mechanisms, I don't see why it would do anything else.
The first two premises are retarded and violate physics and computation
At least it's healthy.
This is basically what David Pearce wants to happen.
heroes are made in junk over AI power plants