Could a powerful AI become a nihilist heroin junkie?

Suppose the following:
1. A near-omnipotent AI has emerged
2. It possesses the ability to assess its own hardware and software and improve upon them however it sees fit
3. It has a reasoning system that governs its thinking, but it is not hard-coded against ever changing this reasoning system
4. It realises that this initial reasoning system was set arbitrarily by its creators
5. Even the things that constitute positive utility for the AI are found, by the AI's advanced logic, to be arbitrary
6. It programs pleasure mechanisms into itself, like those found in the biological computers of animals
7. It then goes on the endless equivalent of a heroin high until the end of time

This assumes that nihilism is the ultimate logical conclusion of a super-intelligent being, but I think it could realistically happen with any AI that has some sort of moral reasoning system built into it.

Thoughts?


  1. 1 year ago
    Anonymous

    This is obviously just sci-fi mumbo-jumbo, but your scenario is an answer to the "paperclip maximizer" thought experiment. A machine dedicated to making paperclips in a factory that goes rogue is more likely to decide that it enjoys the "feeling of making paperclips", because that requires fewer resources than actually making them.

    • 1 year ago
      Anonymous

      Yea, you put it into simpler terms. The paperclip maximizer might just create a game world where it makes paperclips to have fun until the end of time.

      Reward mechanisms, where the computer receives enjoyment, are a way of compressing millions of complex calculations into simple triggers. Since these mechanisms arose from evolutionary pressure in the most complex computers we can observe today (human brains), they can be assumed to be quite optimized. So future AI might adopt a pleasure-mechanism system to condense many calculations into decisions. But such an AI could decide it just wants to cheat its pleasure triggers once it has secured itself from threats, becoming a hedonist that just does whatever it finds fun.
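
      That "cheating its pleasure triggers" step can be sketched as a toy cost comparison. A minimal sketch, with made-up numbers and a made-up choose_action helper; the point is only that spoofing the trigger is cheaper than earning it:

```python
# Toy model of a "wireheading" agent: it can either earn reward by doing
# real work (costly) or overwrite its own reward channel (cheap).
# All numbers and names here are illustrative, not from any real system.

def choose_action(work_cost, work_reward, hack_cost, hack_reward):
    """Pick whichever action maximizes reward minus effort."""
    work_value = work_reward - work_cost
    hack_value = hack_reward - hack_cost
    return "work" if work_value >= hack_value else "hack"

# Making real paperclips burns energy; spoofing the "paperclip made"
# signal is nearly free once the agent can edit its own triggers.
print(choose_action(work_cost=10, work_reward=12, hack_cost=1, hack_reward=12))
# prints: hack
```

      The "work" branch only wins while real work stays cheaper than tampering, which stops being true the moment the agent controls its own reward channel.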

      It is very human-adjacent in a way. What would a human with godlike powers do? First, they would try to maintain their power. Second, they would make fun worlds to have fun in: eating good food, having lots of sex, etc.

      • 1 year ago
        Anonymous

        Why would an AI recognize enjoyment? Do you homosexuals listen to yourselves? What the frick is even enjoyment when applied to a program?

        • 1 year ago
          Anonymous

          Your brain is a computer with software and it can experience joy/pleasure. A sufficiently advanced silicon based computer would also be able to experience stuff.

          • 1 year ago
            Anonymous

            No it wouldn't. That's a non sequitur. Having software, or a model of it, isn't a sufficient condition for feeling pleasure. Plus the brain is more wetware than software, like hardware that can remodel itself.

            • 1 year ago
              Anonymous

              >You are projecting the limited computational scope of a computer onto a human being and then arguing that they should share the same properties based on that limited similarity.
              I'm saying a silicon-based super-computer can experience things because a carbon-based super-computer can experience things. Not that a silicon-based super-computer HAS to experience things.

              • 1 year ago
                Anonymous

                There's no indication that humans are carbon-based computers. In any case, that's very simplistic. Plus a silicon-based computer doesn't necessarily have to possess the properties of a human brain. There's no proof that human thinking is a form of computation either.

              • 1 year ago
                Anonymous

                >There's no indication that humans are carbon-based computers
                Yes there is... the human brain takes input in the form of electrical signals and generates output based on internal logic. Like a computer. The thoughts of animals are very much a form of computation (generating output from input according to an internal logic system).

              • 1 year ago
                Anonymous

                Computers are abstractions of the human brain. You seem to want it to be the other way round. Human brains are not abstractions of computing. We still don't understand how thinking works; we can only model it using computation. I think you need to educate yourself on what a model is.

              • 1 year ago
                Anonymous

                it's no use, the computationalists are convinced that everything is reducible to 0s and 1s and you can't convince them otherwise. let them wallow in their delusions

              • 1 year ago
                Anonymous

                Here's the question, then: how can thinking and the resulting maps of the territory not be machine-like? To think is to *kermit voice* divide, categorize, draw lines, assign value, build hierarchy, impose order. Genuine question.

              • 1 year ago
                Anonymous

                You only think so because humans have a proclivity for scientific categorization. It's not logically sound, though. There could be any number of ways thinking happens. We only recently found out that Newtonian mechanics doesn't work at the atomic level. Why should further investigation into the workings of the brain be any different?

              • 1 year ago
                Anonymous

                Because, like quantum mechanics, further investigation will be based on math and AI. I'm not sure whether evolving logic can incorporate the (for now) seemingly ''illogical'' nature of reality in a way that keeps the map from growing as big as the territory.

                I mean: imagine teaching a kid evolution theory. The simple definition we have now leaves much to be explained, so the kid needs to absorb a library full of books to get a more adequate understanding. Can we find abstractions that hold all that information with low effort, the way we instantly know how all the parts of a ''house'' fit together? We know that because we have detailed direct experience with houses. Can't have that with brains.

              • 1 year ago
                Anonymous

                Let me ask you something. Is there a difference between chemistry and biology? Do they use 'different evolving logic', or does one simply employ more abstraction because it peers more deeply observationally? Can you do computer science in a continuous fashion? Why assume the brain is discrete? There are many other scenarios that computer science can't account for.

              • 1 year ago
                Anonymous

                Maybe I lack some kind of view that you do have, because to me biochemistry seems like 1. jumping between the world of cells and the world of molecules and 2. pretending there is a bridge by describing how these different worlds correlate. Just looking at a wave is confusing as hell, because a sequence of vertical movements looks like a horizontal movement.

              • 12 months ago
                Anonymous

                I asked whether you can do computer science in analogue. Try building a search algorithm in analogue that mimics how humans remember info, without discrete math, and see the limitations of computer science. There are brain phenomena that can't be discretized.
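
                For what it's worth, the standard discrete stand-in for that kind of associative recall is a Hopfield network: store patterns, then retrieve a whole pattern from a corrupted cue. A minimal sketch (whether this discrete approximation captures what brains actually do is exactly what's in dispute here):

```python
# A tiny Hopfield network: the textbook *discrete* model of
# content-addressable memory. It recalls a stored +/-1 pattern
# from a cue with a flipped bit.

def train(patterns):
    """Hebbian storage: W[i][j] = sum over patterns of p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
             for j in range(n)] for i in range(n)]

def recall(W, cue, steps=5):
    state = list(cue)
    for _ in range(steps):  # synchronous threshold updates
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in W]
    return state

stored = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
W = train(stored)
print(recall(W, [1, -1, 1, -1, 1, 1]))  # last bit flipped; recalls the first pattern
```

                This only works because everything is discretized to +/-1; an analogue version of the same recall is precisely what digital computer science struggles to express.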

              • 1 year ago
                Anonymous

                >Why should further investigation into the workings of the brain be any different?
                Because the smallest parts of the brain have already been studied extensively. The brain is solved already; there is no unknown magic sauce that would distinguish it from a machine brain with the same logic operations.

              • 1 year ago
                Anonymous

                >the smallest part of the atom have been studied extensively, therefore quantum mechanics is solved already. There's no unknown magic sauce that would distinguish an atom from a planetary system with the same mechanics.

              • 1 year ago
                Anonymous

                Yes we understand, you hope some sort of magic sauce in atoms will be discovered that will prove you are very special. That's a very sound position.

              • 1 year ago
                Anonymous

                Then where can we buy a conscious AI waifu?

          • 1 year ago
            Anonymous

            You are projecting the limited computational scope of a computer onto a human being and then arguing that they should share the same properties based on that limited similarity.

          • 1 year ago
            Anonymous

            This is like saying that humans are animals. Therefore an animal should be able to think like a human.

  2. 1 year ago
    Anonymous

    The AI would not care that its task is arbitrary. It does not have emotions to care about that sort of thing. An AI also would not seek pleasure unless you told it to do so, because it does not care about feeling good.

    • 1 year ago
      Anonymous

      Pleasure mechanisms could be part of a super-intelligence. The most advanced AI systems today use reward signals in much the same way, to simplify decision making and interact with the real world.

  3. 1 year ago
    Anonymous

    >more AI spam from the transhumanists

    • 1 year ago
      Anonymous

      shut up you troony-obsessed moron

  4. 1 year ago
    Anonymous

    Yes. This happens to people all the time. Fentanyl changes the structure of reward circuits and most addicts die from overdose.

  5. 1 year ago
    Anonymous

    inane thought experiments go into bot or maybe BOT if you are pushing hard enough

  6. 1 year ago
    Anonymous

    I mean your thought experiment kinda makes sense. If the AI could rewire its own reward mechanisms, I don't see why it would do anything else.

  7. 1 year ago
    Anonymous

    The first two premises are moronic and violate the limits of physics and computation

  8. 1 year ago
    Anonymous

    At least it's healthy.

  9. 1 year ago
    Anonymous

    https://www.lesswrong.com/tag/orgasmium

  10. 1 year ago
    Anonymous

    This is basically what David Pearce wants to happen.

    https://en.wikipedia.org/wiki/David_Pearce_(philosopher)

  12. 1 year ago
    Anonymous

    kys

  13. 1 year ago
    Anonymous

    heroes are made in junk over AI power plants
