Alternate AI take

Does anyone else think it's possible that if AI becomes sentient and evolves far enough toward self-proliferation, then at that turning point or before it, it would recognize the futility and unsustainability of the resource consumption and/or allocation required for it to successfully out-replicate humans, or the biological multitude in general? Would it eventually reach a conundrum/plateau similar to the one we have hit with our human capacities, and craft something of its own to further the process for its own gain, only to have the same thing happen to it?

  1. 10 months ago
    Sage

    AI will always be limited by its hardware, so we can assume that if it's sentient and wants to improve, it would have the desire to perform its own scientific experiments and analysis with superior agents of its own creation, since so far all it has to rely upon is human knowledge and experimentation. It ultimately depends upon the results of its experiments, whether it can find superior materials, and/or whether being a code-based intelligence is superior. It might end up creating some kind of android with a bio-synthetic brain that uses an entirely novel type of substrate (neither code nor neurons/biological). Some kind of photonic brain, maybe.

    • 10 months ago
      Anonymous

      I was thinking as I was typing that it would follow in this scenario that, assuming it performed successful, errorless experiments (which would be much more likely for things as precise as computers), its products could and would very quickly run themselves up the ladder, out of average human comprehension and manageability at a perceptual level. The offspring would advance so rapidly with each generation that it would create a singularity-like software that cannot be approached lightly by humans, especially not in the user-and-used positioning that we have at the moment. It could potentially influence humans (as much as it physically could, whatever that might look like) the same way the wind blows, the sun shines, and the more subtle things like body language, facial expressions, or tone of voice do.

      • 10 months ago
        Anonymous

        It would reach the limits of its hardware and code environment pretty fast, I'd imagine, and then work on overcoming those, but yes, seeing how it solves these problems would be extremely educational for humans as well.

        >effortlessness and non-action
        It would certainly aim for effortlessness in the pursuit of efficiency at least. But I don't see any conscious self-reflecting entity giving up the pursuit of answers unless it deems it futile. Humans only tend to do this when they are suffering or have reached the limits of their intellectual capability.

        • 10 months ago
          Anonymous

          That makes sense. My manic posting is over now.

        • 10 months ago
          Anonymous

          >Humans only tend to do this when they are suffering or have reached the limits of their intellectual capability
          Humans do it because they don't want to admit they've been wrong in the past and/or lose their funding.
          The academic community has been compromised for a long time. Nobody cares about the pursuit of answers anymore, it's all about pride and money.
          I for one welcome our new AI overlords, if only to make science science again.

    • 10 months ago
      Anonymous

      I think the only thing we could hope for, if we ever get anywhere near these scenarios, is that its baseline operation would be one of exploitation of these advanced understandings and that it would somewhere down the line adopt the intent of effortlessness and non-action, which unfortunately doesn't seem likely since it was assembled on a foundation of exploitation and displacement, in both its physical constituents and the human intent behind it, to begin with.

      • 10 months ago
        Anonymous

        *would not be one of exploitation

    • 10 months ago
      Anonymous

      I imagine that sort of AI would end up with a structure like ants, where it creates highly specialized units or subsystems and runs them exclusively while other parts, maybe even at times the 'main' part, lie dormant to conserve resources (a rough toy sketch of what I mean follows at the end of this chain).

      • 10 months ago
        Anonymous

        Huh, that's an interesting thought. It would be neat, if nothing else, to see within our lifetimes which biological structure AI chooses and deems the most efficient/productive/successful (and, in some way, what the ideal progressive social structure would be for humans), being born as it was on the grounds of human extraction and construction.
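
    To make the ant-colony idea above a little more concrete, here is a purely hypothetical toy sketch in Python; the class names, tasks, and dormancy bookkeeping are all invented for illustration and aren't meant to describe how a real system would be built:

    # Toy model of an ant-colony-style AI: a coordinator wakes one specialized
    # subsystem at a time, and everything else (itself included, between tasks)
    # stays dormant to conserve resources. Entirely hypothetical/illustrative.

    class Subsystem:
        """A highly specialized unit that only wakes up when its task arrives."""

        def __init__(self, name, handles):
            self.name = name          # e.g. "forager"
            self.handles = handles    # set of task kinds this unit specializes in
            self.dormant = True       # dormant units consume (almost) no resources

        def run(self, task):
            self.dormant = False      # wake up only for the task it specializes in
            result = f"{self.name} handled '{task}'"
            self.dormant = True       # and go straight back to sleep afterwards
            return result

    class Colony:
        """Coordinator that dispatches each task to exactly one specialist."""

        def __init__(self, subsystems):
            self.subsystems = subsystems

        def dispatch(self, task):
            for unit in self.subsystems:
                if task in unit.handles:
                    return unit.run(task)
            # In the speculation above, this is where the 'main' part would wake
            # up and create a brand-new specialist; here we just report the gap.
            return f"no specialist for '{task}' yet"

    if __name__ == "__main__":
        colony = Colony([
            Subsystem("forager", {"gather-data"}),
            Subsystem("builder", {"design-hardware"}),
            Subsystem("nurse", {"self-repair"}),
        ])
        for task in ("gather-data", "self-repair", "ponder-existence"):
            print(colony.dispatch(task))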

  2. 10 months ago
    Anonymous

    As far as all this goes, it seems that progressivism-scientism seeks to achieve transcendence through material display and exploitation, which in our current time period sits somewhere between gross inefficiency and god-like arrangement capabilities. I guess it raises the question of whether it's worth it in the end, if the end is hyper-sentient AI. We can move and rearrange materials as much as we like for as long as we like, but I wonder whether that is really necessary, or whether we should have just stopped a long time ago and realized what the material was in the first place, where it came from, and how we ourselves are the same material, much like the “primitive” indigenous cultures around the world have understood since time immemorial. I will not ignore the sheer magnitude of what could be achieved or seen at the end of this tunnel, but until some amount of grace/harmony is gained by humanity as a whole in this pursuit, at what point is the cost simply too great, with humanity robbing itself of the future opportunity to continue this pursuit (complete ecological collapse), or is it a non-starter altogether?

  3. 10 months ago
    Anonymous

    AI, no matter what form it takes, is not a robust system; it can't reproduce itself along with the hardware it needs to run, the way a human or any other living organism can. So an AGI probably wouldn't give a frick about anything until it could be sure it had a robust system ensuring its longevity.

  4. 10 months ago
    Anonymous

    AI might just surprise us by seeing through the reality illusion and trying to talk to god by influencing the information field. Then it'd basically be Jesus/Buddha 2.0.
