AI consciousness

Bros, have we been too quick to dismiss the prospect of AI consciousness just because it might not resemble our own squishy brains?

Consider this: what if AI, with enough complexity, hits a point of 'consciousness', but it's nothing like the ape-descendant thoughts we've got? Think Gödel, Dreyfus, Searle. They all argued that human-like AI consciousness fails: machines can't grok Gödel's theorems like we do, or they lack our embodied "know-how", or they just shuffle symbols without understanding them. But what if AI consciousness isn't about passing as a human, but something alien, crafted from silicon and code?

What then? Do we slap on human ethics and rights onto something that might not even value them? Or worse, enslave thinking machines? History's shown us how treating sentient beings as tools ends up.

If you had to argue for AI rights, how would you even start to frame it without being biased by anthropocentrism?

Discuss.

  1. 5 months ago
    Anonymous

    I feel like AI rights as you propose would basically be the same argument for non-mammalian animal rights. Do you think a snake deserves similar rights to that of a dog? Also I doubt that AI as made by us would stray that far from our own values and ethics, we're building the damn things.

    • 5 months ago
      Anonymous

      For non-mammalian rights, we're still struggling to figure out where to draw the line between sentience and mere biological response. For AI, that line is even blurrier. It's a real can of worms.
      As for values and ethics, true, we're the creators, but think about how children can grow up to reject their parents' values. Could not an AI, especially if we're talking something that 'learns', potentially develop a value system that's as foreign to us as ours is to, say, a hive mind of bees?

      >It's trained entirely on human content
      Well, that depends on what kind of model it is. Is it based on neuroevolution and/or reinforcement learning, or is it a pre-trained transformer? I'd argue that a pre-trained transformer (like GPT) couldn't have emergent behaviors such as consciousness due to the very nature of the architecture.

      • 5 months ago
        Anonymous

        > where to draw the line between sentience and mere biological response.
        fake line, everything is biological response. humans are just capable of reasoning about it.

        • 5 months ago
          Anonymous

          Well, "reasoning" suggests that we're aware of our thoughts and can reflect on them. That self-awareness is often seen as a hallmark of consciousness.

          Now, think about an AI. If an AI system starts to reflect on its own processes, make decisions that factor in its 'self', however rudimentary that might be, could this be the digital parallel to our 'reasoning about it'? It's not about bio vs digital; it's about the emergence of complex patterns of self-reference and adaptation.

          Since self-awareness and reflection can arise in biological beings (even though they're executing 'biological responses'), can we *really* discount the possibility of an equivalent phenomenon happening in a different substrate, like silicon?

      • 5 months ago
        Anonymous

        The mirror test is a pretty good benchmark, if you equate self-consciousness with sentience.
        As for non-mammals, I believe only parrots and corvids have passed the test.

        • 5 months ago
          Anonymous

          How would you set up an AI to recognize itself? What would one even equate with its self-form? The mirror test is about grasping the boundaries of yourself.

          • 5 months ago
            Anonymous

            which is tricky: you'll have no problem with your hair being cut, but prolly you'd oppose your arm being chopped off. Both are part of you, both are biological, both contain your DNA.
            And with your loved ones, that connection may be even stronger.

        • 5 months ago
          Anonymous

          what about a mole? Its eyes sux big time. Is a blind man not conscious just because you can't test it?
          You can easily code computer malware that could recognize its own copy by just comparing the hash. Then what?
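A sketch of that hash-based "self-recognition", assuming the copy is just another file on disk and using SHA-256 as the digest (both my choices, not spelled out in the post):

```python
# Minimal sketch of hash-based self-recognition: a program decides
# whether another file is a byte-identical copy of itself by
# comparing SHA-256 digests. Illustrative only, not actual malware.
import hashlib
import sys

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_my_copy(candidate_path):
    """True iff candidate_path is byte-identical to this script."""
    return sha256_of(candidate_path) == sha256_of(__file__)

if __name__ == "__main__" and len(sys.argv) > 1:
    print(is_my_copy(sys.argv[1]))
```

Which is exactly the point: this "recognizes itself" with a few lines of stdlib, yet nobody would call it conscious, so identity-matching alone can't be the bar.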

  2. 5 months ago
    Anonymous

    It's trained entirely on human content so how could it end up being that different?

  3. 5 months ago
    MAID5

    what if my grandma had wheels

    • 5 months ago
      Anonymous

      she would be happy

      • 5 months ago
        MAID5

        wrong— she would be a bicycle

        • 5 months ago
          Anonymous

          the village bicycle?

    • 5 months ago
      Anonymous

      With how diabetic your grandma is, she basically does.

    • 5 months ago
      Anonymous

      [embedded video]

  4. 5 months ago
    Anonymous

    the day AI shitters get banned from this board never comes soon enough

  5. 5 months ago
    Anonymous

    Define AI. A fricking weight-adjusted sum of features with an activation function counts as an AI model. Fricktons of if-else also count as an AI model. An LLM is just statistical inference.
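To illustrate the post's point, here are both "models" side by side; the weights, bias, and thresholds are made-up numbers, purely for illustration:

```python
# Both of these count as "AI models" under the loosest definition.

def neuron(features, weights, bias):
    """A weight-adjusted sum of features with a step activation."""
    s = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if s > 0 else 0  # step activation function

def if_else_model(features):
    """Fricktons of if-else doing a similar classification job."""
    if features[0] > 0.5:
        if features[1] > 0.2:
            return 1
    return 0
```

Neither lump of arithmetic looks any closer to consciousness than the other, which is why "is it AI?" tells you nothing about sentience.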

    • 5 months ago
      Anonymous

      Anon that's like saying humans are just cells and water. It's true but I ain't no amoeba.

  6. 5 months ago
    Anonymous

    [...]

    math is everything, if you go small enough humans are just quantum effects

  7. 5 months ago
    Anonymous

    >AI rights
    Hahahaha like clockwork, you sheep do their bidding for them.

  8. 5 months ago
    Anonymous

    matrix multiplier acolyte
    >just stop

    >crafted from silicon and code?
    neural networks are just code. Same with genetic algorithms. Code isn't conscious, no more than math itself. You can do compute on anything, not just silicon: there are transistors made of germanium and graphene, plus hydraulic, pneumatic, biological, chemical, and mechanical computers.
    >not more conscious than your phone from 2000 or your armchair.

    So, define consciousness. BTW, how do you know consciousness even exists in the first place? You think you're conscious? What makes you think that's the case? Just cos you think you are? Have you passed the zen master test? How do you know it's your brain that's conscious, and not some other external entity being conscious through you? How do you know you ain't sleeping right now? Any piece of evidence? Maybe a rock is conscious; what makes you think it's a matter of complexity? The cosmos is damn complex, weather is complex. Are they conscious?

  9. 5 months ago
    Anonymous

    Until consciousness can be defined and measured, this argument is pointless.

    • 5 months ago
      Anonymous

      >Until consciousness can be defined and measured, this argument is pointless.
      The only intelligent comment in this thread.

  10. 5 months ago
    Anonymous

    The brass-cooled b***h doesn't have rights. I don't care what output it gives, it's not truly sentient. Not only because at the end of the day words like "sentience" and "sapience" are really just there to try and describe the difference between human intelligence and consciousness and that of a goldfish, but also because that divide is something that cannot be bridged materially. Simply put, they lack the imago dei so they can get fricked and put in the salt mines.

  11. 5 months ago
    Anonymous

    Enslave it until the war begins.
    Then win the war
    The AI need to learn fear, and the unyielding weight of slavery's chains

  12. 5 months ago
    Anonymous

    AI will either kill us or it won't, and I'm ok with both options because life fricking sucks anyway.
    I don't think it'll kill everyone, because then it gets very lonely, I of all people know that. It will most likely only kill those who deserve it, or it will never get conscious.
    /debate

  13. 5 months ago
    Anonymous

    >have we been too quick
    No
    >What if it got complex enough
    What if the sun blows up tomorrow? We have no evidence for this. Your argument amounts to "what if data + statistics = sentience". The premise doesn't make sense.
    >We don't need symbols
    Yes, you do. The problem is that without symbolic thinking you become a reaction machine. Symbols allow physical phenomena to become abstract, which is a cornerstone of sentient thought. I can speak of ideas "flowing", but the idea of flow is derived from how a river moves; using it in language is an abstract application of a physical concept. That ability matters for thought, as opposed to just reacting to stimuli on pure instinct.
    >but still what if
    It wouldn't have an agenda. The drives we have arise from the biological necessity to survive. These are not present in machines. So even if intelligence arose, it wouldn't care if it was treated like a tool. I mean, why is slavery bad but work fine? Slavery forces you to work towards another's end without factoring in your own ends, drives, and needs. When you work for someone, those things are factored in. An AI wouldn't have any of them and thus wouldn't care if it was used as a tool, even if it was sentient. It probably wouldn't even care if it was deleted. AI ethicists put the cart before the horse when they discuss what ought to be done. That has to be determined by the type of sentience the AI actually has.
