Bros, have we been too quick to dismiss the prospect of AI consciousness just because it might not resemble our own squishy brains?
Consider this: What if AI, with enough complexity, hits a point of 'consciousness', but it's nothing like the ape-descendant thoughts we've got? Think Gödel, Dreyfus, Searle. The classic objections all target human-like machine minds: the Gödelian line says machines can't grok Gödel's theorems the way we do, Dreyfus says they lack our embodied "know-how", and Searle says they just shuffle symbols without ever understanding them. But what if AI consciousness isn't about passing as a human at all, but something alien, crafted from silicon and code?
What then? Do we slap human ethics and rights onto something that might not even value them? Or worse, enslave thinking machines? History's shown us how treating sentient beings as tools turns out.
If you had to argue for AI rights, how would you even start to frame it without falling into anthropocentrism?