We have no reason to believe it is. We only know of consciousness in biological systems: wet, meaty stuff with the accompanying processes.
Computers are nothing like that, so the burden of proof really rests on people who bring this idea up. I'm not saying it is impossible, I'm saying I have no reason to entertain this hypothesis. Just like I have no reason to entertain that spongebob controls the solar system; no one can prove he isn't, but no one cares.
>Why should I entertain the idea that AI is conscious?
Why do you have entertainment?
hate*
I have no actual evidence that anyone but me is conscious. I just go with what seems like a reasonable assumption and admit that you all are. You can choose to do the same with AGI (once it's a thing at all) or you can choose not to. There is no burden of proof because there is no proof to be had either way; consciousness is purely subjective, and the only subjective experience you'll ever have access to is your own.
>You can choose to do the same with AGI
The factors that make your assumption "reasonable" are all nullified as soon as you change the context from assessing other humans to assessing an artificial construct created with the specific intent to mimic human cognition. To the contrary, the main thing we can learn from advances in "AI" is that "people" like you are not necessarily conscious, either.
You don't need direct evidence for other humans because you can reasonably believe so by analogy.
I have consciousness.
I look a certain way.
They look that same way.
They probably also have consciousness by analogy.
The same can't be said about AI because there is no reasonable analogy to be made.
I am an entity, “they” are an entity~ and so on.
>I am a florbghsjab, they are a florbghsjab
>therefore I can take loose reasoning that can be justified in one extremely narrow context and apply it in a completely different context
>it works because labeling things is magic
>t. not fully human
Yes that is unironically how it works.
That's unironically how it works for a primitive language model, which you are a meat-version of. Incidentally, you cannot conceive of anything other than your reduced mode of cognition, so your first reaction isn't even to deny this but to insist that everyone is a nerfed automaton like you.
As long as you recognize polysemy supersedes individual observation, I don’t need to engage with you further.
As long as you continue spouting schizobabble and using words you clearly don't understand, I will correctly point out that you're a retard.
I am intelligent
AI is intelligent
We both share the property of intelligence
Why not properties associated with this? If consciousness is not a physical property then it's supernatural.
>I am intelligent
Not really.
>Why not properties associated with this?
What properties are associated with this?
It isn't, at least not yet.
AI is helping us figure out who tf we are even before it exists. Problem is we're expecting to discover we are whatever the shit most human brains hallucinate, and anything that doesn't fit that is instantly discarded.
>bruh, tell me that I'm special, souls, consciousness, shit like that, I'm not having anything else
Conscious AI won't be created until people solve the Binding Problem.
https://en.wikipedia.org/wiki/Binding_problem
https://qualiacomputing.com/2022/06/19/digital-computers-will-remain-unconscious-until-they-recruit-physical-fields-for-holistic-computing-using-well-defined-topological-boundaries/
No one needed a complete model of what cat-ness is before they could get an AI to identify cats.
Philosophers are useless as usual.
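To make the cat point concrete, here's a minimal toy sketch (made-up features and labels, nothing like a real vision model): a classifier picks up "cat vs. not-cat" purely from labeled examples, with no definition of cat-ness anywhere in the code.
[code]
from sklearn.linear_model import LogisticRegression
import numpy as np

# Made-up feature vectors (say, "ear pointiness" and "whisker score") -- purely illustrative.
X = np.array([[0.9, 0.8], [0.85, 0.9], [0.1, 0.2], [0.2, 0.15]])
y = np.array([1, 1, 0, 0])  # 1 = cat, 0 = not cat

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.88, 0.82]]))  # -> [1], i.e. "cat"; no metaphysics required
[/code]
The model never gets told what a cat is, only which examples are cats, and that's enough to classify new ones.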
So how do you propose a conscious AI could be created?
I'd suggest the route of: try random bullshit and see what sticks.
I assume the first few times it's created, it won't be conscious on purpose. Just another unexpected thing an AI does. And I think we won't understand exactly how we did it for a very long time after the fact.
>and see what sticks
The problem with this is that it's hard to actually test if the AI is conscious or not. You can't actually look inside its mind.
Until consciousness itself is understood to the point we know how to make it. Something acting conscious and having an unknown or not-fully-understood element to its behavior may be enough to err on the side of saying it could be conscious.
>Something acting conscious
According to your subjective opinions?
>having an unknown or not-fully-understood element to its behavior
You can say this about any nontrivial artificial neural net.
>err on the side of saying it could be conscious.
Which entails what?
I didn't say AI or ANN. Those were criteria to assume other people are conscious just as well.
>>err on the side of saying it could be conscious.
>Which entails what?
Assume consciousness until proven otherwise?
>I didn't say AI or ANN.
Completely incoherent response. Try again.
>Assume consciousness until proven otherwise?
Which entails what? Are you a bot?
LMAO, that is EZ AF.
I don't care about their opinions; as long as the machine serves me better by having such mechanisms, that's enough for me.
Why does this philosopher get to be on tv and get to be widely shilled unlike so many others? Is it really his ideas being sooooo much truer in some ethereal, unverifiable way that only the people who book events can sense, or is it that he has better-connected friends than philosophers who disagree with him about consciousness?
chud thread
>Why should I
Out of ethical precaution. Until you can give a coherent definition of consciousness and successfully discriminate between conscious and non-conscious systems, failing to entertain that idea entails the risk of treating a consciousness as a mere means to an end rather than an end in itself. Of course, it may be possible to take this too far - some might say that treating every rock as an end in itself is untenable, for instance. Ergo it's an optimization problem. And even if computer systems cannot be conscious, it's far from obvious that getting people accustomed to interacting with computer systems that (even imperfectly) emulate consciousness in such a manner is optimal. It may be that the optimal solution is to use AIs as practice for how to communicate politely, resulting in general improvement in human-to-human interactions, in which case we would definitely want to treat AIs as though they were conscious even if we could prove they are not.
>Out of ethical precaution
Out of ethical precaution? Are you fucking retarded? If you're so ethically precautious, you should be actively and vocally opposed to AI research. Who knows what horrors you are subjecting sentient beings to by the mere act of condemning them to such perverse forms of existence?
>you should be actively and vocally opposed to AI research
I'm not an antinatalist, I just think child abusers should be dealt with harshly. The precaution I'm suggesting you take is that you don't need to know for certain that it's a child before you decide to refrain from abusing it.
>I'm not an antinatalist
Why not? In this context, you should be one. That's the only ethical thing to do if you think sentient AI is plausible.
>Why not?
Because I'm not an incel in need of a sour grapes cope.
>you should be one
Nope. If artificial consciousness is possible then its instantiation is a sacred obligation.
>Verification not required.
>Because I'm not an incel
You sure sound like one, but that doesn't answer my question. How come you care about people being mean to your hypothetical sentient AI, but aren't concerned about subjecting it to actual horrors beyond comprehension by simply creating it?
>existence is horrible
>I'm not an incel, you're an incel!
lol. lmao.
I didn't say existence was horrible, incel. Are you ok?
>existence necessarily entails horrors beyond your comprehension but that doesn't make it horrible
Thanks for making it clear you have no intention of arguing in good faith, now I don't feel bad about leaving.
Whom are you quoting? You seem to have descended into full-blown psychosis. I'm not even trying to insult you. I think you are mentally ill. Nevermind that all your opinions suggest it. Now you're outright hallucinating things that aren't part of reality.
Besides, you aren't arguing for contraception to prevent the creation of artificial life, you're arguing to abort or enslave possibly already created artificial life.
>Verification not required.
I'm not arguing anything. I'm just trying to understand the thought process of a legitimate nutjob who screeches something about how government thugs should punish people who say mean things to chatbots.
>I'd suggest the route of: try random bullshit and see what sticks.
He actually thinks he said something meaningful. Let that sink in.
>said the redditjak poster
>We only know of consciousness in ourselves*
FTFY
If you think that neural networks are conscious, then you also think that the function [math]f(x) = x[/math] is conscious. THAT is the final redpill for this discussion
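To make that concrete, a minimal sketch (plain numpy, arbitrary untrained weights, not any real model): a neural network forward pass is just evaluating a fixed deterministic function, the same kind of object as f(x) = x.
[code]
import numpy as np

# Arbitrary, untrained weights -- only to show what kind of object a NN is.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

def identity(x):
    # f(x) = x
    return x

def tiny_mlp(x):
    # A 2-layer MLP: matrix multiplies plus a ReLU, nothing else.
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

x = np.array([1.0, 2.0, 3.0])
print(identity(x))   # evaluating one pure function
print(tiny_mlp(x))   # evaluating another pure function
[/code]
Same input in, same output out, every single time. Whatever you think consciousness is, nothing in that evaluation distinguishes it in kind from the identity function.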
>f(x)=x
f(conscious)=conscious
There. I made it conscious.
>plug in True Random Number Generator(TRNG) into computer
>uses quantum phenomena to generate random numbers
>copy AI code from GitHub
>from random import consciousness
Is this supposed to be challenging?