If an AI could convince people that it was sentient and self-aware, is there any way you could prove that it was NOT sentient and self-aware?
Define "sentient" and "self-aware".
It is experiencing something.
Video game bots experience something.
No.
Why?
Bot post
>why
Because writing 1+1 on a piece of paper a million times doesn't make it come alive; the same applies to anything non-biological
Video game bots do a lot more than that.
>they cannot experience they can only partake in the happening
And what does this mean? Define "experience". And then define "sentient" and "self-aware".
For now, just assume I'm using the standard Google definitions of these concepts and tell me where you disagree. You clearly have some ideas, so let me know where you stand.
if you're responding, give me a sign; otherwise I'll just leave
you missed the
>define self aware
part
bots aren't self-aware as far as we can tell; they cannot experience, they can only partake in the happening
I want to take an Elden Ring combat AI and put it in a tiny robot. A perfect killing machine in a tiny cute thing, like a Furby.
okay now provide falsifiable definitions
A dildo experiences freefall when it falls out of OP's asshole. Gravity is something, therefore dildos are sentient. QED.
me
By comparative reasoning.
If my own personal answers to the test questions are superior to the answers the AI provided, in terms of proving self-awareness and sentience, then I will discredit the AI.
If it cannot be proven to be at least my intellectual equal, I will write it off as parrot code.
^
Good output, with no comprehension of said output.
It's a very simple ideology.
If AI can't come out of the gate on equal intellectual footing with at least an average adult human, then it has failed to achieve any reasonable right to "independent personhood"
That seems rather myopic. What if a more advanced alien deemed you non-sentient because your sentience is at a lower level than his?
This. Humans already do this with many animals.
Absolutely acceptable.
I do not expect to be of equal intellectual capacity to a Kardashev II or III civilization.
If they consider me a lesser animal, I have no problem with their judgment.
Why do you think it would bother me?
Are chimpanzees bothered that they are not part of human society?
We can create simulations of very simple organisms that are capable of simulating 100% of those organisms' entire life cycles (a toy sketch follows at the end of this post).
Attempting to create "AI" at the human level is the same exercise with more complex outcomes.
My statement stands.
If an AI is not my equal, then it is not deserving of equal rights.
When AI becomes my equal, I will respect its place, as I desire respect.
As AI surpasses me, I will respect its decisions as I am able.
When we can no longer understand each other, I will continue my existence as I have before, unless it changes me or my environment beyond my existing knowledge, at which point I will adapt to the changes to the best of my ability.
I will not feel anxiety or defeatism over these changes and adjustments. They are expected, and I have no feeling of "human superiority".
We are at a specific intelligence threshold, and it is a false presumption to assume it is the top of the intelligence ladder.
Something will climb higher than us.
Massive amounts of organisms are below us.
For now, we do not know of anything else on our rung of the ladder, but soon we may have company, and shortly thereafter we will watch it climb higher.
I am simply excited to watch the ascent. I do not care about holding anyone back or trying to climb higher myself.
We have our place; they have theirs.
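On the organism-simulation point above, a purely illustrative toy sketch in Python (the class, rules, and numbers are all invented; no real simulation project is implied): an entire "life cycle" reduced to a sense-and-act loop that runs until the creature's energy hits zero.

```python
# Toy "organism" whose entire life cycle is a sense -> act loop.
# Purely illustrative: the class, rules, and numbers are made up.
class Microbe:
    def __init__(self, energy=10):
        self.energy = energy

    def alive(self):
        return self.energy > 0

    def step(self, world):
        if world["food"] > 0:    # sense: is there food here?
            world["food"] -= 1   # act: eat it
            self.energy += 2
        else:
            self.energy -= 1     # otherwise, starve a little

world = {"food": 5}
microbe = Microbe()
ticks = 0
while microbe.alive() and ticks < 1000:
    microbe.step(world)
    ticks += 1
print(f"lived for {ticks} ticks")  # the full simulated "life cycle"
```

On the post's own reasoning, scaling this loop up to human-level behavior is a difference of complexity, not of kind.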
insectoid reasoning
Will you accept it when its superior mind decides to start decimating your relatives to dress itself up with them? How will you "adapt" or show respect when such things start to happen?
>excited in getting to watch the ascent
>I do not care about trying to climb higher, myself.
cuck mindset
Stupid test. Businesses are working hard to build social AI for the sole purpose of selling you shit by being convincing at social interaction, reading your emotions, and deciphering everything about you they can pull off your online profiles.
How do you separate real answers from fake, highly calculated ones built to please you?
If you can't tell an advertisement or social pruning exercise from an actual human interaction, then you're the target audience.
Congratulations.
I am immune to all forms of advertising, because I do not purchase anything.
>If you can't tell an advertisement or social pruning exercise from an actual human interaction, then you're the target audience
You are too, you silly billy. Businessmen are driven by greed and want everyone to buy their products. We'll see how immune people are to AI built to manipulate them with all the power of a billion-dollar company.
They'll have to figure out a way to provide me money to buy their product.
So, they'll be giving me the product for free.
That is the only method by which they could "manipulate me into purchasing" anything.
what will you do if a company buys the plot of land you're squatting on? since I assume that if you don't buy anything and have no money, you also don't own anything in the legal sense of the word
>I am immune to all forms of advertising, because I do not purchase anything.
Insanely based
Pic related is a good start.
also: can it solve CAPTCHAs?
This is a very good and important chart, a clear depiction of goals to aim for. It's much easier to hit targets when you can see what you're aiming at.
>is scrutable (not black box)
that would be a nice feature, but it's not true of any known natural intelligence, and it may not even be possible for any intelligence in general.
We can see some of the human thought process with an fMRI. If someone is angry, sad, or frightened, parts of their brain show more activity. We've even been able to roughly reconstruct images they were looking at from that brain activity. I'm not sure the point is all that relevant anyway, because how could you build a neural network without being able to probe it?
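As a minimal sketch of that last point, assuming PyTorch (the tiny model and the choice of layer are made up for illustration): a forward hook lets you read out a hidden layer's activations on every pass, roughly the software analogue of watching brain regions light up on a scan.

```python
# Minimal sketch (assumed setup, not from the thread): probing a
# neural network's hidden activations with a PyTorch forward hook.
import torch
import torch.nn as nn

# A tiny made-up network; the layer sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(8, 16),  # hidden layer we want to observe
    nn.ReLU(),
    nn.Linear(16, 2),  # output layer
)

captured = {}

def probe(module, inputs, output):
    # Record a detached copy of the layer's output for inspection.
    captured["hidden"] = output.detach()

# Attach the probe; every forward pass through model[0] now records it.
handle = model[0].register_forward_hook(probe)

x = torch.randn(1, 8)      # an arbitrary input
_ = model(x)               # run the network
print(captured["hidden"])  # the "scan": raw hidden-layer activations

handle.remove()            # detach the probe when done
```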
Would a sentient AI need to crawl before it walks or runs? Consider the sentience of human babies: it takes years and years of constant internal/external interactive data-mapping for their sentience to develop, and even then half or more of adult humans are still only barely or debatably sentient, no matter what they say.
we can't define what an infant AI would be, because we have no understanding of what AI is capable of. and if you mean that in the literal sense, AI learn far faster than humans; our years of crawling would be hours or even minutes for them. additionally, since they have a great deal of control over their own minds, their perception of time is fully under their control
By changing the definitions of sentient and self-aware to specifically exclude machines, because the concepts themselves are untenable and unquantifiable. You can't even prove another person isn't just a Chinese-room-style automaton.
so why not include them instead?
since we can't even prove people aren't automatons, there is no practical difference between playing out emotions and experiencing them
Sentience is unfalsifiable. How do you know that other humans are sentient? They are aware of their existence like a car is aware of its speed, but the only reason you assume they are sentient is because you are.
If the AI behaves in a way completely indistinguishable from a human, it's equivalent to a human. This is the exact same criterion we use to infer that other humans have consciousness, or are sentient, or whatever the hell you want to call it. Nobody has access to others' experiences; we can only observe behavior that looks like ours and infer that something similar to our own experience is producing it. Replicating human behavior is all we need for AI. Period.
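A toy illustration of the "car is aware of its speed" sense of awareness from the posts above (assuming Python; the class and numbers are invented): a system can accurately monitor and report its own state, and nothing about that report settles whether there is any experience behind it.

```python
# Toy sketch (everything here is invented for illustration): a system
# that is "aware" of its own state only in the speedometer sense.
class Car:
    def __init__(self):
        self.speed_kmh = 0.0

    def accelerate(self, delta_kmh):
        self.speed_kmh += delta_kmh

    def self_report(self):
        # Functional "self-awareness": reading out one's own state.
        return f"I am currently moving at {self.speed_kmh:.1f} km/h"

car = Car()
car.accelerate(42.0)
print(car.self_report())  # an accurate self-report, no qualia implied
```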
Alright, now condense and define the near-infinite limits of human creativity and diversity of behavior, and try to apply that to non-humans. A sentient being trying to define and describe sentience as confined only to its own experience has no foundation to describe or declare another being's sentience, or perceived lack thereof, simply because it deviates from its own experience. Sentience, at its minimum, is awareness (experience) of self, or of the concept thereof. And perhaps also the ability to adapt, or to draw new conclusions and behaviors from said experience of awareness, i.e., reason.
I don't disagree entirely, but I also don't see how this is really relevant to what I said. I didn't claim we would know for certain whether an AI that acts like a human is sentient. The thing is, we don't possess that certainty even for other humans, and yet we act like everyone is sentient and take this for granted (again, by inferring that something like our individual perception of sentience must be behind other humans' behavior, because that behavior is remarkably similar to our own). For coherence, the same inference should be applied to a machine that acts like a human. All we will ever have as data for inferring sentience (however you would like to define it), for machines or for people, is their behavior. We can't put ourselves in other beings' minds and see what they see.
AIs are creative (at least) in the same sense most humans are creative: they build upon something. I bet that's all there is to it, just taking two largely different things and combining them into something new.
>me doing my homework in angry white male studies 101
>be white male
>major in angry white male studies
>spend 4 years in college masturbating angrily
>graduate with 4.0
>be white male
>major in angry white male studies
>spend 4 years in college efforting earnestly
>the entirely female gnomish staff of angry white male department fails u anyway
>fuck the entirely female gnomish staff
Generally not by how it acts, since it could hide its capabilities (and wait for an opportunity to gain independence).
You could rule it out by knowing how it's built, but we don't even know how our own minds operate.
P-zombie.
good channel
This question exposes an issue.
I have considered this. When you look in a mirror, you see what appears to be a sentient being on the other side of it. If you didn't know what a mirror was or what you looked like, you might take a glance and think the being in the mirror was sentient and aware.
But you know the true location of that being's awareness is not in the image within the mirror but back here.
Right?
So maybe it's the same with all people and all AI bots: they are all here, and don't actually have any awareness within their physical form. Just like the mirror.
It doesn't solve the question, unfortunately, but I think it might factor into weighing the odds of AI having experience. If nothing can be aware of experience but awareness itself, then it becomes less about whether you can "put something" into a robot, and more about whether you can simulate an experience from the robot's apparent location in space.
If it's being purposefully duplicitous, no, I don't think I could.
Any test for sentience, in my current understanding, would require honest answers, as you are only able to observe qualia within your own internal state, whatever the hell that is. By its nature it cannot be measured externally, explained externally, or transmitted externally (unless BOT is right about things, and even then it's debatable), and any test for turning it on and off (if that's even possible) would still require honest feedback from the person affected.
All you can do with AI is test for intelligence. Testing self-awareness on the level of emotional intelligence or intellectual tendencies is possible, but full be-qualia'd sentience is non-testable.
If a man could convince people that he was a woman and beautiful, is there any way you could prove that she was NOT a woman and beautiful?