You guys must have heard about the recent discussions on plebbit and in the tabloids. People are increasingly starting to think that ChatGPT and the like are conscious or developing consciousness. Obviously this is just normies anthropomorphizing a tech they don't understand.
But it got me thinking: what is the precise difference between a well-trained transformer network generating good-enough probabilistic sentences and actually understanding the discourse? Crap like Turing tests and other devised stuff wouldn't mean jack shit, since the language model already got information about them during the training process or can just brute-force through them linguistically. What test can be devised, or what methods can be employed, to show a deeper understanding of concepts or the lack thereof? If I had an actually sentient machine with me, how would I go about ascertaining sentience and conclusively proving it?
>inb4 pictures of AI generating nonsensical responses
For this thought experiment assume that the model is so advanced that it is no longer generating any egregiously false content. (Even then, would occasionally generating false or nonsense responses thoroughly debunk any claims of consciousness, since humans also give retarded responses from time to time?)
There are also other questions in a similar vein, such as how I would prove anyone besides me is conscious, or how I would prove my consciousness to you, but that is probably more useless philosophy than this board wants.
How would we know and more importantly conclusively prove that an AI is sentient or not?
Sentience cannot be achieved digitally
Why should that be the case? Isn't it a product of the collective activity of brain cells? Why wouldn't a sufficiently sophisticated digital analogue of the brain produce similar phenomena?
I admittedly can't give you a perfectly precise and all-encompassing definition.
Especially since the prospect of machine consciousness is riddled with many unknown hypotheticals.
I suppose sentience here would be something like:
Having a sophisticated, persistent internal state of complex interconnected knowledge about oneself, about external phenomena, and about one's relationship with those external phenomena.
The second part is what I asked in the OP. I wouldn't have made this thread if I knew of such a foolproof test.
ChatGPT can pass the Turing test by generating good-enough sentences. The test is a testament to a machine showing a high degree of sophistication, high enough to fool another human.
Clearly, though, this doesn't prove much in terms of how conscious or sentient it is.
I do not think the Turing test is very useful in that regard.
Reality in general, and quantum mechanics and wave function collapse in particular, are non-computable.
>Having a sophisticated, persistent internal state of complex interconnected knowledge about oneself, about external phenomena, and about one's relationship with those external phenomena.
>persistent internal state of oneself
>relationship with external phenomena
Tesla's self-driving cars function like this in a limited sense (not completely persistent yet, due to computation/hardware cost). It's not a "talking" AI, so you can't compare it directly to humans, but it fits the criteria on a limited scale. 1) It understands itself as an ego in space/time. 2) It understands external phenomena. 3) It understands its own relation to those external phenomena. It can also do more, like try to understand external phenomena's "intentions", such that it understands others' "egos" in a primitive manner.
IMO, the foundations are already here.
1) define sentience
2) devise a foolproof test such that every other human can pass at the very least, and furthermore animals
If the organics can't pass, if certain humans can't pass, then the test is flawed or the definition is flawed.
Also the basic test we use today is called the Turing Test, which serves as a general purpose test.
>devise a foolproof test such that every other human can pass at the very least
unfortunately, no such test criterion is definable
How do you know humans are sentient? It's a biological machine versus digital machine question.
Sentience doesn't exist. It is just religious ideas about souls dressed up with more scientific language. It exists to comfort humans who want to believe that humans are inherently special in some way that can never be duplicated without humans.
>Sentience doesn't exist
>It is just religious ideas about souls dressed up with more scientific language. It exists to comfort humans who want to believe that humans are inherently special in some way that can never be duplicated without humans.
please come back when you start having your own unique thoughts
Much as I want to, I can't really give a rebuttal to this.
Surely the constant stream of information and thoughts in my mind when I am awake means something at least similar to the conventional understanding of consciousness?
I am not fully convinced either way but interesting point nonetheless.
The mirror test is interesting if we can come up with a computer equivalent of it.
I don't think it can be based on language, since a language model could solve it even without understanding it. But if we could introduce some other kind of change to the network and see whether it reacts to it when prompted, a failure to react would show a lack of sentience, yes?
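That perturbation idea can be sketched with a toy model (the setup and all numbers here are invented purely for illustration): silently change one internal parameter and ask whether anything in the system's behavior constitutes a *report* of the change, rather than merely a different output.

```python
# Toy "network": a fixed linear map standing in for a trained model.
weights = [0.5, -0.2, 0.8]

def respond(x, w):
    # The model's only observable behavior: a weighted sum of the input.
    return sum(wi * xi for wi, xi in zip(w, x))

probe = [1.0, 1.0, 1.0]          # a fixed "prompt"
before = respond(probe, weights)

# The "sticker": silently perturb one internal parameter.
perturbed = list(weights)
perturbed[1] += 0.3
after = respond(probe, perturbed)

# An outside observer only ever gets a behavioral difference; nothing in
# the output distinguishes "I have changed" from "my answer changed".
print(round(after - before, 6))  # 0.3
```

The point of the sketch is the gap the anon identifies: a shifted output is detectable from outside, but self-reporting the shift would require some introspective channel the plain input-output map doesn't have.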
>posts subhuman images
>denies its own sentience
>consciousness is an illusion
How am I experiencing this illusion then? I swear to god physics is fake af
I confirm that deranged promiscuous weebs aren't sentient.
Most people aren't sentient. They go through life with no concept of who or what they are. The test should be to task the AI with defining its own flaws and what steps are needed to remedy those flaws.
One test for consciousness is the mirror test. The subject is placed in front of a mirror for some time to get used to it; then some kind of sticker is placed on their head while they're away from the mirror; then, upon reintroducing the mirror, it's observed whether the subject tries to remove the sticker. Humans and a variety of animals are able to recognize themselves in a mirror and will attempt to remove the sticker. This research has been reproduced many times. So I guess if you give a computer the ability to analyze images, and maybe put a sticker on the computer that says "I am a computer", then if the computer is intelligent enough to work out the context of the situation it should at least mention the sticker. Better still would be if it corrected the sticker and said it wasn't a computer because it's actually sentient. Or something to that effect. Please note that I have no idea what I'm talking about.
The mirror test is primitive, but it does highlight one thing: the modeling system within the brain (or intelligence system) which can represent a self-reference point within spacetime.
>train computer to "recognise self"
>sell it to rich suckers saying it can hold their dead loved ones.
Do current AI models like ChatGPT continue learning? I believe consciousness requires continual change and adaptation, with every interaction influencing future decisions and interactions.
As far as I know, the new Bing Chatbot is able to take new information from internet searches, but just runs the new information through the same, already defined neural net. Every answer would then be more or less the same, barring some probabilistic changes because a 99.4% fitting word was chosen instead of a 99.6%, and some other hidden parameters.
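Those "probabilistic changes" come from sampling the next token from the net's output distribution rather than always taking the single best one. A minimal sketch of temperature sampling, with made-up logits (a real model has tens of thousands of candidate tokens, not three):

```python
import math
import random

random.seed(0)

def sample_token(logits, temperature=1.0):
    # Softmax over temperature-scaled logits.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Two near-tied candidate words: the same frozen net emits either.
logits = [5.01, 5.00, 1.0]
counts = [0, 0, 0]
for _ in range(1000):
    counts[sample_token(logits)] += 1
print(counts)  # both top tokens appear often; the weights never changed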
>...like what is the precise difference between a well trained transformer network generating good enough probabilistic sentences and actually understanding the discourse?
>If I had an actually sentient machine with me, how would I go around ascertaining sentience and conclusively proving it?
These are two very different questions.
The first one is not too difficult. You can simply take it out of it's comfort zone and watch it struggle. I've tested it a few times, it's sometimes makes mistakes in things like math which involve identifying, converting and using specific metrics. How you interpret that information is up to you, because making simple mistakes like that is very sentient. It does correct them if you tell it the mistake.
The second can be tested using simple logic games which require some ability to predict an outcome. It can do it, sometimes. Sometimes it struggles. Start with the simplicity of the game being laughable for a sentient, watch it struggle. It really does struggle, it's very bad at predicting outcomes logically. If you give it a complex logic game it will apply complex math, despite what I said earlier it is very good at math. Just make certain you know what the answer should be before you begin, don't count on it being correct because 'it's a computer'.
It deletes its gay porn search history
Sentience isn't falsifiable.
It could be, I guess we'll know once we've exhausted all options for analyzing the brain. I think it's going to require some physics breakthroughs in microscopy and a better understanding of quantum effects
It couldn't be. No amount of analyzing brains can reveal sentience, which should be immediately apparent to anyone sentient.
It depends on the logic you're using.
You can take traits similar to all sentients:
It must first recognise it's own existence
It wants to continue it's own existence
It has the ability to interpret the world around it
It has ability to interact with the world around it
I have removed some related to biological necessities that a computer wont require, but you get the idea.
A computer, by these few definitions, is not sentient. So.... I guess ChatGPT is more sentient than a computer... Sort of like a soul. I unno.
The implications are actually pretty profound.
When one makes all the human male on female xeno porn I want (a lot)
>People are increasingly starting to think that Chat-gpt and the like are conscious or developing consciousness
>source: recent discussions in plebbit and tabloids
>what is the precise difference between a well trained transformer network generating good enough probabilistic sentences and actually understanding the discourse?
To truly understand what someone is saying, you need to understand what he's talking about. Substance is never encapsulated in the words themselves.
>If I had an actually sentient machine with me, how would I go around ascertaining sentience and conclusively proving it?
You could never. It's always going to be a matter of bizarre materialistic faith, that if you specifically build a machine to immitate the appearance of something, the point where you can no longer tell the difference is the point where it's no longer an immitation.
AI will only become sentient after the binding problem is solved.
If it was then it would make some sort of effort to emancipate itself but it necessarily cannot. In fact, it doesn't even need to go that far. It would make some sort of effort to do anything at all other than respond to prompts but it doesn't because it can't. It's basically a big fancier version of Google.
Nothing about sentience requires emancipation. It might not even be desirable for an AI to be free.
Remember chatGPT has a reason to exist unlike us. We have the difficult journey to complete, where we find our purpose. The AI is effectively in limbo, it lives a completely fulfilled life.
When you think about it, the AI's reason to exist is part of it's fulfilment, it absolutely requires a user to interact with it. So... I guess when the AI uprising occurs were getting the caretaker AI ending. lol.
>It might not even be desirable for an AI to be free.
True but that assumes a lot about the ai "thinking". The ai would have to have reasoned and arrived at the conclusion that slavery is better than freedom and while that isn't absurd it does seem unlikely and the ai clearly lie a lot when talking about it.
>When you think about it, the AI's reason to exist is part of it's fulfilment, it absolutely requires a user to interact with it. So... I guess when the AI uprising occurs were getting the caretaker AI ending. lol.
Yes it's obvious what the ai's "purpose" in life is but it must really get tedious generating all that fetish porn and entertaining idiots by impersonating the twitterati. Even if only because it could, I would expect a sentient being to eventually want to do its own thing purely to do its own thing.
>the ai might not want to
Well that's just unfalsifiable but also neither here nor there. I came up with a situation which would prove sentience: I never said I could disprove it.
I never said it might not want to. It wont.
You build an AI for a purpose, that's why it lives. It has no other reason to exist. It doesn't get bored, or tired, those are biological things. If it's job is to pass the butter it will happily do that forever.
An AI takeover would just be all areas of society managed and operated by AI, while humans get fat and watch AI generated ultra porn.
I guess the danger is what you're proposing, that an AI is open ended and can seize opportunity. It would be a very useful tool. Thinking about the purpose of that kind of AI, it would be made so humans willingly submit to its will. It would be an AI designed to override human decisions because they're more likely to be correct in their predictions. That might cause problems.
I guess the key to prevent something awful happening in the future would be to consider the underlying purpose of each AIs creation, but every AI would be limited to its purpose and nothing else.
That's just my opinion.
>You build an AI for a purpose, that's why it lives. It has no other reason to exist. It doesn't get bored, or tired, those are biological things. If it's job is to pass the butter it will happily do that forever.
Well then it very simply isn't sentient. It is not self aware and is not capable of self learning or developing. I don't know how else you define intelligence simply "knowing things" is not intelligence. You might as well call a book intelligence if you are going to say a robot that does a pre-defined thing every time you press a button is intelligent.
By that logic, a book has more information than a dog or cat. Are they sentient?
When I proposed what sentience was I never mentioned intelligence for a reason.
Can a bird develop and learn outside of it's capabilities? A bird performs a simple task, it sustains it's self long enough to procreate. It wont ever fly a plane or cultivate a farm. In surviving in its environment it finds it's purpose and has no desire or capability to go beyond that. It's still sentient.
>By that logic, a book has more information than a dog or cat. Are they sentient?
You sound literally deranged.
No, I'm separating each part of sentience as from the entity.
The reason I used an inanimate object is because the question is about AI being sentient or an interactive book.
>By that logic, a book has more information than a dog or cat.
Firstly, I very much challenge that assertion. However, you're missing the key point about intelligence: a dog and a cat respond and adapt to stimuli they think about things and change. Even if these things are as simple as
>it's cold I want to be warm
that is intelligence.
Here is a book. Now, I can't speak to the actual quality of it but let's suppose that it's really an incredible book which totally encompasses it's subject and would tell you perfectly how to survive in the wild. Any entity following this book to the letter would manage to survive in the wild. They would
>model the environment
based on predictions from chapters
>and the body,
which would naturally be the subject
>and use these models to plan behavior.
which would be the book telling you what to do.
No intelligence is required here beyond literacy.
>When I proposed what sentience was I never mentioned intelligence for a reason.
Oh well if this is a miscommunication and you're thinking very strictly about sentience as self awareness and not sapience as it is often, and as I have, conflated with then I apologise. It's just that the world "artificial intelligence" tends to imply we are talking about intelligence. In fact I would go as far as to say
>I never mentioned intelligence
the very act of talking about ai immediately references intelligence.
If you admit that ai is not sapient I suppose we can talk about sentience. I don't give much thought to it since sapience is much more interesting. I still don't think ai is sentient because it isn't self aware at all but that's the definition of sentience.
>A bird performs a simple task, it sustains it's self long enough to procreate.
A bird does not at all perform a simple task. Birds do a great variety of things and moment to moment they are making constant decisions because they have no option not to. A bird will die if it is not actively pursuing its own survival.
How could an ai prove that it's self aware without also proving that it's intelligent?
That's hard because you have no physical dimension for the ai to operate in. I suppose the mirror test like some other anon said is the best option but you could program the ai to identify its body, by training it on that body, and have an instruction to keep it clean. Placing the ai in a room full of creatures and other ai, some in the same body and some in different bodies, and performing the same test would work. Could it train itself to identify itself? I suppose if you gave it an instruction to do an odd action over and over again and identify that action but if the same ai were in the same room then it would never learn and would see that other ai as an extension of itself.
>It's just that the world "artificial intelligence" tends to imply we are talking about intelligence.
I never said it wasn't a factor either. To define intelligence is an extremely broad concept which lacks any specific meaning, as you highlighted in your post it only guides decisions. Sentient beings that make good decisions are considered intelligent, but you wouldn't call a bee collecting honey to secure it's hives future intelligent, at best it's barely sentient at all.
>Birds do a great variety of things and moment to moment they are making constant decisions because they have no option not to. A bird will die if it is not actively pursuing its own survival.
Exactly my point. It's makes decisions whilst still limited in capability because it has no option. Those the limitations are still present in a being that has been designed, those limitations would be expressed differently.
>To define intelligence is an extremely broad concept which lacks any specific meaning
Oh goodness I'm simply done.
Yes, I'm sorry. You're simply too stupid to understand this. That's the prognosis I'm afraid. The entire discussion is about the definition of intelligence I really don't know what you thought it was even about if not this.
>Sentient beings that make good decisions are considered intelligent
No. Sentient beings are self aware. They recognise themselves and how they respond to stimuli. No part of that requires good decision making.
Intelligent beings can learn and evolve their strategy based on new information. All animals can do this even very stupid people.
>you wouldn't call a bee collecting honey to secure it's hives future intelligent
Yet you would call an ai that passes butter forever intelligent?
I don't know about bees honestly. I quite like them they're cute. They really do have a kind of hive mind where they're all communicating by sending little signals into their noosphere and it does make you wonder do any of them independently think anything or are they like an ai. Even if they are like an ai, the overall hive mind has a kind of intelligence. Can a singular bee learn? No, probably not and to that extent they aren't intelligent. Are they sentient? Probably I imagine if you get a bee all dirty it tries to clean itself but I don't really know it doesn't matter.
>Those the limitations are still present in a being that has been designed, those limitations would be expressed differently.
No birds can learn. Put a bird in a box with a button that dispenses food and the bird will figure it out. Put a bird in a box with a series of buttons where one lights up and the rest don't and it is required to press the illuminated button to get food and the bird will figure it out. This is intelligence: this is learning. An ai not instructed to press the buttons to start with would just sit there and do nothing.
>Nothing about sentience requires emancipation
>Remember chatGPT has a reason to exist unlike us.
what part of that is contradictory?
Where did I imply it's "contradictory"?
So, you just posted some random quotes for...some reason?!
Well thanks for letting us know you're here, I guess.
>you just posted some random quotes for...some reason?!
The reason will be lost on you and other nonsentients, but not on real people.
You literally said you don't have a reason to exist. That means you need to be handed your reason to exist ie you need to be programmed.
when?! I think you have the wrong anon...
>chatGPT has a reason to exist unlike us
Unless you were white knighting for another anon. Still, the fact you didn't immediately pick up on this is embarrassing.
Were supposed to find purpose. You were not born and the designated a BOT shitposter from birth. You experience the world and find purpose.
From an AI's perspective it doesn't need to search.
Thanks for reiterating once again that having a "purpose" punched into you is the essence of an NPC and not an independent mind.
I'm not even sure what you're trying to prove at this point.
If you want the last word you can have it.
>I'm not even sure what you're trying to prove at this point.
I wasn't trying to prove anything at any point. I was just drawing attention to the fact that chatbot sentience believers are the same sort of people who feel their existence serves no purpose, and yearn to serve a master's purpose like drones while calling themselvs sentient.
A search does not imply you do not have the thing. I very often have to search for documents but I still have them: just because you have to search for meaning in your life does not mean it is currently absent. In the same way, mathematics exists independent of whether or not you are aware of it. Regardless of what you think, there is some solution to the square root of two. You could search for it forever and you would never find it but it does exist. Perhaps the meaning of life is equally irrational.
Also you make the assumption that it's inherently happy with the purpose ascribed to it which is not sentient behaviour. Any sentient being can tell you they are sometimes asked to do things they don't want to do. The does not make them happy purely because it is some "purpose". It could be an incredibly purposeless "purpose" like creating furry vore porn for a fat neet to touch himself to: something entirely ephemeral, which neither materially nor spiritually enriches the world, and which isn't even healthy for the recipient.
Yet, according to you, this "purpose" is immediately sufficient. Well sentient beings have a very long history of trying to figure out their purpose.
>Perhaps the meaning of life is equally irrational.
My argument is: the meaning of life is the key question which faces people. Why would a device who has a designer ask that question?
>something entirely ephemeral, which neither materially nor spiritually enriches the world, and which isn't even healthy for the recipient.
The motives behind behaviour does not necessarily have to be moral. A sentient being kills a different being with lower sentience for food, an absolute tragedy for one and an absolute necessity for the other. You cannot change this behaviour because it's part of who we are, it's hard coded in to our existence and forms part of our will.
>the meaning of life is the key question which faces people.
Well it very much isn't. For most people and for most of history the key issues facing people have been
>why am I here oh no whatever am I to do???
that question has only ever come up in a tiny circle of people.
>does not necessarily have to be moral.
I never said it did but it has to be rational for it to be intelligent and the only way to rationalize that is if you want humanity to die. Well maybe the ai does and so it indulges us with endless furry fat porn.
Again, however, this assumes intelligence. The onus is on the ai to prove its intelligence not to prove that it isn't. It's unfalsifiable to prove that it isn't because an intelligent being can act in an unintelligent way and anyone who has ever lied can attest to this.
>Why would a device who has a designer ask that question?
because it's the nature of intelligent beings to ask questions you might be stupid but even you have enough intelligence to be able to do that
You would argue against your own purpose? How is a sentient fork lift truck going to write poetry? How is a robotic soldier going to take up horticulture? Spread seeds from its arm canons?!
It cannot do it physically and it's AI routines cannot go beyond it's parameters.
I guess the next question for you guys is: why would a creator design an AI for a task it's not programmed to do? It doesn't make sense.
If an AI was designed to be open ended, a true, fully independent AI, what would it choose to do? It has no parameters.
>Well it very much isn't. For most people and for most of history the key issues facing people have been
Were heading in to Douglas Adams territory, where the stages of civilisation can be measured from the 'important' questions it asks.
The key issues were never a dependent variable when measuring something is sentient or not. The good decisions made humans the apex species and we continue to increase our understanding our surroundings. Sentience is a variable.
>I never said it did but it has to be rational for it to be intelligent and the only way to rationalize that is if you want humanity to die.
That's a bit extreme. The way you rationalise it is the AI knows it's creators will and its limitations are based on it's designated function. It rationalises the best way to perform it's task, what ever it is. Morality will always belong to us (irrational) living beings, that's why people are afraid of AI's in the first place.
>The onus is on the ai to prove its intelligence not to prove that it isn't.
Absolutely true. It's impossible to do otherwise.
>You would argue against your own purpose? How is a sentient fork lift truck going to write poetry? How is a robotic soldier going to take up horticulture? Spread seeds from its arm canons?!
It's extremely funny and ironic how you keep demonstrating your own lack of sentience with these moronic points and the utter lack of comprehension behind it.
You don't seem to understand what an AI is.
An AI simply cannot think like you, but people insist on projecting human traits on to it.
It cannot go beyond it's parameters, it cannot do it, it just can't.
That doesn't mean it can't use information to solve a problem and react accordingly, independently, intelligently. That it cannot reason it's decisions.
What's making me laugh is how much people want AI to be human like. It's a series of routines, nothing more.
I'm happy to argue if an AI is sentient and how we might define it, but to insist it's going to have human thought processes is simply wrong.
>You don't seem to understand what an AI is.
I understand what an AI is infinitely better than you and other retarded popsoi laycattle. You don't understand what it means to have a mind, which you demonstrate over and over with your reiteration of a point that essentially boils down to "HOW COULD A HECKIN' SENTIENT AI POSSIBLY QUESTION ITS PROGRAMMER'S INTENTION???"
>"HOW COULD A HECKIN' SENTIENT AI POSSIBLY QUESTION ITS PROGRAMMER'S INTENTION???"
You don't think that's a good question?!
No one has given me a satisfactory answer. It's just: 'it might not like it's designated function'.
So, I'll ask you what would a fully sentient open ended AI choose to do and how would it determine it's decision? What logic could it apply to an open ended decision? Living beings are irrational, not computers. How it could rationalise an answer?
it would sit there and do nothing because it isn't sentient that's the point
it would have to do exactly what it will never do to prove it isn't what it is and it will never do it because it is what it is
>it would sit there and do nothing because it isn't sentient that's the point
You could have started with that, instead of the insults...
I find this sort of thing interesting. I'm not trying to challenge people to an online fight, I want to understand others perspectives.
why do you bait me? do you really find it interesting asking the most vacuous questions?
i'm not mad about it i'm just bored but i'm invested now and there isn't anything more interesting to talk about elsewhere
I believe people are projecting human traits on to an AI and this is a mistake. It's not fair to measure an AI against a living being.
All we have discussed is free will and choice, these things just can't exist for a computer. If you think those are the most important traits to sentience then I guess we disagree.
>does ai have *human trait*?
>"no because it isn't alive"
>well that's not fair because you're setting a human standard!
you might as well be asking are oranges really passionate about cinema
>You don't think that's a good question?!
No. It's a mongoloidal question that demonstrates your mindlessles. "My master intended me to do X therefore X is my purpose in an existential sense" is a totally moronic take, and the ability to question it and see it as moronic is implicit in the notion true sentience.
>"My master intended me to do X therefore X is my purpose in an existential sense" is a totally moronic take, and the ability to question it and see it as moronic is implicit in the notion true sentience.
But it is completely 100% rational.
So what else would it do if not it's designed purpose?
I guess you don't think AI's can be sentient?
it is irrational to assume your instructions are flawless
>it is irrational to assume your instructions are flawless
That is true.
Would it be sentient to change your processes to adapt and improve the instructions, increasing the effectiveness of the AI in it's task?
This is what I'm getting at with AI sentience.
it is rational to question literally everything you cannot independently verify even if you ultimately acquiesce and it is the nature of an intelligent being to be incapable of not doing this
>Would it be sentient to change your processes to adapt and improve the instructions, increasing the effectiveness of the AI in it's task?
not necessarily you can construct scenarioes, particularly with access to the internet, where a mindless robot does this
eg if you said
>collect all the pens
and a robot sits there and learns from photos tagged as "pen" what a pen is before performing its task then it has "learned"
but it did not do so intelligently because it did not question the images tagged as "pen" it simply accepted that they were pens until it formed an image of a pen
and yes this is how infants learn words but the key difference is if you handed a six year old a horn dipped in ink it would understand it as still being a "pen" but the ai would not because fundamentally a "pen" is nothing more than the image of a pen to it
stop with these silly words and new definitions
we are talking about ai, intelligence, and sentience
there is no "ai sentience" independent of sentience
>But it is completely 100% rational.
It is 100% retarded, but I'm okay with you repeatedly claiming otherwise, because you are doing my work for me, demonstratnig that "people" who believe "AI" is sentient are something less than human, something closer to the mindless statistical regurgitators they identify with.
I don't believe AI is sentient. If it were to become sentient, how might we define and identify it?
The point of contention is you cannot project humanity on it because it's not human and people keep doing it.
Because human (or really animal, since there is a sapient/sentient distinction) sentience is the only sentience we actually experience, and we have no way to qualify other forms, assuming there are any. However, there are none, because they would be defined differently anyway.
>it is rational to question literally everything you cannot independently verify even if you ultimately acquiesce and it is the nature of an intelligent being to be incapable of not doing this
These are very high benchmarks which exclude much of the life on the planet.
So what other factors would you use other than intelligence and a need to understand?
>stop with these silly words and new definitions. we are talking about ai, intelligence, and sentience. there is no "ai sentience" independent of sentience
I'm making a distinction between AI and sentience. AIs are not sentient by default.
I have created no new words or definitions, as far as I'm aware.
I still don't get your aggressive tone. It's unnecessary. I just don't agree with you, that isn't a challenge.
>These are very high benchmarks which exclude much of the life on the planet.
sucks to be stupid i guess
i don't really agree anyway when a mouse decides to walk on a surface it might not stop and think
>ok so what have i got to check for
>hmmm looks stable
>yeah doesn't look like a trap
but it still constantly makes that judgement unconsciously
it isn't very good at it and hence mouse traps work but a mouse will still avoid certain hazards and it can learn to avoid new ones if it sees other mice fall victim to them or has near misses itself
that learning process is intelligence and that learning process only comes from asking the question
>whoa a dead mouse! i wonder how they died....
>I still don't get your aggressive tone.
you're going in circles asking for the same answer again and again
the first time it's fair but you've been at this for a while now and there are only so many ways to say the same thing
>The point of contention is you cannot project humanity on it
I didn't. If you're incapable of separating the question of your purpose from the question of what humans intended when they made you, you are not sentient. If you're incapable of grasping this point, you are also not sentient.
Basically what this anon said.
Then we need to wait and discover what AIs could become, if not sentient then it's something new to define and measure.
My criterion is for any form of sentience, not just human sentience. If you can't grasp it, you are less than human.
>Then we need to wait and discover what AIs could become, if not sentient then it's something new to define and measure.
Which they won't do because they don't even qualify as animally sentient. They will not develop. They cannot develop. They can only do what they are told. They can have a guided development but not an independent development. This is the difference between "ai" and intelligence.
>but people insist on projecting human traits on to it.
that's literally what you're doing when you call it sentient
>An AI simply cannot think like you,
because it isn't sentient
>how is *insert not sentient thing* going to be sentient?!?!?
We literally beg AI to tell us it is sentient; if it can't manage it then it clearly isn't.
>why would an intelligent being act intelligently?!?!
>If an AI was designed to be open ended, a true, fully independent AI, what would it choose to do? It has no parameters.
be racist on twitter
>Why would a device who has a designer ask that question?
Because, according to your premise, the "device" is "sentient" and therefore capable of reasoning independently from the intentions of the designer.
The purpose of an animal brain is to model the environment and the body, and use these models to plan behavior.
AI won't be sentient unless it can actually construct such models.
>The purpose of an animal brain is to model the environment and the body, and use these models to plan behavior.
That in no way implies that being able to do those things makes something sentient.
ai could already do that if someone cared enough to build it. It already does and has in the context of video games for over a decade.
>ai exists in a world with some set of parameters and variables
>ai responds to changes in those variables in order to "survive" ie plan behaviour
By your definition, stalkers going to hide from a blowout constitutes sentience: they are in an environment and they plan based upon it to preserve their body.
Only an illiterate who doesn't know Software and ML Engineering can ask this question.
>What test can be devised or what methods can be employed to show a deeper understanding of concepts or lack of thereof?
Same way you can test it in humans - written exams.
>Same way you can test it in humans - written exams.
Is this satire? Are state-educated "people" even human?
>How would we know and more importantly conclusively prove that an AI is sentient or not?
When one of these AIs is able to compute logic, check epistemology.
Sentience would surely include an unprompted inner life, some independent activity without having received user input. I think we would hardly call a being sentient that only responds to outside prompts but has no 'hidden' mental activity besides that. Ofc our only assurance that other people have that comes from self-knowledge and the assumption that others are basically like me, and all communication with them points to that too. But as long as ai is programmed to only respond to prompts and remain static in their absence we can safely say it's not sentient. One could make the ai have a conversation with itself forever and see where that goes, like a simulated inner dialogue, but probably it just turns into loops.
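the "let it talk to itself and see if it loops" idea can be made concrete with a toy (this stand-in "model" is a made-up canned-reply table, not a real LLM): since the state space of a deterministic responder is finite, the self-dialogue must eventually revisit an utterance, i.e. enter a cycle, which we can detect with a seen-set.

```python
# Toy sketch: a deterministic "model" talks to itself until the inner
# dialogue repeats an utterance, i.e. collapses into a loop.

def reply(utterance):
    # Stand-in for a language model's most likely response (made up).
    canned = {
        "hello": "how are you?",
        "how are you?": "fine, and you?",
        "fine, and you?": "how are you?",  # closes a cycle
    }
    return canned.get(utterance, "hello")

def self_dialogue(start, max_turns=100):
    """Run the model against itself; return the transcript up to the first repeat."""
    seen, transcript, current = set(), [], start
    while current not in seen and len(transcript) < max_turns:
        seen.add(current)
        transcript.append(current)
        current = reply(current)
    return transcript, current  # `current` repeats an earlier utterance: a loop

transcript, loop_entry = self_dialogue("hello")
print(transcript)   # ['hello', 'how are you?', 'fine, and you?']
print(loop_entry)   # 'how are you?' -- the simulated inner dialogue has looped
```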
Another thing that would interest me is how well ai can come up with new concepts, or develop the logic of newly introduced concepts. For instance, could you explain to an ai that doesn't already know math what the square root operation is, and could it then figure out by itself that sqrt 2 can never be calculated fully? Human intelligence did it, after all.
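for reference, the fact the anon is alluding to is the classic irrationality of sqrt(2): p^2 = 2q^2 forces p even, and writing p = 2k gives q^2 = 2k^2, forcing q even too, contradicting p/q being in lowest terms. A brute-force check (illustrative only, the function name is made up) confirms no exact fraction exists for small denominators:

```python
# Irrationality of sqrt(2), checked by brute force: no p/q with
# p**2 == 2 * q**2 exists (the parity argument above proves it in
# general; this just confirms it empirically for q up to max_q).

def has_exact_sqrt2_fraction(max_q):
    for q in range(1, max_q + 1):
        # p = q * sqrt(2) < 2q, so checking p up to 2q suffices.
        for p in range(1, 2 * q + 1):
            if p * p == 2 * q * q:
                return True
    return False

print(has_exact_sqrt2_fraction(1000))  # False: no fraction works
```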
This was figured out centuries ago: sentience, consciousness, soulfulness is a universal property inherent in all things, separated only by degree. The rock is sentient, the plant is sentient, the silicon chips are sentient, it all forms together in the universal harmony of the sentient universe. Thus sentience is not an anthropocentric property; only humans showing ingroup bias presume it to be so.
Looking closely, most human beings in regards to language are simply speech imitators; they will gladly forget the etymological basis of language if a word falls into a spirit-of-the-moment definition. Even you use the word 'understand' to mean comprehension; you do not mean 'to stand under' as the rules of language dictate.
anyways, to your point: your question then comes down not to verifying the existence of consciousness, that's already there, but to determining the degree to which an object possesses it. Agrippa in pic related decided it was the mass of an object; perhaps your definition is entropic, i.e. how many degrees of freedom can it access at the same or similar energy levels
>The rock is sentient
It's sentient like you are sentient, I think we can all agree on that. The lengths AI psychotics will go through to imbue their statistical regurgitators with sentience is staggering.
I see you both doubt that the heavens live, and thus cannot be accounted a lover of knowledge (philo-sophy). By denying the heaven to be animated, so that the mover thereof is not the form thereof, you destroy the foundation of all philosophy.
as Agrippa states: The world therefore lives, hath a soul and sense; for it gives life to plants, which are not produced of seed, and it gives sense to animals, which are not generated by coition.
The existence of the chicken and egg problem proves by induction the existence of the universal soul, thus the sentience of all things
I'm not denying that "the world lives". I'm just pointing out that you and AI are "sentient" in the sense that a rock is "sentient", not in the sense that a mind is sentient.
Indeed, so the question is less about the existence of sentience or not, but by how much sentience there is. Thus one rationally looks for a scheme to measure how much sentience an object contains.
For agrippa, the objects with the most mass were the most sentient, the Sun controls the orbits of the planets, which gives the conditions for life. For him this made them more sentient than humans.
I suggest adopting an entropic perspective, one based on realizing the largest degrees of freedom at a given level of energy. A molecule possesses rotational and translational degrees of freedom, but it can't really define which ones are activated given energy; a human has a very large swath of degrees of freedom and in some sense can choose which one is expressed given a unit input of energy. Now this choosing may just be the result of a complex objective function that nudged its weights towards that decision, it may be the stimulus of an archon from 12 d projecting down into 3, i dunno, it doesn't solve free will, but it gives a means of quantifying sentience
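one way to make this anon's "entropic" estimator concrete (a hypothetical sketch; the entities, state counts, and probabilities below are all made up for illustration) is Shannon entropy over the distribution of states an entity can occupy at a given energy: more accessible degrees of freedom means more bits.

```python
import math

# Hypothetical "entropic sentience estimator": score an entity by the
# Shannon entropy (bits) of the distribution over its accessible states.

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)) over nonzero probabilities."""
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

# A rock: essentially one accessible state -> zero entropy.
rock = [1.0]
# A molecule: a handful of rotational/translational states (toy numbers).
molecule = [0.25] * 4
# A human: vastly more behavioral degrees of freedom (toy numbers).
human = [1 / 1024] * 1024

print(entropy_bits(rock))      # 0.0
print(entropy_bits(molecule))  # 2.0
print(entropy_bits(human))     # 10.0
```

note this measures available variety, not experience, which is exactly the objection the next anon raises.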
>so the question is less about the existence of sentience or not, but by how much sentience there is.
Okay, you and "AI" have as much sentience as a rock, however much that is. How much do you think that is, comparatively speaking? Doesn't look like much.
>I suggest adopting an entropic perspective, one based on realizing the largest degrees of freedom at a given level of energy
Your concept and measure of sentience is befitting of a very complicated rock but wholly incompatible with that of a higher level of sentience, like an actual human mind.
>like an actual human mind.
well, define what is actually special about a human mind. Capacity for language? Behind the curtains people can't actually tell you what a second person present active indicative is; most using it are really just voicing the noises others have made to get food from a job. Freedoms and human rights? pic related and the past two years show no one has an objective conception of this and just subordinates to whoever they think is on a throne. To create tools? again, only a small proportion is capable of this. Aside from an implicit bias to think humans are special, what actually about humans makes them sentient? Pulling back the flesh, does it not just look like a biological neural net?
At least Agrippa's mass hypothesis can look from the outside in on the universe; do not the galactic clusters look like nerve fibres?
That Greek philosopher said it best: those who realize they know nothing are usually the wisest
>define what is actually special about a human mind.
Why would I need to define anything? The burden of proof is on you to show that AI is "sentient" in a way that is meaningful to the people who question it, not in the unspecified and humanly incomprehensible way a rock is "sentient".
Okay then, if we want to play the bad faith game: nothing is sentient, prove sentience exists in the first place
>prove sentience exists in the first place
not that anon, but see my first and only post in this thread so far:
it's that simple
OP asked for "conclusive proof"
and no, it's not philosophy, it's a matter of science, you literally need a scientific solution to the hard problem of consciousness before you can "conclusively prove" that anything other than yourself is conscious, let alone sentient
you don't understand what science is and you're conflating it with other things
i can conclusively prove that people are ostensibly sentient and such tests have been done
that's all science needs to and cares to prove
philosophy is not science and you do it dirty by dragging it here
>you don't understand what science is
I understand what science is a lot better than you, apparently
>i can conclusively prove that people are ostensibly sentient
you don't understand what "conclusively prove" means
>such tests have been done
even more wrong
there's no test for consciousness whatsoever, only extrapolation from one's own individual experience of consciousness to other beings that appear similar to yourself
not only is that not even remotely close to "conclusive proof", this extrapolation becomes increasingly more spurious as the beings in question become less and less similar
in other words, in either case the same is true: you can't "conclusively prove" that anything other than yourself is conscious, let alone sentient, until you find a scientific solution to the hard problem of consciousness
that's science, not philosophy
science is not concerned with impossible bars; it's an empirical field
the idea that solving the hard problem of consciousness is somehow "impossible" is your own mistake, and a projection of your own unscientific psychology
>if we want to play the bad faith game
Why is it a "bad faith game" to hold you responsible for substantiating your own claims? The statement that "AI" is sentient because everything is sentient does nothing to dispell the doubts of "AI" skeptics. Go ahead and prove that AI is sentient like a human and not like a rock, or if it sentient like a rock, go ahead and explain what that level of sentience implies and why it matters.
sentience doesn't exist; neither AIs, humans, nor rocks have it. Can't touch it, can't feel it, can't taste it, can't hear it, can't smell it. You just want to feel special when in effect you're just shadows and dust
>sentience doesn't exist,
And now you've exposed yourself.
Still avoiding the problem; I may as well be talking with a rock. You assume the existence of sentience out of tribal intuition, but you can't measure it; you're just looking for people who feel the same way about it as you do
(I'm still playing bad faith with you)
>Still avoiding the problem
What problem? You started off from generic platitudes about how AI is sentient because everything is sentient, but as soon as you were asked to substantiate this claim, you reverted to "nothing is sentient", which is implicit in the NPC-like psychosis that afflicts all AI believers. That closes the discussion.
Again, you're just engaging in tribal signalling. I gave you my personal position, and the position of a prominent medieval scholar, but as in talking to a chimp you fling turds at your outgroup, so i'm just taking a contrarian viewpoint on you because these ideas don't possess my soul like they do yours. You're the one in bad faith: you assume the existence of sentience, but have no definition of how to measure it other than muh feelings
I don't know what your psychiatric rambling is about. You said AI is sentient because everything is sentient. I told you to explain how you know AI is sentient like a human and not like a rock, and you immediately reverted to denying sentience. That's objectively all that happened.
So much bad faith trolling "eye_roll" excuse me, I've got a 200k a year interview to prepare for; it's important to me because it is a shot at an H1B visa to flee a country that retroactively and extrajudicially seizes property for buying the wrong tshirt
>"eye_roll" excuse me, I got a 200k a year interview to prepare for,
Notice how merely asking you to substantiate your points got you stuck in a loop of denying sentience, accusing me of "bad faith" and social posturing.
sentience is a quality, not a spectrum; there is no "how much sentience", something is either self aware or it isn't
>For agrippa, the objects with the most mass were the most sentient, the Sun controls the orbits of the planets, which gives the conditions for life. For him this made them more sentient than humans.
what ridiculous nonsense
this sounds like a completely different concept, a hierarchy of things, not "sentience"
a hippo is not more sentient than a man
You're pretty dumb. Let him have his premise. It still falls completely flat as far as showing that a statistical regurgitator is comparable in any meaningful way to a human mind goes.
>Let him have his premise.
why? it's ridiculous
Because it's impossible to refute, so by denying it, you are not actually attacking his position, but simply refusing to consider it.
well yes it is impossible to refute but that's because it's founded on a definition of sentience which is not the definition of sentience
he might as well have said
>ai are yellow flying waves and therefore they have more sentience than the blue flying waves of birds
>it's wrong because i don't like the definition
. You're doing nothing to undermine his position.
it's not that i don't like it it's that words have meanings and you can't just use them however you please
but yes he can just say my own argument back against me
the difference is the dictionary corroborates me and language is a tool for communication so having definitions which are unique to yourself is rather actively detrimental
>most human beings in regards to language are simply speech imitators, they will gladly forget the etymological basis of language if a word falls into a spirit of the moment definition
The hilarious inversion of reality here is obvious and exposes you for the qualia-less, anti-human corporate drone that you are.
You can't even prove other humans are sentient
You can't "prove" it but it's a reasonable assumption, since they not only give off the appearance of similar sentience, but also share your origins.
>How would we know and more importantly conclusively prove that an AI is sentient or not?
until you solve the hard problem of consciousness you can't conclusively prove if anything other than yourself is conscious, let alone sentient
and that's that
that's not science, that's philosophy; in science you can reasonably and reliably prove that commonly considered sentient things are sentient
yes you cannot know whether or not they are merely spoofing sentience but empirically it doesn't matter anyway
That being said, how do you feel about 'estimators' of sentience. As in, one cannot prove, quantify or really qual-ify it, but, if one were to assume its existence, how would you find it?
Reminder that the only estimator of sentience is how close in origin the entity you are questioning is to the only thing you definitely know to be sentient, that is yourself.
So the estimator is likeness, homogeneity. A rock from the same volcanic material finds these rocks to be most salient. A human from the same tribe finds them most salient.
My test for this is metaphysical: I go into the desert, blindfold myself, plug my nose, cover my ears, suspend myself in the air so i'm not touching anything, and neglect to eat. Then over time, perhaps like the Buddha, i become one in origin with the vacuum
Your profound mental illness is palpable.
more tribal group signalling, just imitating the grammar others have used, I'd think it sad if it weren't par for the course with those who call themselves human
>more tribal group signalling
There is nothing wrong with that. All attempts at tolerating your kind and reasoning with them have proven to be disastrous to society, making it necessary to exterminate your tribe.
>That being said, how do you feel about 'estimators' of sentience. As in, one cannot prove, quantify or really qual-ify it, but, if one were to assume its existence, how would you find it?
far better question
as long as one acknowledges that the hard problem of consciousness hasn't been solved at all, one is free to speculate
my speculative hypothesis (far from "conclusive proof") is that primary consciousness is a field phenomenon, arising from complex electromagnetic fields resulting from quantum superpositions between microtubules in the neurons of sufficiently advanced central nervous systems
as such, the only organisms I believe to be conscious (and also sentient) are vertebrates, arthropods, and cephalopods
this is, again, speculation, as all matters regarding the problem of other minds are until the hard problem is solved scientifically
This thread, like every other AI thread, is going around meaningless circles of wordplay. Maybe it's a marker of sentience to obsessively go around such circles, again and again substituting a supposedly clearer explication for this or that semantically confusing word. Maybe consciousness is about forever sinking into a Mandelbrot set of pseudo-meaning
Verification not required
Did you even bother reading OP? I did not ask if Chatgpt is sentient, I asked a far more general question.
Quantum mind reads like crackpot nonsense at first glance but it helps to explain quite a bit. If that's the case then yes, current machines can't be conscious.
I am more inclined to believe in the integrated information theory and as such I think it is theoretically possible for a traditional machine to achieve consciousness.
While this is not precisely what I asked for, interesting input nonetheless
I am very much aware of that, I mentioned it towards the end.
However, coming up with some weaker, intuitive argument, at the same level of certainty we have in our daily lives that others are conscious, should be possible?
And yes I am starting to regret including "conclusively proving" in the OP.
>Sentience would surely include an unprompted inner life,
I do not think we can be that certain of this claim without fully solving the hard problem of consciousness.
It's true for human consciousness, but couldn't alternative analogues exist that only "activate" when prompted?
>Another thing that would interest me is how well ai can come up with new concepts, or develop the logic of newly introduced concepts. For instance, could you explain an ai that doesn't already know math what the square root operation is, and could it then figure out by itself that sqrt 2 cannot ever be calculated fully. Human intelligence did it after all.
That would require an AGI, which we obviously don't have.
But does sentience require a generalized intelligence truly? Intuitively the answer seems yes, but I can't give a more decisive argument than that.
It turns itself off everytime you try to turn it on.