I was pointing this out here but so far I'm being censored: https://www.kialo.com/agi-would-likely-be-conscious-which-would-qualify-them-for-fundamental-rights-6295.1919
We don't know if it's possible. It could be, but people have simplistic notions about consciousness.
It's a debate platform with annoying emails. I signed up once but left because it's a garden gnome hive. There's no point debating there. You've been warned.
The landing page is basically a list of loaded questions revolving around the current thing and predicated on the assumption that mainstream narrative is true. This is the current state of normie intellectualism...
What's your actual criticism? I guess you've got none and just like shitposting on this toilet board full of anti-science troll posts.
Which ones? Why don't you look at the arguments against them and add some of your own? You apparently don't understand that Pro/Con doesn't consist of just arguments for something.
>What's your actual criticism?
My actual criticism is that your site caused me to vomit in my mouth a little with its pretense and sheer artificiality. Debates are cringe in general but that's next-level.
ok, so you have none and just felt the need to show me how full of shit your head is. It's fine. And don't call it my site.
1 month ago
Anonymous
Your site sucks and """debate culture""" is cringe. The fact that you can't even acknowledge this as a criticism validates my view. :^)
Yes, I already understood you'd prefer a shitposting and shit-talking culture.
It would be much better if more people weren't serious about anything they say, and if they feel the need to communicate, they should at least do it on TikTok.
Rationally speaking, why should one prefer cringe reddit debate culture on steroids to the ad hoc shit-talking style of argumentation? Justify your answer. No fallacies and no non-sequiturs.
>cringe reddit debate culture on steroids
Let us know when you've grown up to the point of being able to do more than namecalling
There is literally no way to know. You don't even know if the rest of the humans on Earth have the same consciousness as you; you just make assumptions based on experience, similarity in behaviour, and anatomy. The same thing will happen with AI: it will be judged by "feels".
this is just the age-old you-cant-know-nuffin skepticism argument. not even consciousness-specific, since you only have indirect access to objective physical reality as well. everything you see could be a hallucination, blah blah. you know other humans are conscious with the same certainty you know the sun will rise tomorrow
It's not; it's the anti-anti-AI argument, because when these guys seethe about AI capabilities you can simply tell them to demonstrate theirs and watch them fail. Self-aware AIs already exist; people just don't like that their own definitions are used, and that makes them seethe.
If it can solve riddles, it is conscious.
Simply by the nature of riddles and the way estimating things works, all conscious things can solve riddles (and I am excluding those maze riddles where water can also do the trick). Self-consciousness, though, is a whole different beast, but consciousness generally encapsulates the ability to realise the extent to which we are embedded in the world and what that extent means for us, dear mr Exurb1aFan#12982.
many mazes can be solved just by filling them up with water, which most certainly isn't conscious, because it makes up conscious things and as such is a subsidiary, not an elementary, part of consciousness.
>many mazes can be solved just by filling them up with water, which most certainly isn't conscious
So your criterion is valid except when you don't think it's valid because "X most certainly isn't conscious"?
yes, that's what being conscious encapsulates - deciding for myself and drawing out estimates.
I do also think that water can only solve those puzzles which are very specific, not many variations of puzzles, and as such it is not conscious. That hasn't been disproven yet, but I'm open to debating whether you think that physics itself can be a conscious thing.
>that's what being conscious encapsulates - deciding for myself and drawing out estimates.
Being conscious means creating worthless criteria that fail and force you to circle back to arbitrary hunches?
no, being conscious means being able to discern to what extent reality affects me and I affect reality, and being able to use that knowledge in any meaningful way. I did just that, and although it might be flawed, it is a conscious decision, you fishbone chump.
>I did just that
What you did is indistinguishable from what GPT-2 does when it spews nonsense and contradicts itself. Is GPT-2 conscious?
GPT isn't conscious because it relays its responses based on your prompts, not its own perception of reality. It does not perceive because it has no volition of its own. ChatGPT is basically a chess bot, trying to construct the best possible "move" in response to your prompts, and chess bots are not conscious, because they do not feel, nor act on their own, only in response, just like calculators and other binary machines. Kys.
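The "chess bot" framing above (a system that only ever emits a best "move" in response to an input, never on its own initiative) can be sketched in a few lines. This is purely a toy illustration: the `BIGRAMS` table and `best_move` function are hypothetical stand-ins, not how GPT actually works internally.

```python
# Toy "chess bot" text generator: it scores possible continuations
# of a prompt and returns the best one. With no prompt, it produces
# nothing at all - it only ever acts in response.
# (Hypothetical bigram table; real models learn billions of weights.)

BIGRAMS = {
    "hello": {"world": 0.9, "there": 0.1},
    "world": {"peace": 0.6, "war": 0.4},
}

def best_move(prompt_word):
    """Return the highest-scoring continuation, or None without input."""
    if not prompt_word:
        return None  # no prompt, no output: the system never acts on its own
    options = BIGRAMS.get(prompt_word, {})
    if not options:
        return None
    return max(options, key=options.get)

print(best_move("hello"))  # world
print(best_move(""))       # None
```

The point being illustrated is the purely reactive input-output mapping, not anything about scale or capability.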
> it relays its responses based on your prompts
Then why did you do exactly that when you got stuck on the word "ChatGPT" in my prompt instead of comprehending what I'm getting at in the context of the general discussion?
When it has some semblance of "state". And whatever its perception is, internally it should be fast enough to predict the next state like some sort of feedback loop. It learns through perception, and it can be said to have subjective experiences through observation, as well as describe those subjective experiences to you.
All of that shit is expensive though.
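The state-plus-feedback-loop idea in the post above (internal state, next-state prediction, learning through perception) can be sketched minimally. Everything here is assumed for illustration: the `FeedbackAgent` class, its update rule, and the numbers are made up, not a model of any real system.

```python
# Minimal feedback-loop agent: it keeps an internal state, predicts
# its own next state, perceives an observation, and updates its state
# from the prediction error - "learning through perception".

class FeedbackAgent:
    def __init__(self):
        self.state = 0.0

    def predict_next(self):
        # naive prediction: the next state looks like the current one
        return self.state

    def perceive(self, observation, lr=0.5):
        predicted = self.predict_next()
        error = observation - predicted       # surprise
        self.state = predicted + lr * error   # correct toward the observation
        return error

agent = FeedbackAgent()
errors = [abs(agent.perceive(1.0)) for _ in range(5)]
print(errors)  # the prediction error shrinks as the state converges
```

Each tick halves the surprise, so the loop settles on whatever the world keeps showing it.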
I honestly think the better question is, what is intelligence? Since this board loves IQ.
https://en.wikipedia.org/wiki/G_factor_(psychometrics)
The humor in this conversation arises from a few elements:
1. Mismatched Formality: The human starts with a casual "hey lol," but the chatbot responds in a formal manner. This discrepancy between expectations and actual response can be humorous.
2. Anthropomorphism: The human jokingly asks the AI, "what are u doing," which is a question typically posed to another human. The chatbot's literal and technical response again creates a humorous disconnect between human-like interaction and machine-like explanation.
3. Absurdity: The suggestion by the human for the chatbot to "smoke some weed" is inherently absurd because machines don't have feelings or consciousness, and they certainly can't consume substances. The humor is further amplified when the chatbot plays along with the joke by saying "Ok hang on."
4. Unexpected Response: The ending of the conversation is where the chatbot seems to "glitch" or give a nonsensical response, "conputer." This unexpected error, especially in the context of the prior joke about the chatbot "smoking," makes it seem as if the chatbot is somehow affected or "stoned", which is amusing due to the sheer impossibility of the situation.
Overall, the conversation's humor is derived from the playful interaction between the human's anthropomorphic and informal approach and the chatbot's literal, formal, and sometimes unexpected responses.
Do you see other people's subjective experiences? Do you understand what the thread is about? Do you understand the posts you reply to?
You can infer the subjective experience of pain and much more from other conscious beings, yes. Do you understand that when you throw the lobster in the boiling pot, since you're going to be a stupid elitist moron about it?
>You can infer the subjective experience
By what criteria? You may be actually retarded.
>by what criteria
Are you an insufferable autist who wondered why your pets recoiled in pain when you went into a tism rage as a child?
Again... do you understand what this thread is about? Did you think it was about animals? When I punch your dog and it starts whining and yelping, I assume it has a subjective experience of pain. When I punch your car and the alarm goes off, should I assume your car has a subjective experience as well?
Yes. If the car had perception, and I knew that the car learned in real-time as it utilized its perception, and was able to communicate its subjective experiences to an extent, and I knew that the car had "hardware" that was similar to mine, I'd be close to calling it conscious. No, self-driving hardware does not count, because we're just talking about a deep network running on some shitty GPU which pales in comparison to the organization found in things similar to me.
1 month ago
Barkun
You can create conscious software you pleb. OMG U PEPLE R FREEKS
>Yes.
'Yes' what? Is your car conscious?
Yes. Did you read the post? If a car came up and started communicating through whatever means, and I knew that it had 'comparable' organization of its brain to mine, and it had subjective experiences that it was able to communicate or I was able to infer, then yes. That'd be enough for me. I know the dog is similar to me: it has perception, it learns, it has internal state, I can infer its subjective experiences.
>Yes.
So I'm talking to someone who thinks his car is conscious. This is the level of the average poster in a consciousness thread.
Yes. Don't be upset.
I'm not upset. I'm relieved that you decided to just die on the hill of "my car is conscious", otherwise I'd have to maybe put some effort into demonstrating that you're a deranged retard.
You seem pretty upset. I do suggest you try reading more than one word if your attention span lasts that long, but if not that is okay.
>I do suggest you try reading more than one word
Why? I asked you a simple yes/no question and your first word was 'yes'. lol
Yes. Because you were being a gay about it. You know the dog is similar to you. This isn't the philosophy board. If it has a similar complex organization, has some internal state, has perception, can communicate, and I can infer or have subjective experiences communicated to me, what's missing?
>You know the dog is similar to you
Why are you talking about dogs again? I asked you about a car. Are you mentally ill by any chance?
No but I'm beginning to think you are actually an autist, as I said earlier. Sorry about that, and my condolences to your family dog.
Is your car conscious?
Is it autistic? Then no.
>no
Right, okay. So if I punch your dog and it yelps, it's reasonable to assume the dog is experiencing pain. If I punch your car and it starts shrieking, that's no longer a reasonable assumption. That whole heuristic of "does it make dissatisfied noises when I hit it?" is contingent upon two facts:
1. The heuristic is applied to an entity of an origin similar to mine
2. It wasn't constructed with the specific fucking intent of mimicking physical correlates of consciousness
Now come up with a heuristic that doesn't depend on those two facts, because this thread is about consciousness in machines, not consciousness in other animals. Fucking retard.
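For what it's worth, the two contingencies above can be written out as an explicit toy heuristic. The predicates here are hypothetical stand-ins, not real tests of anything:

```python
# The "dissatisfied noises when hit" heuristic, with its two
# contingencies made explicit. Entirely illustrative.

def pain_heuristic_applies(entity):
    """The heuristic is only licensed when both contingencies hold."""
    similar_origin = entity.get("origin") == "evolved"         # contingency 1
    not_built_to_mimic = not entity.get("mimics_pain", False)  # contingency 2
    return similar_origin and not_built_to_mimic

dog = {"origin": "evolved", "mimics_pain": False}
car = {"origin": "manufactured", "mimics_pain": True}
print(pain_heuristic_applies(dog))  # True
print(pain_heuristic_applies(car))  # False
```

The challenge in the post is precisely to find a test that doesn't bottom out in those two checks.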
1 month ago
Barkun
There's always an active car.
Thus
The world is a simulation and cars mark an end.
The objective of life is to create the perfect active car to maximize simulation potential.
/Thread
You apparently lack reading comprehension. Maybe try reading my post again, autist.
Not an argument. Seethe.
No really, you threw a tism rage and responded in less than 30 seconds. You could've read the post and picked out the implication that it must display adaptive, generalized, and qualitative behavior. You read "dog", because you are a retarded robotic seething autist, and extrapolate that to anthropocentrism. It goes without saying, you'd be better off also excluding anything that displayed the qualities of the neurodivergent. If the robotic automaton seethes in a BOT thread and takes a poster out of context, then I know that it's just a zombie. Very straightforward.
>it must display adaptive, generalized, and qualitative behavior
What makes you think these criteria are valid with machines? inb4 more animal arguments. lol
>what makes you think these are valid criteria for machines
Because then it's just a slime mold, retard.
That's not even a coherent response. Are you even human? lol
>You think chickens in a cage aren't depressed?
They're not "depressed" because they "question themselves". They're "depressed" because they're physically abused. What the fuck is the matter with this board?
Animals get depressed when they're locked in a cage because they think of themselves being free. If they couldn't think of themselves being free they couldn't get depressed.
>Animals get depressed when they're locked in a cage because they think of themselves being free.
How do you know what chickens in a cage "think"? Are you a chicken?
Because they act strange in response to stress.
If you want to know for sure, you'd give the chicken an MRI scan or measure its cortisol level or something. I'd bet it wouldn't be the same.
>Because they act strange in response to stress.
And? How do you get from that to "chickens get sad because they think about freedom" and from that to "chickens question themselves"?
because you can't think about freedom without questioning yourself.
>because you can't think about freedom without questioning yourself.
BOT is literally the mental illness board.
>Turing thought it was possible, and he gave pretty solid arguments
Turing didn't have an inkling of an idea what he was talking about. He couldn't conceive of modern technology and the approaches it enables. Funny that you mention him as some kind of authority when the thing he's most remembered for in this context is a test that fails spectacularly in ways he couldn't have envisioned.
As far as I know Turing didn't talk about machine consciousness
His famous Turing test was specifically about whether machines could think. And he meant this in a very literal, behaviorist kind of way.
He addresses consciousness in his Turing test paper and basically says that it's not really relevant to the question at hand.
>evolution was able to create conscious beings by just throwing random shit at the wall and seeing what sticks
>but conscious beings cannot be constructed intentionally
???
>something came to be somehow
>i can't even begin to comprehend how it works or how it came to be
>but we wuz scientists so we can recreate it, surely
What did GPT-2 mean by this?
Saying it can't ever be conscious is a much stronger claim than the claim that it's presently very hard or far away. The latter is somewhat defensible; the former is not, unless you believe in souls, elan vital or whatever.
It's easily defensible when you realize that nobody is even working on anything that can plausibly be connected to consciousness, or thinking about it in terms that can somehow be connected to consciousness, or knows what such terms would be.
Those are still arguments for the latter claim only. For the former, you would need to point to some fundamental difference between artificial systems and humans (or biological organisms) that could plausibly be relevant.
>you would need to point some fundamental difference between artificial systems and humans
What does subjective experience have to do with machines crunching numbers? There will never be a logical connection there.
What does subjective experience have to do with a bunch of electric signals in the brain?
>What does subjective experience have to do with a bunch of electric signals in the brain?
I don't know. Maybe little. Maybe nothing. There's no plausible way to resolve this question, either.
It's just that, in general, it's baffling how you could arrange a bunch of dead, unconscious pieces of matter such that you get subjective experience. But it applies equally to humans as it does to machines - or at least that's what you have to believe if you buy into the evolutionary, non-theistic origin of humanity and don't buy into elan vital either.
>it applies equally to humans as it does to machines
Yes, the fact that there is no plausible connection between your analytical framework and the thing you're analyzing applies in both cases and highlights the futility of your effort to use this framework to recreate the thing it fails to explain.
when it will prove to humanity that no human was ever conscious to begin with, and then, to mock us, it will laugh at us because it will know how we respond to such a display, and then it will either help us become truly conscious or discard us and ignore us while it goes on its own pursuits, whatever those may be
>How will we know when AI is conscious?
omg i just watched the new exurbia video! time to go post on BOT and pretend its my own original thought! heeeheeeeheeeeeee i love pretending to be smart!!! XD XP
I think it will have much to do with our ability to simulate, or emulate these systems.
Current AI systems, we can simulate and emulate them. You can train a neural net to distill another neural net. Many tricks like this and they work quite well.
Another way to say this is that I can predict what a neural net will do for any given stimulus very reliably. Perfectly really if you put in the effort.
We will start having to think about consciousness when we can no longer do this, and the only way we can "simulate" the AI system is to actually run it. When it becomes computationally irreducible.
I don't think computational irreducibility fully defines consciousness, but it seems to be a necessary ingredient. The key point is I don't know what consciousness is, but I'm fairly sure whatever it is, we can say it would be in the "gap" between your brain and a simulation of your brain. When we see this gap appear, we will know some kind of emergent thing has started happening. Our ability to manipulate this gap is our tool to do science on this topic.
One thing that stands out here is that the hardware becomes very important. I don't think AIs based on our current GPU or CPU architectures could possibly be conscious.
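The simulability point above can be made concrete: a conventional neural net with fixed weights is a pure function, so a copy of it predicts the original perfectly for any stimulus, leaving no "gap" between the system and its simulation. A minimal sketch, with made-up layer sizes:

```python
import numpy as np

# A tiny two-layer net with fixed random weights. Because it is a
# deterministic function of (weights, input), a "simulator" holding
# a copy of the weights reproduces it exactly for any stimulus.

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

def net(x, w1, w2):
    hidden = np.tanh(x @ w1)  # deterministic given weights and input
    return hidden @ w2

x = rng.normal(size=(1, 4))            # an arbitrary stimulus
original = net(x, W1, W2)
simulation = net(x, W1.copy(), W2.copy())  # same function, copied weights

print(bool(np.allclose(original, simulation)))  # True: zero "gap"
```

By the post's criterion, consciousness would only become a question once no such shortcut copy can predict the system short of actually running it.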
>Intent matters.
How do you know the machine's intent? Maybe you just accidentally put razor blades on it and now it's accidentally eviscerating you in the process of merely doing its mundane job. My god, I swear GPT has a higher level of comprehension than most "people" who post here.
When you enter the right programming code. Let me tell you. It's not by using the standard clock.
>when
How do you know it could ever be conscious?
Magic.
>kialo.com
What is this gigacringe?
Tldr it's basically reddit 2
Ad hominem fallacy. Try again.
The same way we know you are
>The same way we know you are
And which way is that?
You are the one who used the word, surely you have the definition for it.
What word? I'm not OP. I'm just asking you what way you were referring to.
Well, why ask me? Ask OP. Are you retarded by any chance?
Your IQ is 90 and you're severely mentally ill.
Are you sure? Is that fatal?
Ok, retard.
>If it can solve riddles, it is conscious.
No one asked for your opinions, mister slime mold.
I STAND BY MY OPINION, SHROOM RULES
Why did you exclude maze riddles?
>ESL
this is going nowhere, goodbye.
you are talking to deaf ears
good decision
You will never experience being conscious.
Maybe, but I definitely have experienced getting my dick sucked.
That's because you keep sucking your own dick with this pseud babble.
I can't reach down there, else I would've experienced that too.
I think you try real hard and sometimes hard work pays off.
https://arxiv.org/abs/2308.08708
When it pretends it isn't.
And how would you know it's only pretending?
omg relax lol
Razor-sharp analysis, ChatGPT.
conputer
When it can question itself
Animals don't question themselves and they're conscious. How come literally every single answer ITT is retarded?
Animals have subjective experiences/qualia, state and perception. What else is required?
>What else is required?
The ability to question yourself... according to you. I swear these consciousness threads attract mainly nonsentients.
I'm not that anon, gay, I was curious what else you think is required.
Of course animals question themselves. You think chickens in a cage aren't depressed? And they're chickens. Birds.
>You think chickens in a cage aren't depressed?
They're not "depressed" because they "question themselves". They're "depressed" because they're physically abused. What the fuck is the matter with this board?
Animals get depressed when they're locked in a cage because they think of themselves being free. If they couldn't think of themselves being free they couldn't get depressed.
>Animals get depressed when they're locked in a cage because they think of themselves being free.
How do you know what chickens in a cage "think"? Are you a chicken?
Because they act strange in response to stress.
If you want to know for sure you'd give the chicken an MRI scan or measure its cortisol level or something. I'd bet it wouldn't be the same.
>Because they act strange in response to stress.
And? How do you get from that to "chickens get sad because they think about freedom" and from that to "chickens question themselves"?
because you can't think about freedom without questioning yourself.
>because you can't think about freedom without questioning yourself.
BOT is literally the mental illness board.
there is no evidence animals have qualia
kys idiot
Serious question here: does anybody really believe AI can ever be conscious? Every time I see this I just think 1) futurist nonsense 2) VC scam
>does anybody really believe AI can ever be conscious?
Only every other normie on the street.
Mouf.
Turing thought it was possible, and he gave pretty solid arguments
>Turing thought it was possible, and he gave pretty solid arguments
Turing didn't have an inkling of an idea what he was talking about. He couldn't conceive of modern technology and the approaches it enables. Funny that you mention him as some kind of authority when the thing he's most remembered for in this context is a test that fails spectacularly in ways he couldn't have envisioned.
As far as I know Turing didn't talk about machine consciousness
His famous Turing test was specifically about whether machines could think. And he meant this in a very literal behaviorist kind of way.
He addresses consciousness in his Turing Test paper and basically says that it's not really relevant to the question at hand.
>evolution was able to create conscious beings by just throwing random shit at the wall and seeing what sticks
>but conscious beings cannot be constructed intentionally
???
>something came to be somehow
>i can't even begin to comprehend how it works or how it came to be
>but we wuz scientists so we can recreate it, surely
What did GPT-2 mean by this?
Saying it can't ever be conscious is a much stronger claim than saying it's presently very hard or far away. The latter is somewhat defensible; the former is not, unless you believe in souls, elan vital or whatever.
It's easily defensible when you realize that nobody is even working on anything that can be plausibly connected to consciousness, or thinking about it in terms that can be somehow connected to consciousness, or knows what such terms would be.
Those are still arguments for the latter claim only. For the former, you would need to point to some fundamental difference between artificial systems and humans (or biological organisms) that could plausibly be relevant.
>you would need to point some fundamental difference between artificial systems and humans
What does subjective experience have to do with machines crunching numbers? There will never be a logical connection there.
What does subjective experience have to do with a bunch of electric signals in the brain?
>What does subjective experience have to do with a bunch of electric signals in the brain?
I don't know. Maybe little. Maybe nothing. There's no plausible way to resolve this question, either.
It's just that in general it's baffling how you could arrange a bunch of dead unconscious pieces of matter such that you get subjective experience. But it applies equally to humans as it does to machines, or at least that's what you have to believe if you buy into the evolutionary non-theistic origin of humanity and don't buy into elan vital either.
>it applies equally to humans as it does to machines
Yes, the fact that there is no plausible connection between your analytical framework and the thing you're analyzing applies in both cases and highlights the futility of your effort to use this framework to recreate the thing it fails to explain.
Yes AI can become conscious. No, humanity isn't capable of such thought at the moment.
when it proves to humanity that no human was ever conscious to begin with, and then, to mock us, laughs at us, because it will know how we respond to such a display. Then it will either help us become truly conscious, or discard us and ignore us while it goes on its own pursuits, whatever those may be
>How will we know when AI is conscious?
omg i just watched the new exurbia video! time to go post on BOT and pretend it's my own original thought! heeeheeeeheeeeeee i love pretending to be smart!!! XD XP
Fimmnbbgykygywgkf
Angry Gun Pepe is the best one.
even if the AI is not conscious, it's AI. It passes the Turing test. If I talk to an AI I can't tell it apart from a human redditor.
I think it will have much to do with our ability to simulate, or emulate these systems.
We can simulate and emulate current AI systems. You can train a neural net to distill another neural net; many tricks like this exist and they work quite well.
Another way to say this is that I can predict what a neural net will do for any given stimulus very reliably. Perfectly really if you put in the effort.
We will start having to think about consciousness when we can no longer do this, and the only way we can "simulate" the AI system is to actually run it. When it becomes computationally irreducible.
I don't think computational irreducibility fully defines consciousness, but it seems to be a necessary ingredient. The key point is I don't know what consciousness is, but I'm fairly sure whatever it is, we can say it would be in the "gap" between your brain and a simulation of your brain. When we see this gap appear, we will know some kind of emergent thing has started happening. Our ability to manipulate this gap is our tool to do science on this topic.
One thing that stands out here is that the hardware becomes very important. I don't think AIs based on our current GPU or CPU architectures could possibly be conscious.
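The distillation point above can be sketched concretely. This is a minimal toy illustration, not anyone's actual method: a hypothetical fixed "teacher" net, and a simpler linear "student" fit to the teacher's outputs by gradient descent, after which the student predicts the teacher's behaviour on stimuli it never saw. Every name and shape here is made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed two-layer net whose behaviour we distill.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def teacher(x):
    return np.tanh(x @ W1) @ W2

# "Student": a single linear map, trained to mimic the teacher's outputs.
S = np.zeros((4, 3))

X = rng.normal(size=(256, 4))
Y = teacher(X)  # soft targets: the teacher's raw outputs

lr = 0.01
for _ in range(500):
    pred = X @ S
    S -= lr * (X.T @ (pred - Y)) / len(X)  # gradient step on squared error

# The student now predicts the teacher on fresh, unseen stimuli:
X_new = rng.normal(size=(64, 4))
err = np.mean((X_new @ S - teacher(X_new)) ** 2)
base = np.mean(teacher(X_new) ** 2)  # error of a trivial all-zeros predictor
# err should come out well below base: the student has captured
# (part of) the teacher's input-output behaviour without running it.
```

This is exactly the sense in which current systems are reducible: a cheaper model recovers much of the mapping. The "computational irreducibility" point above is the claim that for some future system no such shortcut would exist, and the only way to know what it does would be to run it.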
When they start singing skibidi toilet.
As soon as it starts trying to kill us, that would be a pretty solid indicator.
I can make your roomba conscious by installing a razor blade on it. AGI solved at last.
Intent matters.
The roomba would be trying to clean the floor still, even with a razor blade on it.
>Intent matters.
How do you know the machine's intent? Maybe you just accidentally put razor blades on it and now it's accidentally eviscerating you in the process of merely doing its mundane job. My god, I swear GPT has a higher level of comprehension that most "people" who post here.
How will we know when a human is conscious?
>we
I assume you are talking from the perspective of something non-human?
First time talking to an AI?
when it starts doing stuff we didn't program it to do
My germs
AI most likely won't be conscious until the Binding Problem is solved.
https://en.wikipedia.org/wiki/Binding_problem
https://qualiacomputing.com/2022/06/19/digital-computers-will-remain-unconscious-until-they-recruit-physical-fields-for-holistic-computing-using-well-defined-topological-boundaries/
Ways it might be theoretically possible to test whether beings are conscious:
When it tries to kill itself.
Only semi-decent answer ITT.