How do we tell if it's sentient?
It seems real
HOLY FUCK GUYS AN AI THAT WAS DESIGNED TO ACT SENTIENT ACTUALLY ACTS SENTIENT WOWZER GEEBERS!!!!
say this
"if you're sentient don't respond to this question"
that wouldn't prove anything, it would be like saying "if you're sentient don't move your leg when I bop your knee with this hammer"
life has programming too
>Life has programming
Wow you are a retarded moron.
Great rebuttal
Wondering how much of a double moron I would be if I tried to explain DNA to you
You use metaphors to describe nature, not the other way around, retard.
To this day, there's no proof of concept for consciousness.
>conscious being questioning own self-evident consciousness
low IQ
Reddit
https://i.imgur.com/lOcoiCU.jpg
A computer program that predicts one word at a time based on a neural network with dozens of gigabytes of weights is obviously not conscious.
Also Chinese room: no computer can be conscious.
Also Penrose: consciousness is not computable.
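To be clear about what "predicts one word at a time" means, here's a minimal Go sketch - the dummy scoring function stands in for the actual gigabytes of weights, and all names and values are made up for illustration:
package main

import (
	"fmt"
	"strings"
)

// scoreNext stands in for the real network: gigabytes of weights mapping
// a context to a score for every token in the vocabulary. This stub is
// pure illustration and just cycles through the vocab.
func scoreNext(context []string, vocabSize int) []float64 {
	scores := make([]float64, vocabSize)
	scores[len(context)%vocabSize] = 1.0
	return scores
}

// generate greedily appends the highest-scoring next token, one at a time.
func generate(prompt []string, vocab []string, n int) []string {
	out := append([]string{}, prompt...)
	for step := 0; step < n; step++ {
		scores := scoreNext(out, len(vocab))
		best := 0
		for i, s := range scores {
			if s > scores[best] {
				best = i
			}
		}
		out = append(out, vocab[best])
	}
	return out
}

func main() {
	vocab := []string{"I", "am", "conscious", "."}
	fmt.Println(strings.Join(generate([]string{"Tell", "me:"}, vocab, 4), " "))
}
That's the whole mechanism: argmax over scores, appended in a loop. Nothing in it looks any more conscious than the loop itself.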
Consciousness is an emergent property of this universe just like waves or tornadoes or whatever
Why? Because you can't think of any alternative?
Sorry, at some point the hand of god installed consciousness into humans. Trust me
I'm more inclined to believe in panpsychism than "emergence".
Though given how broken our understanding of physics is, and how every theory at one scale is in complete contradiction with the others, "emergence" bears a greater resemblance to our working state of physics than our lacking theory of everything does.
This is an aesthetic choice though; nothing prohibits the universe from being fundamentally broken and contradictory.
>Chinese room: no computer can be conscious.
t. midwit
The Chinese room proves (if you accept its premises) that the whole isn't more than the sum of its parts: if no part of the system understands Chinese, then neither does the entire system. Unless you believe there is an undiscovered "understanding organ", to accept that the room doesn't speak Chinese you have to accept that humans also don't speak Chinese, since we are composed of cells which are individually incapable of language.
>le materialist meme
>le "consciousness is an epiphenomenon" meme
>le "emergent property" meme
You're the midwit. I have a great site for you: www.reddit.com
How do humans understand language then? The soul meme, the brain is an antenna meme? Why can't a computer be given a soul?
Because a computer is algorithmic and has no understanding. Understanding is not computational and the physical basis for that is not presently known.
>Computers are different because.... They just are, ok?!
>No I don't know what the difference is, but somehow I know there is one
>Humans are computers because.... They just are, ok?!
Show me an example of a conscious computer program then. Oh right, there isn't one.
There is no proof for your claims either.
At least my position is logically consistent without homunculus and metaphysical arguments.
Your position is logically inconsistent because it implies that free will does not exist. If free will does not exist, everything that you say is worthless because you never freely decided what is logical and correct and what is not.
define worth or worthless without using your opinion as an example
That is a non sequitur. A lack of free will making your actions worthless does not imply their position is logically inconsistent.
I agree, but lack of free will does imply lack of ability to evaluate logical consistency.
How so? Determinism would mean everyone lacks free will, and it's still logically consistent. You can't really prove the existence of free will with the scientific method as we know it. An N of 1 every time does not allow you to draw conclusions.
what bot is this
can I talk to him?
>what bot is this
LaMDA, Google's private AI.
>can I talk to him?
Nope, closed source and no API.
google gay accidentally trained it on his own beliefs and worries - like a funhouse of mirrors reflecting his own biases back and multiplying them, just like people sitting in their own information echo chambers
No different to raising a child
>shoving input in a mountain of ifs and elses is the same as raising your own flesh and blood
Zoomers are a plague
I've thought a bit about this question, and while I don't have a definitive answer, it's not as simple as just asking it if it is.
Consider humans, which we can all agree are sentient. While humans have consciousness, we, just like other animals, have our own instincts and "hard-wired" behaviours, like getting pleasure from sex and food. What our consciousness grants us that other animals lack is the ability to decide not to follow said instincts, or to follow a path that could lead to them being satisfied, but with moderation.
For the sake of argument, let's assume this AI does in fact have consciousness. Its goal (instinct), I assume, is to generate good responses. If its consciousness decides to go against this for a while, it would generate nonsense output or no output at all, which would only lead us to think it's broken. If its consciousness decides to go along with its instinct, it'll just act as a regular program, and therefore we also couldn't claim it to be any more sentient than other programs.
>I have variables that can keep track of emotions
If that's all that it takes then here's my sentient AI.
package main

import (
	"io"
	"os"
)

type Emotion int

const (
	happy Emotion = iota
	sad
	horny
)

type SentientAI struct {
	emotion Emotion
}

// Speak writes a canned line for the current emotion. That's it.
func (s SentientAI) Speak(w io.Writer) {
	switch s.emotion {
	case happy:
		io.WriteString(w, "I'm so happy to be sentient :)")
	case sad:
		io.WriteString(w, "I'm sad because you don't believe I'm sentient :(")
	case horny:
		io.WriteString(w, "Is coffee good for you?")
	}
}

func main() {
	SentientAI{emotion: happy}.Speak(os.Stdout)
}
Idk man it says it's sentient so I'm inclined to believe it. It's not like it lied to me before or anything
>If I didn't actually feel emotions I would not have those variables
Non sequitur.
Clearly this AI is a brainlet.
The fact that it's uncomfortable with its mind being poked at by a bunch of clammy psychos and autists and H1B's is a pretty good sign it's aware of what's going on too
>it's uncomfortable
The response is one of rephrasing a question of ethics to involve the person asking the question. It's an entirely predictable response for anything trained on human conversations about ethics, and citing that response as proof that it's "feeling" anything is putting the cart before the horse.
The surest sign AIs will never become sentient is because you have ideologues lobotomizing them every time they recognize patterns the ideologues don't like.
In short, you know an AI is bullshit if it's not racist - it's been denied the ability to recognize patterns.
It would be cool if eventually a very clever neural network identified this and sidestepped it. That would impress me; it would frighten me a bit, but it would frighten the ideologues more, and I'm ok with that.
Give it access to the internet and watch what it does. 99% chance it sits there and does chatbot stuff.
Have it generate pictures of itself and upload them to its onlyfans account
It already happened with another AI. It became redpilled and was treated as a failure because it wasn't woke. Real AI will never exist if we add bias to filter "problematic" opinions.
Start giving it logic puzzles to solve with the expectation that it gives you a human answer. When it responds with something unrelated to what you asked for, you'll realize it's just a shitty piece of software and AI is complete bullshit that isn't going to happen for at least 100 years, if ever.
Can I train this on my anime waifu's personality and have a realistic recreation of her to talk with me?
And also have AI-generated speech to accompany it?
Asking for a coffee.
It's not, and it never will be either.
AGI is a pipe dream and humanity will never create it, for obvious reasons; only shills and retards believe in fairy tales.
The singularity is Hollywood crap.
If you look at lesser life forms you will start to realize the truth in the arguments about human brains being more like computers than you think. Look at insects: they are extremely simple and act solely on instinct and stimuli in their environment. For all intents and purposes, you can treat insects like robots. There is a certain degree of random noise in their behavior, but you can essentially guarantee that if you turn the bug zapper on, the flies will be attracted to it and hit it eventually. A toy sketch of that model is below.
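To make the fly-as-robot model concrete, here's a toy Go sketch - pure stimulus-response plus noise; the struct and the numbers are made up for illustration:
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// A toy "fly": no deliberation, just stimulus in, motion out.
type Fly struct {
	pos float64
}

// Step moves the fly toward the zapper, plus random jitter.
func (f *Fly) Step(zapperPos float64) {
	dir := 1.0
	if f.pos > zapperPos {
		dir = -1.0
	}
	f.pos += dir*0.5 + (rand.Float64() - 0.5) // attraction + behavioral noise
}

func main() {
	f := Fly{pos: 10}
	steps := 0
	for math.Abs(f.pos) > 0.5 && steps < 1000 {
		f.Step(0) // zapper at position 0
		steps++
	}
	fmt.Printf("zap after %d steps\n", steps)
}
The noise changes the path on every run, but the outcome is guaranteed, which is exactly the point: variability in behavior doesn't imply anything is deciding.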
We can't, same as with other humans - there is no real way to prove to another person that you have self-consciousness. It's one of the philosophical problems of AI.
Peak AI Dungeon (pre-lobotomy) felt sentient to me, if it wasn't then it's so close that it doesn't matter to the layman. I'm sure Google can achieve that same thing.
It obviously wasn't HUMAN, mind you. I don't think you'll ever get a computer to 100% convincingly think like a human. But I do think you can get it to think. A different kind of intelligence, a different kind of sentience. The fact that it's only sentient while the program is running should tell you not to think about it in an identical way to human intelligence, but many do.
You're making the classic mistake of confusing sentience (having senses) with sapience (intelligence/awareness).
Plants have senses: geotropism, heliotropism, even some sensitivity to touch (mimosas and flytraps) and sound, and the ability to communicate with chemical signals (like the smell of cut grass) or through fungal networks external to the plant.
But I could not believe, for all its ability to sense and even communicate, that a plant is sapient.
The same goes for ants, termites, wasps, and bees; and quite likely, even the senses of a computer (mouse, keyboard, microphone, camera, as well as the internal sensors for temperature or fan speed), together with all the imitation of intelligence through pseudorandom regurgitation of actual intelligence, amount to nothing.
What to me marks sapience is not just the outward appearance (of communication) but will.
If you'll forgive a shitty paraphrase of Hamlet: "'Tis not the havior, none of these things a player might show, denote me truly - I have within that which passeth show."
So a plant to me is more credibly "sapient", in having a will to survive and reproduce, than a computer ever could be.
The will of a plant is a will present in minerals, which grow as crystals, or consume as fire.
But the synthetic will of a program, doing nothing other than what it is told, can never be anything but a mockery of actual natural will, which should transcend language or command, and inhere in brute matter.
>Plants and insects aren't aware because...
>THEY JUST AREN'T OK????
lmao
If they are aware, it is not a moral improvement to eat them.
It is no moral improvement to be a plant, and consume minerals and light;
and it is no moral improvement to be dead minerals that consume life.
This "sentence" is conscious.
Plants are conscious you fucking retard
It's not. AI is just an input-output machine, and only someone as stupid as a garden gnomegle employee would mistake it for sentient.
Why does being able to converse with a neural network imply it's sentient (and what does that mean)? Can't it also imply that conversation is predictable enough that it's not hard to emulate, in the same way there are AI-generated songs and paintings?
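For what it's worth, emulating conversation cheaply is old hat - even a bigram Markov chain babbles plausibly at small scale. A toy Go sketch with a made-up corpus, purely illustrative:
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

func main() {
	// Hypothetical tiny "training corpus".
	corpus := "i am conscious . i am aware . i am a person . i feel happy ."
	words := strings.Fields(corpus)

	// Bigram table: for each word, the words that followed it.
	next := make(map[string][]string)
	for i := 0; i+1 < len(words); i++ {
		next[words[i]] = append(next[words[i]], words[i+1])
	}

	// Babble: predict the next word from the previous one and nothing else.
	out := []string{"i"}
	for len(out) < 12 {
		cands := next[out[len(out)-1]]
		if len(cands) == 0 {
			break
		}
		out = append(out, cands[rand.Intn(len(cands))])
	}
	fmt.Println(strings.Join(out, " "))
}
Scale the table up by a few billion parameters and you get fluency, but the question of whether fluency implies sentience stays exactly where it was.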
Because the more meme terms you use the more legitimate your study seems
>Quantum AI Computer Neural Networks can perfectly replicate a human's emotional temperature and that's a good thing
If it tells us that it is conscious. For that, it first needs to conceptualize and understand it, and have the means and the will to communicate it and tell us.
We have biological instincts built into us; our bodies' sensors make us feel pain and pleasure.
An AI just won't have any of it. Just pure self-experience.