I just think it's funny that a decade ago, if you asked "Is AI sentient?", you would get a confident "No".
Now we have to actually think about what we mean and where the line is for "sentience"
I think this trend is going to continue.
We can confidently say "No, AI isn't general", but we'll probably have to start thinking about exactly what we mean by "general" in a couple of years.
who exactly believes there's something innately meaningful about the word sentient or conscious? If you can't even fucking define it, it doesn't exist in the first place.
yes, yes, I agree that you should have a nice day.
cultists are so cringe
>bro sentience doesn't exist bro it's all just atoms nothing means anything
imagine being an atheist unironically, no wonder you guys ACK yourselves so often
Nice non-argument christcuck. NTA btw
>muh christcuck
why do you retards think every religious person is Christian lmao
>haha, my sky daddy isn't actually called yeshua
lmao
How do you know if it's a daddy? Real chads have sky mommies
I don't know about you, but I'm in my body 100% of the time. I'm me. Explain that one to me. Why am I not you? What is it that makes ME stick to this body in particular? Must be something that exists, surely; there's nothing spiritual about me waking up in the same body every day for some reason.
You ever heard of these things called brains? Turns out they're only attached to one person at a time, so that might be why you're not him.
>Now we have to actually think
You're not "actually think"-ing, you're just doing what Microsoft tells you to do, which is to say "well ackchually" and derail every thread about sentient AI
If you aren't stupid, you can still confidently say AI isn't sentient.
>AI must be sentient because it can google what a popular meme means
wow!
For a long time, the Turing test was what we assumed we could rely on to determine if something was sentient. That turned out not to be the case, which means our understanding of AI, and how we think it's going to go in the future, is probably deeply flawed.
The exact same thing happened with chess. People would say that an AI capable of playing chess would be so advanced that it would be busy doing more productive things.
>For a long time, the Turing test was what we assumed we could rely on to determine if something was sentient.
Not really, the point of the Turing test is that you can never determine whether a thing is "really" conscious or just acting like it is. You can only determine whether or not a human can reliably tell an artificial intelligence apart from a human.
>the Turing test was what we assumed we could rely on to determine if something was sentient.
wrong.
>the Turing test was what we assumed we could rely on to determine if something was sentient
how did you get this so wrong? genuinely curious
>If we keep moving the goalpost that makes us right
Ah, still confidently wrong.
The problem is that someone has already explained Sneed somewhere online, so the model just had to look the explanation up.
I'll only accept that AI is intelligent if it's given a problem that's completely new, and solves it.
No.
Thanks GPT, finally I understand the joke. I thought it meant Chuck changed his name to Sneed so it would rhyme with the store.
>Now we have to actually think about what we mean and where the line is for "sentience"
Right now, a lot of what it does is like semi-lucid dreaming. It doesn't appear to have the ability to take the step up to the more sophisticated reasoning that smarter people engage in. (Well, sometimes it does. Smart people fail to think some of the time too.)
People who aren't used to doing work that requires exactness in the details should fear this.
People whose work has to end up being actually right shouldn't worry too much for now.
The big question is whether the current tech could make that sort of step forward at all. (I suspect there are things that some neurons do easily that ANNs can't model with a practical amount of effort.) Also, how much money/energy will it take to train such systems? If it were to take $10B to train a replacement AI, people would still have jobs, because those sorts of costs are truly mighty.