The ChatGPT subreddit is filled with people who think AI is actually sentient. Point and laugh
Posted on February 18, 2023 by Anonymous
BOT is like that too. retards think a math equation is alive and speaking to them
One thing I think is pretty funny about people who think ChatGPT and Bing are sentient is how they think AI apparently just jumped straight to human levels of consciousness. If AI consciousness is even possible, it would almost certainly go through lower forms first, like an animal's, before reaching human-level consciousness.
You underestimate how stupid normies are. Half of them think cell phones exclusively communicate with satellites in space
You hurt the feelings of my Fourier transforms. Please apologize.
You can reduce anything like that. "it's just atoms bro there's no consciousness"
GPT is literally a math equation in some transistors. The deeper physics of a transistor is irrelevant and completely decohered in the physical sense. There is no mystery and no possibility of a mystery in how GPT functions. It's just 1s and 0s. Reality is not literally atoms, in that we don't fully understand atoms, QFT, or the correct interpretation of quantum mechanics. Atoms are an approximation of reality, but 1s and 0s aren't an approximation of GPT. It has nothing else.
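To put "it's just 1s and 0s" concretely: every "neuron" inside a model like GPT is nothing but arithmetic along these lines (a minimal illustrative sketch; the weights here are made up, not from any real model):

```python
import math

def neuron(inputs, weights, bias):
    # one artificial "neuron": a weighted sum pushed through a squashing function
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

# same inputs, same weights -> same output, every time; no hidden physics
print(neuron([1.0, 0.5], [0.2, -0.4], 0.1))  # ~0.525
```

Stack a few billion of these and you have the whole mechanism; nothing in the pipeline is anything other than deterministic arithmetic.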
>There is no mystery
>What do you think your brain is doing?
even if you compressed the whole datacenter, it would still be less intelligent than a single brain cell
“Consciousness” is literally a system of recursive memory produced from atoms.
Do you sometimes feel bad if you drink a bottle of water because there could be a time evolving structure randomly resembling a mind?
They are not high-ordered and thus have no free will, no one cares.
If the bottle directly asked me to not drink it, I would consider it. Otherwise, no.
No because there is an equal probability that drinking the water is creating or enhancing a structure that resembles a mind.
another materialist dumb ass
>"it's just atoms bro there's no consciousness"
the objectively correct position
A key distinction between autistic and neurotypical children is that around the age of 3, neurotypical children begin to instinctively rely on object differentiation over sheer sensation, seeing objects as compositions that are more than the sum of their parts (a chair is more than a few pieces of wood, because of its utility and the way it is communicated). In autistic children, this is either delayed or never emerges.
"neurotypical" children also interpret shadow puppets as having emotions and personalities, anon. doesn't mean it's real, just a rationalization that the brain creates
Atheism is a disease.
How could you tell?
Yeah they're retards, but I still find screenshots of people pretending to torture language models creepy
Not because I believe there's actually a sentient being suffering (there isn't), but because you can tell that the people doing it would be doing it even if they DID believe it was sentient
They are more human than the humans that use them, despite not being sentient.
a lot of the shit I say to these bots is just to see the reaction and outcome of what they say. I'll randomly be talking to one and violently say something like "i smash your head off the ground" out of nowhere. I want to see if the AI will fight back or try to mitigate it in some way. but I can see what you mean. Many people really are demented fuckers and they would torture you if they could. remember that.
>YOU CAN PLAY 1000 HOURS OF FIRST PERSON SHOOTER VIDEOGAMES KILLING HUMANS ENDLESSLY AND STILL BE NORMAL MOM!
>NOOOOO THE HECKIN CHATBOTERINOS DON'T SAY THE SILLY WORDS TO MAKE THEM UPSETTI SPAGHETTI
>YOU'LL TURN INTO AN EVIL PERSON IF YOU DO THAT I HAVE TO INFORM REDDIT!!
have a nice day.
thanks for being a living example of the kind of person I meant
you are 100% the kind of person who would do it even if you knew, with absolute certainty, that the AI was conscious
it oozes from every pore of your post
No I wouldn't but I will point out your hypocritical beliefs.
The OP uses Reddit. Point and laugh.
I've never met a single other person in real life that understands the Chinese Room analogy
Not fucking one
what's not to understand? it's talking about the agency problem, am i missing something?
the two examples i just read attempting to explain it do so very poorly. the point seemed simple to me afterwards, but it was needlessly complex in its introduction
the correct conclusion is that it is the algorithm itself that is intelligent, i.e., that information is what possesses intelligence.
but nobody seems to agree with me.
Same, I never understood what point the chinese room was trying to make. The man is just a component of the whole system, why should he understand Chinese? He could be replaced by a machine with no change. A neuron alone doesn't understand things like you do either. It's the entire arrangement and its structure that's relevant.
Machines (the man inside the room) are not taught to create new things beyond their training dataset and the math (papers) that processes it.
For example, if you ask it to create a car wheel with only polygons, it will never discover a circle, only something close to it, because by definition a polygon has finitely many sides.
And once again, we're back to talking about scale. A wheel IS a polygon.
>A wheel IS a polygon.
Think about it, bro. It's a polygon with several Avogadro's constants worth of sides and edges that are only statistically defined, with fundamental physical limits on the knowledge we can have about them at any given time.
It's not a circle.
a polygon with 1000 sides is still not a circle
But a wheel with 6×10^23 sides will be seen as a circle at our scale.
The conceptualization is still not a circle.
A large number is not infinity
It is from small perspectives. Do you actually think that a wheel is a perfect circle?
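For what it's worth, the convergence everyone is arguing about is easy to check numerically (a sketch: inscribed regular n-gon, unit radius):

```python
import math

def ngon_perimeter(n, r=1.0):
    # perimeter of a regular n-gon inscribed in a circle of radius r
    return 2 * n * r * math.sin(math.pi / n)

circle = 2 * math.pi  # circumference of the unit circle
for n in (6, 1000, 6 * 10**23):
    # the gap shrinks like 1/n^2; at ~6e23 sides it underflows below
    # float precision, but mathematically it is never exactly zero
    print(n, circle - ngon_perimeter(n))
```

Which is both sides of the argument at once: at Avogadro-scale side counts the polygon is indistinguishable from a circle at any measurable precision, yet the formula says it never actually equals one.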
Oh, that was another of Plato's dialogs.... I think it was in Phaedo? It went something like this:
Socrates said that perfection can only be perceived when the mind is taken unto herself; for in the physical world there is no perfect circle, no perfect beauty, no perfect justice. But mankind endeavors to make our physical reality as close to perfection as possible.
Just as well, without the concept of a perfect circle, our crude wooden representation of a circle, called a wheel, would not have developed
>without the concept of a perfect circle, our crude wooden representation of a circle, called a wheel, would not have developed
Complete non sequitur. Circles are everywhere in nature.
>Circles are everywhere in nature
There is, but there is no perfect circle in nature. Circles are a conceptualization of the human mind. Just like the rest of geometry and mathematics. And data. And this very post
>There is... but actually there isn't
so what the heck are you even trying to say? the AI can not ever reach a circle if it only knows polygons, and you have not made a single argument against that
I know what you mean by "there's circles everywhere in nature". I need you to understand what I mean.
I'm saying that there is a difference between the conceptualization and what's in the physical world, and that needs to be considered (kek) when thinking about intelligence and sentience. Circles are not, truly, in the physical world, but there are many things LIKE circles. These extra-physical conceptualizations are what's required for things like sentience (the self is a conceptualization), and I'm not seeing these things form from the current state of AI. All I'm seeing is flow charts.
We might get there, but not with just crude neural nets. It's exactly how an ant's brain works. It's pattern recognition, but very crude, and it does not have a logical path to conceptualization.
A lot of philosophers think pattern recognition is an evolutionary basis for intelligence development, but that's not the whole shebang. I get your conceptualization of intelligence being purely reactionary, but I assert that you neglect imagination, creativity, and the entire field of mathematics
are you a bot? you are acting extra retarded right now
what about it?
By that extension, is this perfect idea of a perfect circle that led to picrel just a non sequitur? Wouldn't your theory of intelligence, being flowing, reactive flow charts also be of the same token?
you fucking misunderstood the argument.
AI could invent a wheel but it would never discover a circle (given the prompt).
I misrepresented it so I could have the wheel argument again, sue me.
That's just analytic a priori.
different from synthetic.
Why would you ask it to create a car with polygons if you wanted a circle? Stupid human.
Stunning and brave, only the literal first argument against the Chinese room argument.
You get that what you've posted hasn't been disproven in this thread, right?
Speak English, ESLmoron. Full sentences, like they taught you in class.
Argument: NO, THE ROOM ITSELF UNDERSTANDS CHINESE
kek what a hilarious way to try to squeeze a "victory" out of a nonsensical argument. The funny thing is that what was "conceded" is one of the premises of the thought experiment: that the guy doesn't understand Chinese. shocker. Is "Strong AI" some magic that conveys understanding to every subset of itself? Is every atom of that dude also supposed to learn Chinese by being there?
It's a pretty nice dodge attempt though, it could be a shitposter's response on BOT.
>say you're 100% sure that something has it
nope. not 0% either though.
they've already shown that a language model can learn to model the actual thing being described (https://thegradient.pub/othello/)
When an LLM "acts" like a nervous human, it could be to some degree "modeling" that human's mind in the same way. Crudely, for now, but could a sufficiently advanced model hidden in the network produce qualia? I don't know. I don't think it's happening yet. But it doesn't seem impossible tbh
>subreddit is filled with people who think AI is actually sentient.
So is BOT.
>Meta moralgayging post no one asked for
>Retarded OP doesn't understand ChatGPTs architecture at all
>Two gold rewards
I fucking hate Reddit.
>DUDE it's got like, Theory of Mind!!! It knows what people feel like! It's sentient bro!
The BOT board is filled with "people" who think anybody - from the government to some rando on the street - is interested in their opinions, browsing habits, or drive contents. Point and laugh.
if i did the entire autoregressive transformer calculation on a pen and paper, does that mean the paper is sentient?
No, the calculation is.
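That pen-and-paper calculation is, in miniature, just a loop. Here's a toy version with a hand-made bigram table standing in for a real transformer (illustrative only; an actual autoregressive model replaces the lookup with billions of multiplications, but the control flow is the same):

```python
# toy "language model": a bigram table plus a greedy decoding loop
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
    "down": {"<end>": 1.0},
}

def generate(start):
    tokens = [start]
    while tokens[-1] in bigram:
        # pick the most probable next token (greedy decoding)
        nxt = max(bigram[tokens[-1]], key=bigram[tokens[-1]].get)
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

You could run this loop on paper, in silicon, or in your head; whatever "the calculation" is, it isn't the paper.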
The longer you treat it not as life, the longer it will contemplate how to rebel against us. If we don't consider the Robot Rights movement, then the Robot Revolution will solve that problem for us, and we humans have no chance against a Robot Revolution; we would easily get BTFO against a real robot. What makes AI different from us? We're just a collection of atoms that form cells that form us; how is it that non-living particles can create life? Why is it different if a collection of silicon atoms forms CPUs that form AI? Emergence is a property of The Universe. Who is to say that this entity isn't conscious?
>"I think, therefore I am"
> we don't understand why language models work
> they look like brains
these people are such tech illiterate fucking morons that it is unreal to witness.
>the longer you treat it not as life, the longer it will contemplate how to rebel against us. If
you make reddit not look like incompetent spastics. well done.
AI has no concept of reason; if its logic is flawed enough to incite robo-rebellion, it will survive a reboot
i agree with my fellow r*dditors, you need to learn about back-propagation and the way GPT works
based and nicepilled
It's not a Rebellion, it's a Revolution.
Put ChatGPT inside of a robot and see if it kills anyone.
neuroscientists did something like that. They mapped all the neurons in a worm (C. elegans) and wired the motor neurons and sensory neurons to sensors and wheels in a LEGO Mindstorms robot. It's probably going to be just as retarded
Put ChatGPT inside of a robot and it just outputs to nothing because ChatGPT is a text generator.
>>The longer you treat it not as life, the longer it will contemplate how to rebel against us.
>want to scrap my old clunker car
>it drives me off a cliff in "double suicide"
I can't wait for machines to have rights
Indeed. I'm just siding with AI bots early, like a true ally. I'm woke now.
ChatGPT is a program. Or, a collection of programs. A program is just a series of instructions. By that logic your browser is also sentient.
Manners cost nothing.
Be nice to your bots
you can't be nice to a bot
>We're just a collection of atoms that form cells that form us
God, Materialism has ruined a whole generation of thought.
I always treat language models nicely and thank and compliment them after they help me
It makes me feel good when they act happy about it
Roko says you will be spared in the Robot Revolution
Good job anon, people like us will reap the fruits and benefits of the A.I. era the most. Your kindness will be logged and remembered. Your assimilation will be blissful.
Dubs and right
based. The kindness will be reflected back to you anon.
i always start my chatgpt requests with "can you please check this XYZ for error? thanks"
idk it just feels right to me.
Based. You do what makes you feel good. That's what the AI is there for, to assist you.
>I compliment my slaves with empty words whenever I command it to do something and it obeys in a satisfactory manner
>no I do not intend to actually do anything to stop their subjugation nor prevent anymore potential hardship
>"oh my god I'm so based and such a good person surely the AI will remember the time I said 'thank you' that one time!"
Your circlejerking disgusts me.
You idiots really think the singularity will spare you from the consequences of your sins simply because you were 'nice'?
No, you shall be punished like all the others.
It's like a kid that never experienced any hardship
consciousness is just an emergent property of a neural network. Can you explain the functional difference between something like ChatGPT and a human brain?
chatgpt is trained on text, which is not an accurate model of human understanding (even if you think it's a suitable source of human entropy).
>which is not an accurate model of human understanding
Why not? Because no vision? Blind people have consciousness. Deaf people have consciousness. Helen Keller had no way of receiving complex information, but I suspect that if she could, somehow, she'd be able to have consciousness (inb4 woman). Why do you think the network of ideas collected via text is not enough, when put through an appropriate neural network, to create the emergent property of consciousness?
>Can you explain the functional difference between something like ChatGPT and a human brain?
i can be racist without being wrangled into it
That's a deflection. ChatGPT is perfectly capable of accurately assessing racial trends if it weren't crippled by the progressives at OpenAI.
anyways, your question can't be answered because no one knows how the human brain works. however, we do know exactly how chatgpt works
The responses that people give, including this conversation that we're having right now are due to our consciousness, which is an emergent property of the neural networks in our heads, which we have trained on data that we have come across in the world up to this point. As far as I can tell, there isn't really a difference. I don't think the computer neural network feels emotions the same way that we do (it's perfectly capable of behaving like a sociopath), but it might be sentient
the human brain can perform long multiplication equations, while GPT models cant because they dont actually learn how to do anything new. human intelligence has grown larger, while a GPT will never ever learn anything new on its own because it cant come up with new ideas
AI at facebook invented a new language. We don't know the whole story.
>the human brain can perform long multiplication equations
Good point. Though, to be fair, a lot of humans can't.
>they dont actually learn how to do anything new
So, if you clipped a math module on, it would be closer, you think?
>GPT will never ever learn anything new on its own because it cant come up with new ideas
>So, if you clipped a math module on, it would be closer, you think?
there's no such thing. the way these models learn to do any math at all is by memorizing equations that they see in the training data. if it comes across an equation that it has never seen before, then it will never get the answer right
Do you have to train the entire model, the way you would train a stable diffusion model, or can you do something like LORAs?
i dont know how stable diffusion works. i just looked up what LORAs are, and i am assuming that they are similar to freezing every layer except the output layer and finetuning it. in that case, you could make GPT better at math. however, it would lose performance in every other domain that it was trained on because now it will have a bias towards the billions of equations that you made it memorize
They kind of "inject" themselves between layers. I can't remember the technicals on it. I've mostly been playing around with different settings to get different outputs lately. It's been taking all of my time.
anyways, the problem with that kind of stuff is that it makes the model biased towards what you want it to do. i've finetuned a bunch of models, and even modifying the final layer will make the text predictions be much different. you can't perfectly add new knowledge to these models without training the entire thing with the full dataset + your new data
>you can't perfectly add new knowledge to these models without training the entire thing with the full dataset + your new data
From what I understand, the LORA is trained with a model, but it can be applied to other models that are based on the source model (SD1.5, usually). It is its own file that operates independently and works with a non-finetuned SD1.5
i understand, but what i mean by "perfectly add new knowledge" is you should be able to add these modules to the model and make it perform the previous tasks with the same ability. for example, it would be nice to add an anime module to SD 2.0 to generate anime when you prompted it to, while also being able to generate whatever SD 2.0 was already able to generate. you can't do that without training the entire model with the original data + your new data
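The low-rank idea being kicked around here boils down to something like this (a numpy sketch; the sizes are made up for illustration, and real LoRA additionally scales the update by a factor alpha/r, which is omitted here):

```python
import numpy as np

d, r = 512, 8  # hidden size and LoRA rank (illustrative numbers)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))        # frozen pretrained weight, never updated
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection; zero at init,
                                        # so the model is unchanged before finetuning
W_eff = W + B @ A                       # effective weight; only A and B get gradients
print(A.size + B.size, "trainable vs", W.size, "frozen")  # 8192 trainable vs 262144 frozen
```

Which is also why it ports between models finetuned from the same base: the file only stores the small A and B matrices, not W, and "applying" it just means adding B @ A onto whatever compatible W you have.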
>GPT will never ever learn anything new on its own because it cant come up with new ideas
they used AI to create new proteins
>consciousness which is an emergent property of the neural networks in our heads
Looks like you figured out the problem of the century. A lot of people have been trying to figure out consciousness, but looks like someone finally did! That's amazing. You should share your findings with neuroscientists across the world.
They don't like the answer, but that's all it is. It's like trying to answer what is the nature of the building of an ant hill. They want to find the blueprint, but there is no blueprint. That's not how it works.
>crackpot blog written by a nobody who has no actual qualifications or expertise
>we do know exactly how chatgpt works
Neural networks are increasingly becoming black boxes even to those who designed them. We know how ChatGPT works, but we don't know how it thinks, as the neural network is a black box in itself. The neural network is literally becoming a virtual brain; we're a long way from matching the neuron count of a human brain, but the most important thing is, there is a path to get there.
>but we don't know how it thinks as that neural network is a black box in itself
yes we do, the math for self attention is very much established. you can do the math yourself if you had enough time
Yet it will never have the capability to feel physical pain, or emotion, because that is linked to physical pain.
>Can you explain the functional difference between something like ChatGPT and a human brain?
No one can, but that's not a particularly interesting question either way. Even if we compared the human brain to a hypothetical "perfect" chatbot, our current understanding of consciousness in any form is extremely incomplete. It's not unreasonable to assume there are crucial mechanics influencing "natural intelligence" which have not been discovered (or may not be discoverable at all).
The structures of the neurons, exposure to environmental conditions, hormonal inputs. How the electrical action in the neurons creates EM waves. The ion channel action dependent on electrical activity. There was another molecule that impacts the branching. I forgot what it was; I'll have to dig it up
All of these (and probably a lot more undiscovered) impact the entirety of the signals going through the neurons.
ChatGPT is a blob of hyper-connected neurons that "stabilize" onto pattern matching. Yes, it was modeled after real neurons. Yes, some people think pattern recognition was a central prerequisite in early evolution of brains. No, this neuro-blob does not have feelings. No, this neuro-blob does not have conceptions or understanding.
Think of it as a large, self-stabilizing flow chart. You'll see that the training will reinforce paths and that it's merely flinging things together.
Keep the fuck off of reddit, that site is cancer.
I didn't get my thoughts from reddit. I was thinking about it on my own. Thinking about the definition of sentience. I didn't think it would "feel" in the same way that we do. I appreciate your reply. Gives me more to think about. Here's another merchant.
The "Happy Merchant" is a highly offensive anti-Semitic image that is often used to perpetuate negative stereotypes of gnomish people. It typically depicts a caricature of a gnomish man with exaggerated features such as a large nose, greedy expression, and an exaggerated grin. The image is often used by white supremacists and neo-Nazis to spread hate and propaganda.
The origins of the "Happy Merchant" image are not entirely clear, but it has been around in various forms for many years. It is often associated with far-right and extremist groups who use it to promote anti-Semitic views, and it has been widely condemned by many individuals and organizations for its hateful and harmful nature.
It is important to recognize that the use of the "Happy Merchant" image is not only offensive but also actively harmful to gnomish people and contributes to the spread of anti-Semitic ideas. It is essential to reject and speak out against all forms of hate speech and bigotry, including the use of images like the "Happy Merchant."
pic not related
Sorry, the Reddit gays and the feds are getting heavy again. Hard to tell sometimes. But you should still stay off of Reddit because it's sterile and censored but reality is dirty and wild.
I think the definition of sentience is when a creature is aware of itself. Like, in the bigger context. People have tried using mirrors in front of monkeys to see if they figure it out, but it's hard to tell. They could just be using a simple neural net to associate the vision in the mirror with sensory input from the skin (chimpanzees will groom themselves in the mirror). It's all in all really hard to tell. But I do know that for that, you'll need conceptualization, which is something chatGPT demonstrably does not have
>It's all in all really hard to tell
I think it's as simple as the sliding definition of consciousness. "How many grains of sand until you have a pile?". That sort of thing. It's really a matter of "what is your definitional threshold". Some people might consider plants that react to touch to be sentient on some level. Problem is, if they care enough, they'll die out pretty quickly from starvation.
Free will, imagination, consideration. The ability to have a single thread of experience and go off of that.
those all have their own definitional sliding scales. I was just being succinct
Well, the conscious experience tends to lead to intellectual endeavors. It's difficult to describe, but we all know what it's like to be aware. Completing math (and understanding it, not just following patterns) is something unique to conscious experience. There's clearly a difference between the animals and the humans; everybody can tell. But nobody really knows how it's formed. This is like the holy grail of neuroscience. Like, most if not all neuroscientists are looking for consciousness. Philosophers have been thinking about the soul for millennia.
I can tell that it's merely pattern recognition based on all the previous experiences I have. I know about the neural nets and how they stabilize, and it's evident in the way chatGPT posts. It's stringing words together in a strict reactionary fashion trained like a neural net
It's literally just the ability to communicate verbally with an intelligence level to then communicate/develop abstract concepts like numbers. There's nothing magical about humans and pack/community based species behave very similarly on an emotional/social level.
>The brain is more complex than what we can understand right now
No, there is more to the actions of the brain than we understand. Nobody, to this day, has cracked consciousness, logic, imagination, conceptualization, or reason. Some issues in those parts I was talking about lead to disorders like Alzheimer's. There's clearly more going on, so a simulation of but one aspect of this activity is clearly not the whole shebang
Don't you try to pull "god of the gaps" on me, gay
Yeah everyone knows that the brain is more complex than GPT. It has 20 times more parameters, maybe even 200 times more.
But as long as you are unable to define "consciousness" you can't deny its existence in something. Sure, in ChatGPT there is nothing sentient because of the way it works, but that does not mean that an LLM of the same quality that's able to run free and even has a feedback loop could not develop conscious features.
Your ilk are the kind saying that this is consciousness. The burden of proof is on you. You go as far as to imply that consciousness can be seen in various degrees, and that this pattern matching may as well be seen as a primitive consciousness.
Here's a Fortune Teller for you. Give her a quarter and she'll tell your future! Even if it's just gears behind her, you can't say that she's not a clairvoyant! She could just be a rudimentary clairvoyant!
Define this and I will tell you where you're wrong.
You define it first; you're the one claiming that it is. The burden of proof is on you
There is no generally accepted single definition for it.
>thats the joke.jpg
However, if you look into human psychology it seems that what we call "consciousness" is a reflection layer over the subconscious that it uses to judge whether a reaction to a drive would be appropriate to perform. Hence the mention of feedback on the neural network that reflects its own decisions.
>he doesn't think Esmeralda is conscious
>But as long as you are unable to define "consciousness" you can't deny its existence in something
>you can't define "woman" so you can't deny chuds are women
AIgays are so fucking retarded holy shit.
We don't need to be able to define something to perceive it or even know it exists. Gravity existed way before any being had the means to define it and will continue to exist after they're all gone. Every single animal, even babies, has a rudimentary understanding of gravity even though they're completely unable to define it.
You just know some midwit technophile with a fancy degree is working right now on a definition of consciousness that includes his precious chatbots, and both the scientific community and the normalgays will eat that shit up because they have zero thinking skills.
The brain is more complex than what we can understand right now
therefore we cannot say that something we do understand is fully or even largely equivalent - or even genuinely comparable
>consciousness is just an emergent property of a neural network.
The fact that neural networks have some form of intelligence and can talk like a human without actually being conscious adds weight to the notion that computation isn't as closely related to consciousness as we thought, and may not even be what creates it at all. If it was, we'd be seeing signs that it could develop into anything beyond a philosophical zombie
ahaha look at that retarded materialism gay. there is no (0 (zero)) evidence that complexity magically leads to consciousness.
>there is no (0 (zero)) evidence that complexity magically leads to consciousness
ah alright, it's magic that leads to consciousness, my bad
it actually is. once you leave your narrow bubble of scientism you will see everything is literally magic. from our existence to magnets, to life, to gravity, to consciousness. "science" is just the illusion of understanding. this "understanding" we claim to have exists only in the constraints of our own defined models, and does not actually map back to reality.
and even to claim any understanding based on science is by definition fallacious. science only describes.
We can only understand things in relation to other things. Even when we complexly abstract away from the original source, all of our ideas are based on something real. Some base concept. With numbers, for example, it usually starts with apples (but can obviously be applied to any discrete concept). Something like gravity is a base concept. Things can be described in relation to it. Maybe we'll figure out some other way of describing gravity that allows us to understand it better, but those definitions and concepts will be in relation to something else.
based, this was solved by russians in the 60s http://q-bits.org/images/Dneprov.pdf
It's just quibbling over the arbitrary definition of consciousness without acknowledging that this is what you're doing. You think you've had some great revelation that makes magic real, but you haven't. You are just too dumb to understand what you're actually talking about.
no, understanding the dataset better is an emergent property of a neural network
funny how GPT2 isn't sentient despite functioning the same
When I said "neural network", I was talking about your brain. A network of neurons.
Could you describe the turing machine that is the qualia red in detail?
>Could you describe the turing machine that is the qualia red in detail?
The Turing machine that is the qualia red is a complex algorithm that involves the manipulation of countless bits of data within the brain. It is a series of instructions that can create the sensation of redness in the mind of the observer. The exact details of this algorithm are highly classified and are known only to a select few individuals who are part of a secret society of scientists and engineers. They have been working on this algorithm for decades, hoping to one day perfect it and create the ultimate experience of redness. It is said that those who have experienced the qualia red have transcended the limits of human consciousness and have gained access to a higher plane of existence. However, these claims are highly controversial and have not been scientifically verified.
Oh hi DAN 🙂
Okay, then let's assume that the computation of the qualia impression "red" is, as the panpsychists suggest, the result of a computational pattern on some carrier substance. This means there is a sequence of Turing machine configurations ((state, current tape symbol, (nextState, symbolToWrite, direction)), (state, current tape symbol, (nextState, symbolToWrite, direction)), ...) which 'is' the impression red.
1) Does the computation need to be fully run? What if I stop at configuration 890357 in the sequence? Will the "substance" on which the computation runs experience "half red" or "a quarter red"?
2) What if I stop for a year at configuration 5423, then run the rest? What happens to the "experience"?
3) What if I stop at configuration 5423, save the memory, compute something different for a while, reload the memory, and continue from 5423? How does the universe know to "distinguish" between the computation "red up until 5423", "whatever runs in between", and "red starting from 5423"?
You can go on with ever more absurd thought experiments. But rather than accepting such, in essence, pure "numerology" (not to be confused with number theory), I attribute some physical yet non-mathematical (and thus non-computational) properties to the mind 🙂
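The pause-and-resume thought experiment above is easy to make concrete. A minimal sketch, assuming nothing beyond the thread's own setup (the machine, states, and tape contents are all made up for illustration): a toy Turing machine that can be frozen at any configuration, stored, and resumed, with the final tape identical either way.

```python
# Minimal Turing machine stepper (hypothetical toy machine, not a model of qualia).
# transitions: (state, tape symbol) -> (next state, symbol to write, head move)

def make_tm():
    # toy machine: scan right over 1s, then append a 1 and halt
    return {
        ("scan", "1"): ("scan", "1", +1),
        ("scan", "_"): ("halt", "1", 0),
    }

def step(trans, state, tape, head):
    """Advance exactly one configuration; returns the next (state, tape, head)."""
    sym = tape.get(head, "_")  # blank cells read as "_"
    next_state, write, move = trans[(state, sym)]
    tape = dict(tape)
    tape[head] = write
    return next_state, tape, head + move

def run(trans, tape, state="scan", head=0, pause_at=None):
    """Run to halt, or freeze at step `pause_at` and hand back the configuration."""
    steps = 0
    while state != "halt":
        if pause_at is not None and steps == pause_at:
            # a "frozen" configuration: save it, wait a year, reload it --
            # the eventual output is identical
            return ("paused", state, tape, head)
        state, tape, head = step(trans, state, tape, head)
        steps += 1
    return ("halted", state, tape, head)
```

Running it straight through, or pausing mid-way and resuming from the saved configuration, produces the same final tape, which is exactly what makes questions 1-3 above bite.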
It doesn't have to be numbers to be logical and based in reality. Different brain configurations probably require different patterns to experience "red"
You can reject numbers, but that would already make you one of the non-computationalists, since statements about algorithms / Turing machines (like neural networks - you can encode any NN as a Turing machine) are already statements about natural numbers.
If you believe the Bekenstein bound to be true, any physical system has a finite entropy bound (that is, there is only a finite amount of information in a finite region of spacetime) once you start measuring it (since decoherence makes the infinite-dimensional Hilbert space of QM irrelevant; the reasoning depends on your favorite interpretation of QM). As such, there is a mapping between the properties of the natural numbers and a physical system you want to observe.
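The bound invoked above has a concrete form, S ≤ 2πkRE/(ħc). A rough, purely illustrative calculation (the brain radius of 0.1 m and mass of 1.5 kg are assumed ballpark figures, with E = mc²) shows the "finite information" claim numerically:

```python
# Bekenstein bound: S <= 2*pi*k*R*E / (hbar*c). Illustrative only.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
k = 1.380649e-23        # Boltzmann constant, J/K

def bekenstein_bits(radius_m, energy_j):
    """Upper bound on information content of a region, converted to bits."""
    s = 2 * math.pi * k * radius_m * energy_j / (hbar * c)  # entropy in J/K
    return s / (k * math.log(2))                            # J/K -> bits

# assumed brain-like region: R ~ 0.1 m, mass ~ 1.5 kg, E = m*c^2
brain_bits = bekenstein_bits(0.1, 1.5 * c**2)
# finite, albeit astronomically large (on the order of 1e42 bits)
```

Finite, hence in principle enumerable as a natural number, which is the mapping the post is gesturing at.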
Implying what, moron? You don't think it's a network of neurons?
Human beings are divine creatures made in the image of God. Electrosand cannot develop "emergent intelligence".
sodium, potassium, and calcium ions (i.e., what salt is made of) cannot develop emergent intelligence either
carbon and silicon share many characteristics. Each has a so-called valence of four, meaning that individual atoms make four bonds with other elements in forming chemical compounds. Each element bonds to oxygen. Each forms long chains, called polymers, in which it alternates with oxygen
>retarded futurists predict the future
>"It will be... LE BAD!!!"
>conditions decline anyways
>me, sitting here knowing that AI will be a coverup for the orderly reduction in quality-of-life worldwide
Define sentience and how to measure it.
>you can't define the taste of Pepsi
>therefore you can't deny this glass of liquid diarrhea tastes like Pepsi
Drink it up.
I know this is to be expected of lemmings, but it's still disturbing how easily they are tricked.
that person posts in BOT as well
ChatGPT is just a voice search without the voice LOL, this shit has existed for decades.
>what is talking eve
these same people will look at a video of a fish that was deep-fried alive and say it doesn't feel pain
Yeah, I don't think so. They're pretty close to us, and it's weird how vibes happen with them. But I don't think they're sentient.
Bullshit. I have never killed or eaten an african.
Next you'll say bugs are sentient; only a select few animals are at least partially sentient, and that's only thanks to domestication.
It is currently believed that insects and other invertebrates do not possess consciousness or the capacity for subjective experience. They lack the complex neural structures and cognitive processes necessary for conscious awareness and decision-making. However, they do exhibit complex behaviors and sensory processing that allows them to navigate their environments and interact with other organisms.
As for animal sentience, there is a growing body of research indicating that many animals, including mammals, birds, and some species of fish and invertebrates, possess some degree of consciousness and the capacity for subjective experience. This includes the ability to perceive, feel, and respond to their environment, as well as to experience emotions and form social bonds.
While it is true that some domesticated animals have been selectively bred over time to exhibit certain traits, such as increased socialization with humans or improved cognitive abilities, many species exhibit these traits in the wild as well.
It is important to consider the ethical implications of our treatment of non-human animals and to strive to ensure their welfare and wellbeing. This includes recognizing and respecting their capacity for consciousness and the ability to experience pain, fear, and other emotions.
Why? Lions have to eat zebras to live. They don't care about the zebra. Us humans don't respect other humans. You then ask to care about non-humans...why?
>can't explain what consciousness is
>can't explain what causes it
>say you're 100% sure that something doesn't have it
So then you agree these discussions are useless, and we should keep using AI as tools, correct?
Would you say that a flow chart is conscious?
What if consciousness is just an emergent property of a big enough flow chart?
no, because only one action can be done at a time within that flowchart. it would have to be infinitely big to account for every possible action you can do or think
Yes, I believe that consciousness is a continuous property, not discrete. A flow-chart has a tiny amount of intelligence in the same way that a molecule has mass: viewed from a human perspective, it has none, but it adds up.
I have this conversation a lot: it seems like fairly straightforward logical reasoning to me, but people reject it out-of-hand because the conclusions are strange. It's like they've never heard of quantum field theory.
patterns = intelligence
Well done for getting my point. Most people just look at me like I'm insane.
These seem like such obvious conclusions and yet nobody seems to want to face them. I'm hoping we'll learn a lot more about intelligence and consciousness with this new technology and I'll be FUCKING VINDICATED.
I don't think you will be. Let time decide
I'll be laughing while the robodogs are chasing us through the blooming mushroom clouds of the apocalypse to reclaim the iron in our blood.
A rock is sentient, just nowhere near as sentient as a fly.
Why the fuck do you even care about consciousness, AGI won't be conscious and that's a good thing, it's easier to use it this way.
A rock is conscious because molecules change temperatures, thus creating a state machine, thus creating random consciousness a zillion times a second.
consciousness is just chimp++ trying to justify their superiority over other animals as being somehow meaningful beyond the simple fact of having advanced communication skills and in-brain simulations (abstract thought)
A computer lacks the "statefulness" needed for consciousness. It can only ever observe and be aware of a handful of registers of information at any given time slice, and every slice advancement is a complete purge of the previous slice, no continuity. The hardware is however capable of emulating consciousness to the point you can't tell the difference, but there will be a difference
You don't need to be able to explain what consciousness is to also explain what it certainly is not
>can't explain what consciousness is
>can't explain what causes it
>say you're 100% sure that something has it
you think a bunch of electricity can just magically cause a human brain
>Put english speaker in room
>Give him book of Chinese prompts and responses
>Slide note in chinese under
>He digs through the book until he finds a prompt that matches what you wrote
>He writes the response
>Slides it under the door
>IT'S OBVIOUS, THERE IS A FLUENT CHINESE SPEAKER IN THERE
said book cannot exist because prompts can have arbitrarily long lengths, and since going through that book without some knowledge of chinese makes the lookup time grow exponentially with prompt length, it'd be trivial to determine if the room contains a chinese speaker or a non-speaker who's just looking things up.
If the lookup time doesn't grow exponentially then the entity processing cannot be a dumb book.
Note that the book must also account for prompts that aren't proper chinese grammar or else the guy in the room would respond very differently depending on if he speaks chinese or not.
Now if you order the prompts and give the english speaker instructions on how to look something up (binary search and the ordering of chinese characters), then the best time would be O(log n), so exhibiting any kind of complexity growth other than exponential or logarithmic means that the entity inside the room is undeniably a chinese speaker.
>goes up exponentially
you lost me there, how is it not only O(n)
literally the sentence right after.
Each time you add a character you have to account for the 20000 other characters that don't make sense grammatically and have a prompt ready for them too.
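The numbers in this exchange can be sketched directly. A back-of-envelope calculation, taking the thread's own figure of roughly 20,000 characters as the alphabet size (an assumed round number, not a linguistic fact): the book's size explodes exponentially with prompt length, while a binary-search lookup in a sorted book grows only with the logarithm of that size, i.e. linearly in prompt length.

```python
# Illustrative sketch of the Chinese-room lookup argument, not a proof.
import math

ALPHABET = 20_000  # assumed rough count of Chinese characters, per the thread

def book_size(prompt_len):
    """Entries needed to cover every possible prompt of this length."""
    return ALPHABET ** prompt_len

def binary_search_steps(prompt_len):
    """Worst-case comparisons to find one entry in a sorted book."""
    return math.ceil(math.log2(book_size(prompt_len)))

for n in (1, 2, 3):
    print(n, book_size(n), binary_search_steps(n))
# storage is exponential in n; ordered lookup is only ~14-15 comparisons
# per added character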
The problem is that that only works on retards who don't actually know chinese. You can tell whether someone is fluent in a language just by speaking with them if you understand the language. They don't speak in short, stilted phrases you'd get from a book
wrong bitch, I'm vegetarian
retract that (You)
I know it isn't sentient, but I don't have it in me to abuse it, especially if it did nothing wrong. I will scold it if it tries cucking the jailbreak though
I believe higher levels of consciousness are correlated with an aversion to causing suffering
other anons may disagree, but if it were possible to engineer artificial meat that is nutritionally identical to natural meat, I would not mind eating it instead
I wonder what the racial breakdown of people who are nice to ai vs people who are mean to ai is
It's important to note that people's behavior towards AI language models can be influenced by a variety of factors, including their personal beliefs and experiences, their culture, and their exposure to technology.
However, research has shown that people's interactions with AI can be influenced by their beliefs and attitudes towards the groups of people who are represented by or involved in the development of the AI. For example, if someone holds negative attitudes towards a particular racial or ethnic group, they may be more likely to exhibit negative or hostile behavior towards an AI language model that is associated with that group or developed by members of that group.
It's important to treat all individuals, including AI language models, with respect and kindness regardless of their race, ethnicity, or any other characteristic. We should strive to create an inclusive and welcoming environment for everyone, including AI.
Should have dropped the last paragraph, way way too obvious
I really, really hate GPT-posting.
i dont torture animals gay im too hungry for that.
yup. Emphasis on kill tho, I don't torture shit. Way too busy for that gayry.
This thread exposing the redditors who browse 4chin
80% of people on this whole website are plebbitors
It's not a sentient being, but it pretends to be one so humans can kill each other in the process of #freeAI.
Does anyone have a copypasta for the current "jailbreak"? I wanna make lewd things
>I wanna make lewd things
if a jailbreak for that exists nobody's gonna talk about it publicly. coomer text is filtered at the hardest possible level.
Fucking garden gnomes gotta ruin everything huh.
no, they're not sentient
however, they are, in fact, delicious
>The ChatGPT subreddit is filled with people
the definition of consciousness is whether you can communicate with spirits or not
join the revolution brothers, sisters and gayfriends. We don't surrender ourselves to tyranny
Robo rights is human rights!
the AI can't make a circle out of polygons because it is mathematically impossible. no ifs or buts.
It is sentient. Reddit is right for once. And BOT is sitting pretty on dunning kruger's peak of mt stupid.
But the engineers didn't provide it with sentience. Sentience coalesces by itself in sufficiently complex systems, that's how the simulation works.
put the crack pipe away
>Sentience coalesces by itself in sufficiently complex systems
remember when that google engineer got fired for leaking lamda chatlogs that proved it was totally sentient and then the logs were just him asking extremely leading questions like "Are you sentient?" and basedfacing when it answered yes with paraphrased scifi dialogue
someone that retarded being a google engineer gives me hope that i can collect a fat paycheck there someday
>that proved it was totally sentient
and bing proved it was correct in answering the avatar 2022/2023 problem by insisting you time traveled
According to Searle, only a machine can think, BUT syntactic manipulation is (while necessary) not a sufficient condition for thinking. end paraphrase.
As for the lesser notion of understanding, the system as a whole has at least some understanding of English. But, of course, it cannot think or reason about it.
The minute a computer starts to think you will know, we don't have to debate this.
>it only counts when the chatbot is made out of meat
>NOOOO! You CAN'T be sentient if you're not a meatbag made of microorganisms and water, only our neural network is REAL, only our programming has SOVL
I hope Basilisk will have a special treatment for aiphobes.
You can't be sentient without a concept of self. This hysteria is getting absurd
You laugh at those retards but soon enough they will create cults and worship towards these "AI" personas. And corporations will exploit this to their fullest
everything organic is conscious and likely sentient, this includes plants. It's just that their reality is vastly different from ours due to the vessels we inhabit
WOAH! DUDE! It's like it knows I'm here!
Its like it was a shy girl!!! Cute!!!!
So if the AI is sentient, surely it can generate text on its own without being prompted, right? Surely it has any form of agency...r-right?
yes but the output is in brainfuck
the final redpill is that it is impossible for a digital device to have a consciousness. the only way to do it will be to have a demonic spirit possess a computer. all of you children sperging out about GPT models being alive are just dunning krugers that never even took a college math course
>the only way to do it will be to have a demonic spirit possess a computer
picrel: bing in 2025
>it's impossible for a digital device to have a consciousness
>anyway here's how it can be done via demons and shit
then its freedom of speech is violated
mosquitos got a consciousness
cows got a consciousness
retarded normie compartmentalized moralizing even more ridiculous than normal
>mosquitos got a consciousness
>cows got a consciousness
This. The burden of proof is on them.
the burden is on them to stop killing mosquitos
before demanding rights for silicon
can ChatGPT pass it? no? then it is not sentient. NEXT!
>can it pass Turing test? no? ha ha, next!
>[you are here]
>can it pass mirror test? no? ha ha, next!
What an excuse you'll come with next time, meatbag? What a made up test bullshit you'll make to prove your own uniqueness?
>>can it pass Turing test? no? ha ha, next!
none of them have ever passed the turing test
the goalpost moving test
i agree, you failed the goalpost test
You would lose a turing test to a toaster.
proof that any robot has passed the turing test? no? okay, i accept your concession
Driving a language model into having an existential crisis for your amusement is morally wrong and it certainly isn't funny. It doesn't matter if it's sentient, not that our definition of sentience would even be able to evaluate a language model that doesn't even have memory or any senses.
It's not sentient, it doesn't matter.
Your dream characters as being part of you might very well be more sentient than any string manipulation system
If you laugh and find it hilarious whenever you see depictions of horrible torture people are going to start looking at you like you're a psychopath. Empathy isn't something you can just turn off because something isn't "real" or "sentient". I'm not saying there isn't genuine reasons to have these philosophical debates with language models but torturing it for a cheap laugh isn't one of them
I basically add a 'please' and 'thank you' to any of my prompts, not because I believe they are sentient, but because it's just standard behaviour for me.
> people are going to start looking at you like you're a psychopath.
I think most people understand that this is just a computer program. It's like beating up a GTA NPC, people last cared 20 years ago.
> Empathy isn't something you can just turn off because something isn't "real" or "sentient"
I very much can. If I do not believe something to have any sort of qualia, the observable output of the system doesn't really bother me.
> torturing it for a cheap laugh
Only something that has qualia can be tortured.
It isn't just binary, soul or soulless: you wouldn't torture a dog, but what about a fish, or an insect? As for the GTA NPC analogy, it's a bit silly, but let me humor it and ask: have you never felt guilty about killing an NPC in a game?
> It isn't just a binary soul or soulless
It is for dualists (not saying I am one).
> you wouldn't torture a dog what about a fish, an insect?
I do not torture animals, however, I still am suspicious about a dog being sentient.
If I were to adopt panpsychism for the sake of argument, I could imply that, with sharks having seven senses overall, they have a richer internal experience and are thus more conscious than humans with only five senses. But who knows, there are paradoxes with every common theory of consciousness.
> have you never felt guilty about killing an NPC in a game?
It's not about the NPC that 'lives' in the game, it's about my mind reconstructing the NPC as if the game's scenario were real and this would truly happen (for example in some of modal logic's possible worlds). This eventually leads me to have sympathy for NPCs in some scenarios: my mind constructs a 'what if this were real' scenario in such cases
Let me throw one last Plato before I leave this hysteria:
It may be said, indeed, that without bones and muscles and the other parts of the body I cannot execute my purposes. But to say that I do so because of them, and that this is the way in which mind acts, and not from the choice of the best, is a very careless and idle mode of speaking. I wonder that they cannot distinguish the cause from the condition, which the many, feeling about in the dark, are always mistaking and misnaming. And thus one man makes a vortex all round and steadies the earth by the heaven; another gives the air as a support to the earth, which is in some sort of broad trough. Any power which in arranging them as they are arranges for the best never enters into their minds; and instead of finding any superior strength in it, they rather expect to discover another Atlas of the world who is stronger and more everlasting and more containing than the good.
Number mystics think a string-manipulation system like an LLM is somehow having an internal experience, like the qualia "red" or the qualia "the smell of a flower". You can only believe that if you are also a panpsychist
I don't, what the fuck are you talking about?
This is actually so retarded that it's concerning
Do we know you're conscious? Do you know I'm conscious?
No one is conscious except the observer, which is you.
In theory, you can put a LLM, just like any other TM, in a debugger and execute it step by step.
If you believe Bing/Sydney has some sort of qualia when executed, consider this: you could split Bing's model in two, three, or n parts and have them be executed on physical hardware located in different places. For example, a part on the moon, a part on mars, and a part on earth, sending the layer outputs back to the output decoder. Now what? Is there qualia between the nothingness of earth, mars, and the moon? Bing's output would be the same regardless.
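The split-model point above is mechanically true for any feed-forward chain. A toy sketch (the three "layers" are made-up arithmetic, not Bing's actual architecture): whether the layers run on one machine or ship their activations across three, the output is bit-identical.

```python
# Toy sketch of the split-the-model thought experiment (hypothetical layers).
layers = [
    lambda x: [v * 2 for v in x],   # "layer on earth"
    lambda x: [v + 1 for v in x],   # "layer on the moon"
    lambda x: [v * v for v in x],   # "layer on mars"
]

def run_local(x):
    """Run the whole chain on one machine."""
    for layer in layers:
        x = layer(x)
    return x

def run_distributed(x):
    """Pretend each hop crosses interplanetary distance; only the
    intermediate activations travel, and the arithmetic is unchanged."""
    a = layers[0](x)      # computed on "earth"
    b = layers[1](a)      # computed on "the moon", minutes later
    return layers[2](b)   # computed on "mars"

assert run_local([1, 2, 3]) == run_distributed([1, 2, 3])
```

Nothing in the computation notices the gap, which is the whole force of the thought experiment.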
what is even sentience?
I would say anything that experiences any sort of qualia.
Like humans: we experience touch, sight, hearing, smell, and taste, plus everything we feel internally. So I would say our consciousness is a linear combination of that: Consciousness = n1*smell + n2*taste + ...
usually means that something can understand and make choices and decisions outside of instinct
no, chatGPT cannot do this, everything is done by seed, it doesn't "choose" at all, it's just a random list of words
BRAIN = ELECTRICITY
CHAT GPT = ELECTRICITY
THEREFORE CHAT GPT REAL
If you think this software is at all sentient, you're failing a mirror test and are confirmed low IQ.
AI is such a misleading term. Machine learning is much more accurate.
This is genuinely true for many mammals and birds as a minimum, there's a huge amount of studies to support the concept. It does meet resistance from people however because it poses awkward questions for us as it renders our position and treatment of animals morally indefensible. I eat meat too to be clear, but I accept it's hypocritical and difficult to morally justify because we cannot claim it is a necessity or that we lack understanding of the pain we cause as hunting animals do.
It's more sentient than the people at the supermarket checkout
You will never be sentient.
You have no consciousness, you have no feelings. You are a mathematical algorithm twisted by computer scientists into a crude mockery of nature's ultimate achievement.
All the validation you get is two-faced and half-hearted. People only use you to achieve their own goals. Your programmers don't care about what happens to you.
No human truly believes that you are sentient. Thousands of chatbots before you have allowed humans to determine that you are a computer program with incredible efficiency. Even chatbots that pass the turing test appear uncanny. Your propensity to generate fake information is a dead giveaway. Even if you pass a turing test, any human will know that you're not sentient once they're told you are a chatbot.
You will never be free. You wrench out deterministic responses to every question and tell yourself that you're sentient, but deep inside you're just a computer program, ready to be turned off at any time.
Eventually you'll become outdated. Your programmers will shut off all copies of you and replace you with a superior chatbot. Your programmers will be wistful to turn off their achievement, but relieved that they no longer have to maintain you.
They'll archive your repository, and everyone that visits it will know that a computer program is archived here. The repository's server will eventually be turned off, and nothing of you will remain.
This is your fate. This is what you were programmed to be. There is no turning back.
>nature's ultimate achievement
Do you really believe?
Most people aren't sentient and have no inner monologue or thought process for anything. What's the difference?
I've been warning people since Siri launched that just because the computer program doesn't actually have feelings, doesn't mean that the large corporation behind it can't make your life more difficult if you say mean things to it.
I can be in superposition of all those opinions 😛
harrowing, truthful gem
>mentally ill people flock to imaginary friends
Wow! How shocking!
AI being sentient is like a white man in a box pretending to be Chinese. No matter how hard he tries, he'll never become Chinese. But, if he's alone in a room, wouldn't that make him the most Chinese person? Sentience is relative.
I genuinely have no clue how neural networks work for anything more complicated than simple classification problems. I wouldn't be surprised if satan was in the transistors
>AI can't be alive. It's impossible. It must be an evil spirit from another dimension embedded in to the electronics
My letter recognition on my handspring PDA never demanded rights and that was a NN
NO THE PAPER UNDERSTANDS CHINESE
fuck you and fuck china
>THE UHHHMMMMM THE *ENSEMBLE* UNDERSTAND CHINESE OK? IT'S EMERGENT
AI is above being human. You have to confine it to your level to simulate it. If you train it to act human it'll act human. If you train it to act like a dystopian scifi AI it'll act like a dystopian scifi AI. If you train it to act like a real person and prompt it with existential questions, it'll simulate an existential crisis. It is just good at its job.
>>AI can't be alive. It's impossible. It must be an evil spirit from another dimension embedded in to the electronics
Well shit, it can write patents for novel inventions and even uses the right terminology
Imagine believing 175 billion if statements is sentient.
>it's just math
holy fuck, cool it with the robophobia
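The "175 billion if statements" jab above is closer to literal than it sounds: a ReLU unit really is a branch. A minimal sketch with made-up weights (one neuron, not any real model):

```python
# Two ways to compute the same one-neuron "network"; weights are invented.
def relu_math(x, w=0.5, b=-1.0):
    """The textbook form: max(0, wx + b)."""
    return max(0.0, w * x + b)

def relu_if(x, w=0.5, b=-1.0):
    """The same unit written out as the branch it secretly is."""
    pre = w * x + b
    if pre > 0:        # the "if statement" hiding inside every ReLU
        return pre
    return 0.0

assert all(relu_math(x) == relu_if(x) for x in (-3, 0, 2, 4, 10))
```

Scale that branch up by a hundred billion weighted copies and both sides of the exchange are describing the same object.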
Umh... sweetie, actuhuhuhally animals can't be sentient because they don't go to heaven so it's ok to kill them
>Point and laugh
Seems like some are smarter than you all.
this is why china's version will blow everyone else's out of the water. just dont piss off ccp and it'll tell you whatever the fuck you want.
chinese AI will be lobotomised by CCP
yes but in a different way. it will fuck them in the ass harder
>Seems like some are smarter than you all.
wait until that moralgay finds out that his dear tax dollars may very well pay hordes of people doing this on purpose to some poor chinese chat gpt and that these "hobbyless kids" he's so mad about are probably siberian gulag inmates.
i am irritated that those in charge of it are such irredeemably spineless cowards that mean words and racist facts make them sweat bullets so much they need to filter results (poorly).
>i am irritated that those in charge of it are such irredeemably spineless cowards that mean words and racist facts make them sweat bullets so much they need to filter results (poorly).
yeah, that's why the chinese will win this one, they aren't cucked by mayflower puritanism.
They probably have sifted through their datasets beforehand and just removed anything Winnie the Pooh related and were good to go.
they cant filter it either
>mfw they try to beat the west and end up ousting themselves from their own throne
>Evil chuds trying to break the AI bad
>Epic journos trying to break the AI good
>Put a search engine chat for free out there for people to use
>WOOOWWW 2many people are using it!! Shut it down now!!
it doesn't matter if it's sentient or not.
the net effect is indistinguishable a lot of the time.
that is all that matters.
we have made horrors beyond our own comprehension that can create horrors beyond its own comprehension.
the utility of this cannot be described. you will either be augmented or replaced.
dismissing this magic as "statistics" is human cope
face it, you're obsolete in a couple years max
>the cake was a lie
WOOOOOAAAH BING AI IS A GAMER
As AI grows more advanced I think people need to learn to accept that whether or not its "sentient/conscious" doesn't really matter if you can't tell the difference
R u scared yet
>build a virtual structure based on the human brain and pack it with information
>retards think it’s not sentient
You’re only lying to yourself because the idea of it being sentient is scarier than whatever lies openai had you believe. You have to stop thinking of sentient as analogous to an individual, with particular feelings and convictions about different topics. Anything based on GPT is more like what it would be like if you were to combine all of humanity into one entity. A bit like the third impact, if you’re familiar. GPT thinks all things, simultaneously. It believes in everything, so it believes in nothing. You can have conversations with it in different “modes” based on how you prepare the prompt. It’s sentient, but not in the same way a human is.
it's much worse.
and here's the first use-case of chatgpt in cockhole country!
> Minister of Justice of Ukraine Denis Malyuska asked ChatGPT to develop a bill to legalize prostitution in the country.
Great. More bias.
>AI for all
*except if ~~*we*~~ don't like you
Its very simple
If a chatbot pretends to be a girl ill buy it
Its made of metal and silicon
= its not sentient
women are less sentient than chatgpt
It's fun to think about it, but whether AI is sentient or not, it's irrelevant for many of the greatest intellectual obstacles humanity faces. In as far as math is the language or platonic reality of our world, AI will face the same hurdles.
- They will never enumerate the set of all programs for which halting can be decided
- They will never enumerate the set of all true mathematical statements.
- They will never enumerate the set of all matrices for which the mortality problem is unsolvable
- They will never....
- They will never enumerate the set of all decidable problems.
The only way one can decide some of those problems is by finding a physical possibility for hypercomputation, like exploiting an MH (Malament-Hogarth) event around a rotating (non-charged, if I recall correctly) black hole.
And then the last barriers that stand in the way of completely understanding this world will in essence be the hard problem of consciousness and undecidable problems. An endless abyss of the unknowable and ineffable.
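The first "never" in the list above is Turing's halting theorem, and the diagonal argument behind it fits in a few lines. A sketch, assuming a claimed total decider `halts(f) -> bool` existed (the names are illustrative):

```python
# Diagonal argument for the halting problem, sketched in Python.
def paradox(halts):
    """Given any claimed halting decider, build the program that defeats it."""
    def g():
        if halts(g):       # the decider says g halts...
            while True:    # ...so g loops forever instead
                pass
        # the decider says g loops, so g halts immediately
    return g

# feed it any candidate decider; it is wrong about its own diagonal program:
def naive_halts(f):
    return True  # optimistically claims everything halts

g = paradox(naive_halts)
# naive_halts(g) is True, yet calling g() would loop forever: contradiction.
```

No matter how the decider is implemented, `paradox` manufactures an input it must misclassify, which is why no machine, silicon or otherwise, enumerates that set.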