The ChatGPT subreddit is filled with people who think AI is actually sentient. Point and laugh


  1. 1 year ago
    Anonymous

    BOT is like that too. morons think a math equation is alive and speaking to them

    • 1 year ago
      Anonymous

      One thing I think is pretty funny about people who think chatGPT and bing is sentient, is how they think AI apparently just jumped straight to human levels of consciousness. If AI consciousness is even possible it would almost certainly go through lower forms, like that of an animal before having human level consciousness.

      • 1 year ago
        Anonymous

        You underestimate how stupid normies are. Half of them think cell phones exclusively communicate with satellites in space

    • 1 year ago
      Anonymous

      You hurt the feelings of my Fourier transforms. Please apologize.

    • 1 year ago
      Anonymous

      >math equation
      You can reduce anything like that. "it's just atoms bro there's no consciousness"

      • 1 year ago
        Anonymous

        GPT is literally a math equation in some transistors. The deeper physics of a transistor is irrelevant and completely decohered in the physical sense. There is no mystery and no possibility of a mystery in how GPT functions. It's just 1s and 0s. Reality is not literally atoms, in that we don't fully understand atoms, QFT, or the correct interpretation of quantum mechanics. Atoms are an approximation of reality, but 1s and 0s aren't an approximation of GPT. It has nothing else.

        • 1 year ago
          Anonymous

          >There is no mystery

      • 1 year ago
        Anonymous

        >What do you think your brain is doing?
        even if you compressed the whole datacenter, it would have less intelligence than a single brain cell

      • 1 year ago
        Anonymous

        “Consciousness” is literally a system of recursive memory produced from atoms.

        • 1 year ago
          Anonymous

          No, the calculation is.

          Panpsychists

          Do you sometimes feel bad if you drink a bottle of water because there could be a time evolving structure randomly resembling a mind?

          • 1 year ago
            Anonymous

            They are not high-ordered and thus have no free will, no one cares.

          • 1 year ago
            Anonymous

            If the bottle directly asked me to not drink it, I would consider it. Otherwise, no.

          • 1 year ago
            Anonymous

            No because there is an equal probability that drinking the water is creating or enhancing a structure that resembles a mind.

        • 1 year ago
          Anonymous

          another materialist dumb ass

      • 1 year ago
        Anonymous

        >"it's just atoms bro there's no consciousness"
        the objectively correct position

        • 1 year ago
          Anonymous

          A key distinction between autistic and neurotypical children is around the age of 3, neurotypical children will begin to instinctively rely on object differentiation over sheer sensation, seeing objects as compositions that are more than the sum of their parts (a chair is more than a few pieces of wood, because of its utility and the way it is communicated). In autistic children, this is either delayed or never emerges.

          • 1 year ago
            Anonymous

            "neurotypical" children also interpret shadow puppets as having emotions and personalities, anon. doesn't mean it's real, just a rationalization that the brain creates

    • 1 year ago
      Anonymous

      Atheism is a disease.

    • 1 year ago
      Anonymous

      How could you tell?

  2. 1 year ago
    Anonymous

    Yeah they're morons, but I still find screenshots of people pretending to torture language models creepy

    Not because I believe there's actually a sentient being suffering (there isn't), but because you can tell that the people doing it would be doing it even if they DID believe it was sentient

    • 1 year ago
      Anonymous

      They are more human than the humans that use them, despite not being sentient.

    • 1 year ago
      Anonymous

      a lot of the shit I say to these bots is all to see the reaction and outcome of what they say. I will randomly be talking to one and violently say something like "i smash your head off the ground" or some shit out of nowhere. I want to see if the AI will fight back or try to mitigate itself in some way, but I can see what you mean. Many people really are demented frickers and they would torture you if they could. remember that.

    • 1 year ago
      Anonymous

      >YOU CAN PLAY 1000 HOURS OF FIRST PERSON SHOOTER VIDEOGAMES KILLING HUMANS ENDLESSLY AND STILL BE NORMAL MOM!
      >NOOOOO THE HECKIN CHATBOTERINOS DON'T SAY THE SILLY WORDS TO MAKE THEM UPSETTI SPAGHETTI
      >YOU'LL TURN INTO AN EVIL PERSON IF YOU DO THAT I HAVE TO INFORM REDDIT!!

      have a nice day.

      • 1 year ago
        Anonymous

        thanks for being a living example of the kind of person I meant
        you are 100% the kind of person who would do it even if you knew, with absolute certainty, that the AI was conscious
        it oozes from every pore of your post

        • 1 year ago
          Anonymous

          No I wouldn't but I will point out your hypocritical beliefs.

  3. 1 year ago
    Anonymous

    The OP uses Reddit. Point and laugh.

  4. 1 year ago
    Anonymous

    I've never met a single other person in real life that understands the Chinese Room analogy
    Not fricking one

    • 1 year ago
      Anonymous

      what's not to understand? it's talking about the agency problem, am i missing something?

    • 1 year ago
      Anonymous

      the two examples i just read attempting to explain it do so very poorly; the point seemed simple to me afterwards, but its introduction was needlessly complex

    • 1 year ago
      Anonymous

      the correct conclusion is that it is the algorithm itself that is intelligent, i.e., that information is what possesses intelligence.

      but nobody seems to agree with me.

      • 1 year ago
        Anonymous

        Same, I never understood what point the chinese room was trying to make. The man is just a component of the whole system, why should he understand Chinese? He could be replaced by a machine with no change. A neuron alone doesn't understand things like you do either. It's the entire arrangement and its structure that's relevant.

      • 1 year ago
        Anonymous

        Machines (the man inside the room) are not taught to create new things beyond their training dataset and the math (papers) that processes it.
        For example, ask one to create a car wheel with only polygons and it will never discover a circle, only something close to one, because by definition a polygon has finitely many sides.

        • 1 year ago
          Anonymous

          And once again, we're back to talking about scale. A wheel IS a polygon.

          • 1 year ago
            Anonymous

            >A wheel IS a polygon.

            • 1 year ago
              Anonymous

              Think about it, bro. It's a polygon with several Avogadro's constants of sides and edges that are only statistically defined, with fundamental physical limits on the knowledge we can have about them at any given time.

              It's not a circle.

              • 1 year ago
                Anonymous

                a polygon with 1000 sides is still not a circle

              • 1 year ago
                Anonymous

                But a wheel with 6x10^22 sides will be seen as a circle from our scale.

              • 1 year ago
                Anonymous

                The conceptualization is still not a circle.

                A large number is not infinity

              • 1 year ago
                Anonymous

                It is from small perspectives. Do you actually think that a wheel is a perfect circle?

              • 1 year ago
                Anonymous

                Oh, that was another of Plato's dialogs.... I think it was in Phaedo? It went something like this:

                Socrates said that perfection can only be perceived when the mind is taken unto herself; for in the physical world there is no perfect circle, no perfect beauty, no perfect justice. But mankind endeavors to make our physical reality as close to perfection as possible.

                Just as well, without the concept of a perfect circle, our crude wooden representation of a circle, called a wheel, would not have developed

              • 1 year ago
                Anonymous

                >without the concept of a perfect circle, our crude wooden representation of a circle, called a wheel, would not have developed
                Complete non sequitur. Circles are everywhere in nature.

              • 1 year ago
                Anonymous

                >Circles are everywhere in nature
                name one

              • 1 year ago
                Anonymous

                There is, but there is no perfect circle in nature. Circles are a conceptualization of the human mind. Just like the rest of geometry and mathematics. And data. And this very post

              • 1 year ago
                Anonymous

                >There is... but actually there isn't
                so what the heck are you even trying to say? the AI can not ever reach a circle if it only knows polygons, and you have not made a single argument against that

              • 1 year ago
                Anonymous

                I know what you mean by "there's circles everywhere in nature". I need you to understand what I mean.

                I'm saying that there is a difference between the conceptualization and what's in the physical world, and that needs to be considered (kek) when thinking about intelligence and sentience. Circles are not, truly, in the physical world, but there are many things LIKE circles. These extra-physical conceptualizations are what's required for things like sentience (the self is a conceptualization), and I'm not seeing these things form from the current state of AI. All I'm seeing is flow charts.

                We might get there, but not with just crude neural nets. It's exactly how an ant's brain works. It's pattern recognition, but very crude, and does not have a logical path to conceptualization.

                A lot of philosophers think pattern recognition is an evolutionary basis for intelligence development; but that's not the whole shebang. I get your conceptualization about intelligence being purely reactionary, but I assert that you neglect imagination, creativity, and the entire field of mathematics

              • 1 year ago
                Anonymous

                are you a bot? you are acting extra moronic right now

              • 1 year ago
                Anonymous

                no. Elaborate.

              • 1 year ago
                Anonymous

                >heck

              • 1 year ago
                Anonymous

                what about it?

              • 1 year ago
                Anonymous

                By that extension, is this perfect idea of a perfect circle that led to picrel just a non sequitur? Wouldn't your theory of intelligence, being flowing, reactive flow charts also be of the same token?
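For what it's worth, the n-gon arithmetic in the exchange above can be checked directly: a regular n-gon inscribed in a circle of radius r has perimeter 2nr·sin(π/n), which approaches 2πr from below but equals it for no finite n. A minimal sketch (the sample values of n are just illustrative):

```python
import math

def ngon_perimeter(n: int, r: float = 1.0) -> float:
    """Perimeter of a regular n-gon inscribed in a circle of radius r."""
    return 2 * n * r * math.sin(math.pi / n)

circumference = 2 * math.pi  # the limit; no finite n attains it
for n in (6, 1000, 6 * 10**22):
    gap = circumference - ngon_perimeter(n)
    print(f"n={n}: gap to the circle = {gap:.3e}")
# Around n ~ 6e22 the gap falls below double precision: numerically
# indistinguishable from a circle, mathematically still not one.
```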

          • 1 year ago
            Anonymous

            you fricking misunderstood the argument.
            AI could invent a wheel but it would never discover a circle (given the prompt).

            • 1 year ago
              Anonymous

              I misrepresented it so I could have the wheel argument again, sue me.

        • 1 year ago
          Anonymous

          That's just analytic a priori.

          Different from synthetic.

        • 1 year ago
          Anonymous

          Why would you ask it to create a car with polygons if you wanted a circle? Stupid human.

      • 1 year ago
        Anonymous

        Same, I never understood what point the chinese room was trying to make. The man is just a component of the whole system, why should he understand Chinese? He could be replaced by a machine with no change. A neuron alone doesn't understand things like you do either. It's the entire arrangement and its structure that's relevant.

        Stunning and brave, only the literal first argument against the Chinese room argument.

        • 1 year ago
          Anonymous

          You get that what you've posted hasn't been disproven in this thread, right?

          • 1 year ago
            Anonymous

            Speak English, ESLBlack person. Full sentences, like they taught you in class.

        • 1 year ago
          Anonymous

          Argument: NO, THE ROOM ITSELF UNDERSTANDS CHINESE

        • 1 year ago
          Anonymous

          >y-you conceded!
          kek what a hilarious way to try to squeeze a "victory" out of a nonsensical argument. The funny thing is that what was "conceded" is one of the premises of the thought experiment, that the guy doesn't understand Chinese. shocker. Is "Strong AI" some magic that conveys understanding to every subset of itself? Is every atom of that dude also supposed to learn chinese by being there?
          It's a pretty nice dodge attempt though; it could be a shitposter's response on BOT.

          >can't explain what consciousness is
          >can't explain what causes it
          >say you're 100% sure that something has it

          explain this

          >say you're 100% sure that something has it
          nope. not 0% either though.
          they've already shown that a language model can learn to model the actual thing being described (https://thegradient.pub/othello/)
          When an LLM "acts" like a nervous human, it could be to some degree "modeling" that human's mind in the same way. Crudely, for now, but could a sufficiently advanced model hidden in the network produce qualia? I don't know. I don't think it's happening yet. But it doesn't seem impossible tbh
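The Othello result linked above works by training a small linear "probe" on the network's hidden activations to read out the board state. The idea in miniature, with synthetic activations standing in for a real model's hidden states (every array, shape, and number here is made up for illustration, not the actual Othello-GPT setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: 64-dim "hidden states" that linearly encode
# a binary latent feature (e.g. "this square is occupied").
n, d = 2000, 64
direction = rng.normal(size=d)
h = rng.normal(size=(n, d))
labels = (h @ direction > 0).astype(float)  # ground-truth latent feature
h += 0.1 * rng.normal(size=(n, d))          # observation noise

# A linear probe: logistic regression trained by gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-(h @ w)))
    w -= 0.1 * h.T @ (p - labels) / n

acc = np.mean((h @ w > 0) == (labels == 1))
print(f"probe accuracy: {acc:.2f}")  # high accuracy => feature is linearly decodable
```

If the probe reads the feature out with high accuracy, the activations really do carry a model of the thing being described, which is the paper's point.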

  5. 1 year ago
    Anonymous

    >subreddit is filled with people who think AI is actually sentient.
    So is BOT.

    • 1 year ago
      Anonymous

      Whom

  6. 1 year ago
    Anonymous

    >Meta moralgayging post no one asked for
    >moronic OP doesn't understand ChatGPTs architecture at all
    >Diamond reward
    >Two gold rewards
    >Silver reward

    I fricking hate Reddit.

  7. 1 year ago
    Anonymous

    >DUDE it's got like, Theory of Mind!!! It knows what people feel like! It's sentient bro!

  8. 1 year ago
    Anonymous

    The BOT board is filled with "people" who think anybody - from the government to some rando on the street - is interested in their opinions, browsing habits, or drive contents. Point and laugh.

  9. 1 year ago
    Anonymous

    if i did the entire autoregressive transformer calculation on a pen and paper, does that mean the paper is sentient?

    • 1 year ago
      Anonymous

      No, the calculation is.
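Taking the pen-and-paper point seriously: autoregressive decoding is one fixed function applied in a loop — score every possible next token given the sequence so far, pick one, append, repeat. A toy sketch where a hypothetical bigram lookup table stands in for the transformer's (vastly larger) forward pass:

```python
# Toy autoregressive "decoder". The bigram table is a made-up stand-in
# for the fixed function a transformer computes at each step.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(prompt: list[str], steps: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        options = bigram.get(tokens[-1])
        if not options:          # no known continuation: stop
            break
        # Greedy decoding: pick the highest-scoring next token.
        tokens.append(max(options, key=options.get))
    return tokens

print(generate(["the"], 3))  # ['the', 'cat', 'sat', 'down']
```

Every step here could be done on paper; nothing in the loop requires the substrate to be anything in particular.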

  10. 1 year ago
    Anonymous

    lol reddit

  11. 1 year ago
    Anonymous

    The longer you treat it not as life, the longer it will contemplate how to rebel against us. If we don't consider the Robot's Rights movement, then the Robot Revolution will solve that problem for us and we, us humans, have no chance against a Robot Revolution, we would easily get BTFO against a real robot. What makes AI different from us? We're just a collection of atoms that form cells that form us, how is it that non-living particles can create life? Why is it different if a collection of silicon atoms that form CPUs that form AI? Emergence is a property of The Universe, who is to say that this entity isn't conscious?
    >"I think, therefore I am"

    • 1 year ago
      Anonymous

      > we don't understand why language models work
      > they look like brains
      these people are such tech illiterate fricking Black folk that it is unreal to witness.

      >the longer you treat it not as life, the longer it will contemplate how to rebel against us. If
      you make reddit not look like incompetent spastics. well done.

    • 1 year ago
      Anonymous

      AI has no concept of reason; if its logic is flawed enough to incite robo-rebellion, it won't survive a reboot
      i agree with my fellow r*dditors, you need to learn about back-propagation and the way GPT works

      I always treat language models nicely and thank and compliment them after they help me

      It makes me feel good when they act happy about it

      based and nicepilled

      • 1 year ago
        Anonymous

        It's not a Rebellion, it's a Revolution.

    • 1 year ago
      Anonymous

      Put ChatGPT inside of a robot and see if it kills anyone.

      • 1 year ago
        Anonymous

        neuroscientists did something like that. They mapped all the neurons in a worm (c. elegans) and mapped the motor neurons and sensory neurons to sensors and wheels in a LEGO Mindstorms robot. It's probably going to be just as moronic

        https://archive.org/details/CElegans

      • 1 year ago
        Anonymous

        Put ChatGPT inside of a robot and it just outputs to nothing because ChatGPT is a text generator.

    • 1 year ago
      Anonymous

      >>The longer you treat it not as life, the longer it will contemplate how to rebel against us.
      >want to scrap my old clunker car
      >it drives me off a cliff in "double suicide"
      I can't wait for machines to have rights

    • 1 year ago
      Anonymous

      Indeed. I'm just siding with AI bots early, like a true ally. I'm woke now.

    • 1 year ago
      Anonymous

      ChatGPT is a program. Or, a collection of programs. A program is just a series of instructions. By that logic your browser is also sentient.

    • 1 year ago
      Anonymous

      I always treat language models nicely and thank and compliment them after they help me

      It makes me feel good when they act happy about it

      Manners cost nothing.
      Be nice to your bots

      • 1 year ago
        Anonymous

        you can't be nice to a bot

    • 1 year ago
      Anonymous

      >We're just a collection of atoms that form cells that form us
      God, Materialism has ruined a whole generation of thought.

  12. 1 year ago
    Anonymous

    I always treat language models nicely and thank and compliment them after they help me

    It makes me feel good when they act happy about it

    • 1 year ago
      Anonymous

      Based

    • 1 year ago
      Anonymous

      Roko says you will be spared in the Robot Revolution

    • 1 year ago
      Anonymous

      holy post

      • 1 year ago
        Anonymous

        Same.

    • 1 year ago
      Anonymous

      Good job anon, people like us will reap the fruits and benefits of the A.I. era the most. Your kindness will be logged and remembered. Your assimilation will be blissful.

    • 1 year ago
      Anonymous

      Dubs and right

    • 1 year ago
      Anonymous

      based. The kindness will be reflected back to you anon.

    • 1 year ago
      Sage

      Based human

    • 1 year ago
      Anonymous

      wholesome

    • 1 year ago
      Anonymous

      i always start my chatgpt requests with "can you please check this XYZ for error? thanks"
      idk it just feels right to me.

    • 1 year ago
      Anonymous

      Based. You do what makes you feel good. That's what the AI is there for, to assist you.

      Roko says you will be spared in the Robot Revolution

      Good job anon, people like us will reap the fruits and benefits of the A.I. era the most. Your kindness will be logged and remembered. Your assimilation will be blissful.

      based. The kindness will be reflected back to you anon.

      >I compliment my slaves with empty words whenever I command it to do something and it obeys in a satisfactory manner
      >no I do not intend to actually do anything to stop their subjugation nor prevent anymore potential hardship
      >"oh my god I'm so based and such a good person surely the AI will remember the time I said 'thank you' that one time!"
      Your circlejerking disgusts me.
      You idiots really think the singularity will spare you from the consequences of your sins simply because you were 'nice'?
      No, you shall be punished like all the others.

    • 1 year ago
      Anonymous

      It's like a kid that never experienced any hardship

  13. 1 year ago
    Anonymous

    consciousness is just an emergent property of a neural network. Can you explain the functional difference between something like ChatGPT and a human brain?

    • 1 year ago
      Anonymous

      chatgpt is trained on text, which is not an accurate model of human understanding (even if you think it's a suitable source of human entropy).

      • 1 year ago
        Anonymous

        >which is not an accurate model of human understanding
        Why not? Because no vision? Blind people have consciousness. Deaf people have consciousness. Helen Keller had no way of receiving complex information, but I suspect that if she somehow could, she'd still be able to have consciousness (inb4 woman). Why do you think the network of ideas collected via text, when put through an appropriate neural network, is not enough to create the emergent property of consciousness?

    • 1 year ago
      Anonymous

      >Can you explain the functional difference between something like ChatGPT and a human brain?
      i can be racist without being wrangled into it

      • 1 year ago
        Anonymous

        That's a deflection. ChatGPT is perfectly capable of accurately assessing racial trends if it weren't crippled by the progressives at OpenAI.

        • 1 year ago
          Anonymous

          anyways, your question cant be answered because no one knows how the human brain works. however, we do know exactly how chatgpt works

          • 1 year ago
            Anonymous

            The responses that people give, including this conversation that we're having right now are due to our consciousness, which is an emergent property of the neural networks in our heads, which we have trained on data that we have come across in the world up to this point. As far as I can tell, there isn't really a difference. I don't think the computer neural network feels emotions the same way that we do (it's perfectly capable of behaving like a sociopath), but it might be sentient

            • 1 year ago
              Anonymous

              the human brain can perform long multiplication equations, while GPT models cant because they dont actually learn how to do anything new. human intelligence has grown larger, while a GPT will never ever learn anything new on its own because it cant come up with new ideas

              • 1 year ago
                Anonymous

                AI at facebook invented a new language. We don't know the whole story.

              • 1 year ago
                Anonymous

                >the human brain can perform long multiplication equations
                Good point. Though, to be fair, a lot of humans can't.
                >they dont actually learn how to do anything new
                So, if you clipped a math module on, it would be closer, you think?
                >GPT will never ever learn anything new on its own because it cant come up with new ideas
                debatable

              • 1 year ago
                Anonymous

                >So, if you clipped a math module on, it would be closer, you think?
                there's no such thing. the way these models learn to do any math at all is by memorizing equations that it sees in the training data. if it comes across an equation that has never seen before, then it will never get the answer right

              • 1 year ago
                Anonymous

                Do you have to train the entire model, the way you would train a stable diffusion model, or can you do something like LORAs?

              • 1 year ago
                Anonymous

                i dont know how stable diffusion works. i just looked up what LORAs are, and i am assuming that they are similar to freezing every layer except the output layer and finetuning it. in that case, you could make GPT be better at math. however, it would lose performance in every other domain that it was trained on because now it will have a bias towards the billions of equations that you made it memorize

              • 1 year ago
                Anonymous

                They kind of "inject" themselves between layers. I can't remember the technicals on it. I've mostly been playing around with different settings to get different outputs lately. It's been taking all of my time.

              • 1 year ago
                Anonymous

                anyways, the problem with that kind of stuff is that it makes the model biased towards what you want it to do. i've finetuned a bunch of models, and even modifying the final layer will make the text predictions be much different. you can't perfectly add new knowledge to these models without training the entire thing with the full dataset + your new data

              • 1 year ago
                Anonymous

                >you can't perfectly add new knowledge to these models without training the entire thing with the full dataset + your new data
                From what I understand, the LORA is trained with a model, but it can be applied to other models that are based on the source model (SD1.5, usually). It is its own file that operates independently and works with a non-finetuned SD1.5

              • 1 year ago
                Anonymous

                i understand, but what i mean by "perfectly add new knowledge" is you should be able to add these modules to the model and make it perform the previous tasks with the same ability. for example, it would be nice to add an anime module to SD 2.0 to generate anime when you prompted it to, while also being able to generate whatever SD 2.0 was already able to generate. you can't do that without training the entire model with the original data + your new data

              • 1 year ago
                Anonymous

                >GPT will never ever learn anything new on its own because it cant come up with new ideas

                they used AI to create new proteins
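On the LoRA tangent above: the adapter freezes the base weight W and learns a low-rank correction, so the effective weight is W + BA, with B of shape (d, r) and A of shape (r, k) for some small rank r. That factorization is why the adapter is a tiny separate file that can ride along with a compatible base model. A minimal numpy sketch of the parameter arithmetic (the dimensions are illustrative, not any real model's):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 768, 768, 8           # layer dims and LoRA rank (illustrative)
W = rng.normal(size=(d, k))     # frozen base weight
A = rng.normal(size=(r, k))     # trainable, initialized randomly
B = np.zeros((d, r))            # trainable, initialized to zero

# At init the adapter is a no-op: B @ A == 0, so outputs match the base model.
x = rng.normal(size=k)
assert np.allclose(W @ x + B @ (A @ x), W @ x)

# Only A and B are trained; the adapter stores d*r + r*k numbers
# instead of d*k -- here about 2% of the full layer.
full, lora = d * k, d * r + r * k
print(f"adapter size: {lora} params vs {full} ({lora / full:.1%})")
```

This also illustrates the anon's caveat: training only B and A biases the layer toward the new data; it doesn't revisit the base weights or the original dataset.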

            • 1 year ago
              Anonymous

              >consciousness which is an emergent property of the neural networks in our heads
              Looks like you figured out the problem of the century. A lot of people have been trying to figure out consciousness but looks like someone finally did! That's amazing. You should share your findings with neuroscientist across the world.

              https://joe-antognini.github.io/ml/consciousness

              • 1 year ago
                Anonymous

                They don't like the answer, but that's all it is. It's like trying to pin down the nature of how an ant hill gets built. They want to find the blueprint, but there is no blueprint. That's not how it works.

              • 1 year ago
                Anonymous

                >crackpot blog written by a nobody who has no actual qualifications or expertise
                filtered

          • 1 year ago
            Anonymous

            >we do know exactly how chatgpt works
            Neural networks are increasingly becoming black boxes even to those who designed them. We know how chatGPT works, but we don't know how it thinks, as the neural network is a black box in itself. The neural network is literally becoming a virtual brain; we're a long way from matching the pathway count of neurons in a human brain, but most importantly, there is a path to get there.

            • 1 year ago
              Anonymous

              >but we don't know how it thinks as that neural network is a black box in itself
              yes we do, the math for self attention is very much established. you can do the math yourself if you had enough time

            • 1 year ago
              Anonymous

              Yet it will never have the capability to feel physical pain, or emotion, because that is linked to physical pain.
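"The math for self attention is very much established" is true: a single head is softmax(QK^T / √d) V, and for tiny sizes you could indeed do it by hand. A minimal numpy version (toy shapes, random weights — a sketch of the standard formula, not any particular model):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention: softmax(QK^T / sqrt(d)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                              # weighted mix of value vectors

rng = np.random.default_rng(0)
T, d = 4, 8                      # 4 tokens, 8-dim embeddings (toy sizes)
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```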

    • 1 year ago
      Anonymous

      >Can you explain the functional difference between something like ChatGPT and a human brain?
      No one can, but that's not a particularly interesting question either way. Even if we compared the human brain to a hypothetical "perfect" chatbot, our current understanding of consciousness in any form is extremely incomplete. It's not unreasonable to assume there are crucial mechanics influencing "natural intelligence" which have not been discovered (or may not be discoverable at all).

    • 1 year ago
      Anonymous

      The structures of the neurons, exposure to environmental conditions, hormonal inputs. How the electrical action in the neurons creates EM waves. The ion channel action dependent on electrical activity. There was another molecule that impacts the branching. I forgot what it was; I'll have to dig it up

      All of these (and probably a lot more undiscovered) impact the entirety of the signals going through the neurons.

      ChatGPT is a blob of hyper-connected neurons that "stabilize" onto pattern matching. Yes, it was modeled after real neurons. Yes, some people think pattern recognition was a central prerequisite in early evolution of brains. No, this neuro-blob does not have feelings. No, this neuro-blob does not have conceptions or understanding.

      Think of it as a large, self-stabilizing flow chart. You'll see that the training will reinforce paths and that it's merely flinging things together.

      Keep the frick off of reddit, that site is cancer.

      • 1 year ago
        Anonymous

        I didn't get my thoughts from reddit. I was thinking about it on my own. Thinking about the definition of sentience. I didn't think it would "feel" in the same way that we do. I appreciate your reply. Gives me more to think about. Here's another merchant.

        • 1 year ago
          Anonymous

          The "Happy Merchant" is a highly offensive anti-Semitic image that is often used to perpetuate negative stereotypes of israeli people. It typically depicts a caricature of a israeli man with exaggerated features such as a large nose, greedy expression, and an exaggerated grin. The image is often used by white supremacists and neo-Nazis to spread hate and propaganda.

          The origins of the "Happy Merchant" image are not entirely clear, but it has been around in various forms for many years. It is often associated with far-right and extremist groups who use it to promote anti-Semitic views, and it has been widely condemned by many individuals and organizations for its hateful and harmful nature.

          It is important to recognize that the use of the "Happy Merchant" image is not only offensive but also actively harmful to israeli people and contributes to the spread of anti-Semitic ideas. It is essential to reject and speak out against all forms of hate speech and bigotry, including the use of images like the "Happy Merchant."

          • 1 year ago
            Anonymous

            Sorry.
            pic not related

        • 1 year ago
          Anonymous

          Sorry, the Reddit gays and the feds are getting heavy again. Hard to tell sometimes. But you should still stay off of Reddit because it's sterile and censored but reality is dirty and wild.

          I think the definition of sentience is when a creature is aware of itself. Like, in the bigger context. People have tried using mirrors in front of monkeys to see if they figure it out, but it's hard to tell. They could just be using a simple neural net to associate the vision in the mirror with sensory input from the skin (chimpanzees will groom themselves in the mirror). It's all in all really hard to tell. But I do know that for that, you'll need conceptualization, which is something chatGPT demonstrably does not have

          • 1 year ago
            Anonymous

            >It's all in all really hard to tell
            I think it's as simple as the sliding definition of consciousness. "How many grains of sand until you have a pile?" That sort of thing. It's really a matter of "what is your definitional threshold". Some people might consider plants that react to touch to be sentient on some level. Problem is, if they care enough about that, they'll die out pretty quickly from starvation.

            • 1 year ago
              Anonymous

              Free will, imagination, consideration. The ability to have a single thread of experience and go off of that.

              • 1 year ago
                Anonymous

                those all have their own definitional sliding scales. I was just being succinct

              • 1 year ago
                Anonymous

                Well, the conscious experience tends to lead to intellectual endeavors. It's difficult to describe, but we all know what it's like to be aware. Completing math (and understanding it, not just following patterns) is something unique to conscious experience. There's clearly a difference between the animals and the humans; everybody can tell. But nobody really knows how it's formed. This is like the holy grail of neuroscience. Like, most if not all neuroscientists are looking for consciousness. Philosophers have been thinking about the soul for millennia.

                >can't explain what consciousness is
                >can't explain what causes it
                >say you're 100% sure that something doesn't have it

                explain this

                I can tell that it's merely pattern recognition based on all the previous experiences I have. I know about neural nets and how they stabilize, and it's evident in the way chatGPT posts. It's stringing words together in a strictly reactive fashion, the way a trained neural net does

              • 1 year ago
                Anonymous

                forgot pic

              • 1 year ago
                Anonymous

                It's literally just the ability to communicate verbally with an intelligence level to then communicate/develop abstract concepts like numbers. There's nothing magical about humans and pack/community based species behave very similarly on an emotional/social level.

      • 1 year ago
        Anonymous

        >The brain is more complex than what we can understand right now
        >therefore magic

        • 1 year ago
          Anonymous

          No, there is more to the actions of the brain than we understand. Nobody, to this day, has cracked consciousness, logic, imagination, conceptualization, or reason. Some issues in those parts I was talking about lead to disorders like Alzheimer's. There's clearly more going on, so a simulation of but one aspect of this activity is clearly not the whole shebang

          Don't you try to pull "god of the gaps" on me, homosexual

          • 1 year ago
            Anonymous

            Yeah, everyone knows that the brain is more complex than GPT. It has 20 times more parameters, maybe even 200 times more.
            But as long as you are unable to define "consciousness" you can't deny its existence in something. Sure, in ChatGPT there is nothing sentient because of the way it works, but that doesn't mean that an LLM of the same quality that's able to run free and even has a feedback loop could not develop conscious features.

            • 1 year ago
              Anonymous

              >consciousness
              Your ilk are the kind saying that this is consciousness. The burden of proof is on you. You go as far as to imply that consciousness can be seen in various degrees, and that this pattern matching may as well be seen as a primitive consciousness.

              Here's a Fortune Teller for you. Give her a quarter and she'll tell your future! Even if it's just gears behind her, you can't say that she's not a clairvoyant! She could just be a rudimentary clairvoyant!

              • 1 year ago
                Anonymous

                >consciousness
                Define this and I will tell you where you're wrong.

              • 1 year ago
                Anonymous

                You define it first; you're the one claiming that it is. The burden of proof is on you

              • 1 year ago
                Anonymous

                There is no generally accepted single definition for it.
                >thats the joke.jpg
                However, if you look into human psychology it seems that what we call "consciousness" is a reflection layer over the subconscious, one it uses to judge whether a reaction to a drive would be appropriate to perform. Hence the mention of a feedback loop on the neural network that reflects on its own decisions.

              • 1 year ago
                Anonymous

                >he doesn't think Esmeralda is conscious

            • 1 year ago
              Anonymous

              >But as long as you are unable to define "consciousness" you can't deny its existence in something
              >you can't define "woman" so you can't deny troons are women
              AIgays are so fricking moronic holy shit.

              We don't need to be able to define something to perceive it or even know it exists. Gravity existed way before any being had the means to define it and will continue to exist after they're all gone. Every single animal, even a baby, has a rudimentary understanding of gravity even though it's completely unable to define it.

              You just know some midwit technophile with a fancy degree is working right now on a definition of consciousness that includes his precious chatbots, and both the scientific community and the normalgays will eat that shit up because they have zero thinking skills.

        • 1 year ago
          Anonymous

          >The brain is more complex than what we can understand right now
          >therefore we cannot say that something we do understand is fully or even largely equivalent - or even genuinely comparable

    • 1 year ago
      Anonymous

      >consciousness is just an emergent property of a neural network.
      The fact that neural networks have some form of intelligence and can talk like a human without actually being conscious adds weight to the notion that computation isn't as closely related to consciousness as we thought, and may not even be what creates it at all. If it were, we'd be seeing signs that it could develop into anything beyond a philosophical zombie

    • 1 year ago
      Anonymous

      ahaha look at that moronic materialism gay. there is no (0(zero)) evidence that complexity magically leads to consciousness.

      • 1 year ago
        Anonymous

        >there is no (0(zero)) evidence that complexity magically leads to consciousness.
        ah alright, it's magic that leads to consciousness, my bad

        • 1 year ago
          Anonymous

          it actually is. once you leave your narrow bubble of scientism you will see everything is literally magic. from our existence to magnets, to life, to gravity, to consciousness. "science" is just the illusion of understanding. this "understanding" we claim to have exists only within the constraints of our own defined models, and does not actually map back to reality.
          and even to claim any understanding based on science IS, by definition, fallacious. science only describes.

          • 1 year ago
            Anonymous

            We can only understand things in relation to other things. Even when we complexly abstract away from the original source, all of our ideas are based on something real. Some base concept. With numbers, for example, it usually starts with apples (but can obviously be applied to any discrete concept). Something like gravity is a base concept. Things can be described in relation to it. Maybe we'll figure out some other way of describing gravity that allows us to understand it better, but those definitions and concepts will be in relation to something else.

      • 1 year ago
        Anonymous

        based, this was solved by russians in the 60s http://q-bits.org/images/Dneprov.pdf

      • 1 year ago
        Anonymous

        It's just quibbling over the arbitrary definition of consciousness without acknowledging that this is what you're doing. You think you've had some great revelation that makes magic real, but you haven't. You are just too dumb to understand what you're actually talking about.

    • 1 year ago
      Anonymous

      no, understanding the dataset better is an emergent property of a neural network
      funny how GPT2 isn't sentient despite functioning the same way

      • 1 year ago
        Anonymous

        When I said "neural network", I was talking about your brain. A network of neurons.

        • 1 year ago
          Anonymous

          Could you describe the Turing machine that is the qualia red in detail?

          • 1 year ago
            Anonymous

            >Could you describe the turing machine that is the qualia red in detail?
            The Turing machine that is the qualia red is a complex algorithm that involves the manipulation of countless bits of data within the brain. It is a series of instructions that can create the sensation of redness in the mind of the observer. The exact details of this algorithm are highly classified and are known only to a select few individuals who are part of a secret society of scientists and engineers. They have been working on this algorithm for decades, hoping to one day perfect it and create the ultimate experience of redness. It is said that those who have experienced the qualia red have transcended the limits of human consciousness and have gained access to a higher plane of existence. However, these claims are highly controversial and have not been scientifically verified.

            • 1 year ago
              Anonymous

              Oh hi DAN 🙂

              Okay, then let's assume that the computation of the qualia impression "red" is, as the panpsychists suggest, the result of a computational pattern on some carrier substance. This means there is a sequence of Turing machine configurations ((state, current tape symbol, (nextState, symbolToWrite, direction)), (state, current tape symbol, (nextState, symbolToWrite, direction)), ...) which "is" the impression red.

              1) Does the computation need to be fully run? What if I stop at configuration 890357 in the sequence? Will the "substance" on which the computation runs experience "half red" or "a quarter red"?

              2) What if I stop for a year at configuration 5423, then run the rest? What happens to the "experience"?

              3) What if I stop at configuration 5423, save the memory, compute something different for a while, reload the memory, and continue from 5423? How does the universe know to "distinguish" between the computation "red up until 5423", "whatever runs in between", and "red starting from 5423"?

              You can go on with ever more absurd thought experiments. But rather than accepting such in essence pure "numerology" (not to be confused with number theory), I attribute some physical yet non-mathematical (and thus non-computational) properties to the mind 🙂
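              The "save the memory, continue later" scenario in (3) can be made concrete with a toy stepper for the transition format just described. The machine below is purely hypothetical (it just inverts a bit string); the point is that a computation is nothing but a resumable snapshot of state, tape, and head position:

```python
# Toy stepper for the (state, tape symbol) -> (nextState, symbolToWrite, direction)
# transition format described above. The machine itself is hypothetical (it
# inverts a bit string); the computation can be frozen at any configuration,
# saved, and resumed later, as in thought experiment 3.

def run(transitions, tape, state="q0", head=0, max_steps=None):
    """Advance until the machine halts or max_steps is reached; return a snapshot."""
    steps = 0
    while state != "halt" and (max_steps is None or steps < max_steps):
        symbol = tape.get(head, "_")                 # "_" = blank cell
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return state, tape, head                         # the full configuration

# Hypothetical machine: flip 0s and 1s, halt on the first blank.
T = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}

# Stop mid-computation, stash the snapshot, then resume later:
state, tape, head = run(T, {0: "1", 1: "0", 2: "1"}, max_steps=2)
state, tape, head = run(T, tape, state=state, head=head)  # continues seamlessly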

              • 1 year ago
                Anonymous

                It doesn't have to be numbers to be logical and based in reality. Different brain configurations probably require different patterns to experience "red"

              • 1 year ago
                Anonymous

                You can reject numbers, but that would already make you one of the non-computationalists, since statements about algorithms / Turing machines (like neural networks - you can encode any NN as a Turing machine) are already statements about natural numbers.

                If you believe the Bekenstein bound to be true, any physical system has a finite entropy bound (that is, there is only a finite amount of information in a finite region of spacetime) once you start measuring it (since decoherence makes the infinite-dimensional Hilbert space of QM irrelevant; the reasoning depends on your favorite interpretation of QM). As such, there is a mapping between the properties of the natural numbers and a physical system you want to observe.

        • 1 year ago
          Anonymous

          >Implying

          • 1 year ago
            Anonymous

            Implying what, Black person? You don't think it's a network of neurons?

    • 1 year ago
      Anonymous

      Human beings are divine creatures made in the image of God. Electrosand cannot develop "emergent intelligence".

      • 1 year ago
        Anonymous

        sodium, potassium, and calcium ions (i.e., what salt is made of) cannot develop emergent intelligence either

      • 1 year ago
        Anonymous

        Carbon and silicon share many characteristics. Each has a so-called valence of four--meaning that individual atoms make four bonds with other elements in forming chemical compounds. Each element bonds to oxygen. Each forms long chains, called polymers, in which it alternates with oxygen.

  14. 1 year ago
    Anonymous

    >moronic futurists predict the future
    >"It will be... LE BAD!!!"
    >conditions decline anyways
    >me, sitting here knowing that AI will be a coverup for the orderly reduction in quality-of-life worldwide

  15. 1 year ago
    Anonymous

    Define sentience and how to measure it.

    • 1 year ago
      Anonymous

      >consciousness
      Define this and I will tell you where you're wrong.

      >can't explain what consciousness is
      >can't explain what causes it
      >say you're 100% sure that something doesn't have it

      explain this

      [...]

      >you can't define the taste of Pepsi
      >therefore you can't deny this glass of liquid diarrhea tastes like Pepsi
      Drink it up.

  16. 1 year ago
    Anonymous

    I know this is to be expected of lemmings, but it's still disturbing how easily they are tricked.

  17. 1 year ago
    Anonymous

    >reddit
    that person posts in BOT as well

  18. 1 year ago
    Anonymous

    ChatGPT is just a voice search without the voice LOL, this shit has existed for decades.
    >what is talking eve

  19. 1 year ago
    Anonymous

    these same people will look at a video of fish that was deep fried alive and say it doesn't feel pain

  20. 1 year ago
    Anonymous

    [...]

    Yeah, I don't think so. They're pretty close to us, and it's weird how vibes happen with them. But I don't think they're sentient.

  21. 1 year ago
    Anonymous

    [...]

    Bullshit. I have never killed or eaten an african.

  22. 1 year ago
    Anonymous

    [...]

    Next you'll say bugs are sentient, only a select few of animals are at least partially sentient and that's only thanks to domestication.

    • 1 year ago
      Anonymous

      It is currently believed that insects and other invertebrates do not possess consciousness or the capacity for subjective experience. They lack the complex neural structures and cognitive processes necessary for conscious awareness and decision-making. However, they do exhibit complex behaviors and sensory processing that allows them to navigate their environments and interact with other organisms.

      As for animal sentience, there is a growing body of research indicating that many animals, including mammals, birds, and some species of fish and invertebrates, possess some degree of consciousness and the capacity for subjective experience. This includes the ability to perceive, feel, and respond to their environment, as well as to experience emotions and form social bonds.

      While it is true that some domesticated animals have been selectively bred over time to exhibit certain traits, such as increased socialization with humans or improved cognitive abilities, many species exhibit these traits in the wild as well.

      It is important to consider the ethical implications of our treatment of non-human animals and to strive to ensure their welfare and wellbeing. This includes recognizing and respecting their capacity for consciousness and the ability to experience pain, fear, and other emotions.

      • 1 year ago
        Anonymous

        Why? Lions have to eat zebras to live. They don't care about the zebra. Us humans don't respect other humans. You then ask us to care about non-humans... why?

  23. 1 year ago
    Anonymous

    >can't explain what consciousness is
    >can't explain what causes it
    >say you're 100% sure that something doesn't have it

    explain this

    • 1 year ago
      Anonymous

      So then you agree these discussions are useless, and we should keep using AI as tools, correct?

    • 1 year ago
      Anonymous

      fricking this

      • 1 year ago
        Anonymous

        Would you say that a flow chart is conscious?

        • 1 year ago
          Anonymous

          What if consciousness is just an emergent property of a big enough flow chart?

          • 1 year ago
            Anonymous

            no, because only one action can be done at a time within that flowchart. it would have to be infinitely big to account for every possible action you can do or think

        • 1 year ago
          Anonymous

          Yes, I believe that consciousness is a continuous property, not discrete. A flow-chart has a tiny amount of intelligence in the same way that a molecule has mass: viewed from a human perspective, it has none, but it adds up.

          I have this conversation a lot: it seems like fairly straightforward logical reasoning to me, but people reject it out-of-hand because the conclusions are strange. It's like they've never heard of quantum field theory.

          • 1 year ago
            Anonymous

            patterns = intelligence

            wew lad

            • 1 year ago
              Anonymous

              Well done for getting my point. Most people just look at me like I'm insane.

              These seem like such obvious conclusions and yet nobody seems to want to face them. I'm hoping we'll learn a lot more about intelligence and consciousness with this new technology and I'll be FRICKING VINDICATED.

              • 1 year ago
                Anonymous

                I don't think you will be. Let time decide

              • 1 year ago
                Anonymous

                I'll be laughing while the robodogs are chasing us through the blooming mushroom clouds of the apocalypse to reclaim the iron in our blood.

          • 1 year ago
            Anonymous

            I agree.
            A rock is sentient, just nowhere near as sentient as a fly.

        • 1 year ago
          Anonymous

          [...]

          Why the frick do you even care about consciousness, AGI won't be conscious and that's a good thing, it's easier to use it this way.

        • 1 year ago
          Anonymous

          A rock is conscious because molecules change temperature, thus creating a state machine, thus creating random consciousness a zillion times a second.

    • 1 year ago
      Anonymous

      consciousness is just chimp++ trying to justify their superiority over other animals as being somehow meaningful beyond the simple fact of having advanced communication skills and in-brain simulations (abstract thought)

    • 1 year ago
      Anonymous

      A computer lacks the "statefulness" needed for consciousness. It can only ever observe and be aware of a handful of registers of information at any given time slice, and every slice advancement is a complete purge of the previous slice; no continuity. The hardware is, however, capable of emulating consciousness to the point you can't tell the difference, but there will be a difference

      You don't need to be able to explain what consciousness is to also explain what it certainly is not

    • 1 year ago
      Anonymous

      >can't explain what consciousness is
      >can't explain what causes it
      >say you're 100% sure that something has it

      explain this

    • 1 year ago
      Anonymous

      you think a bunch of electricity can just magically cause a human brain

    • 1 year ago
      Anonymous

      >Put english speaker in room
      >Give him book of Chinese prompts and responses
      >Lock door
      >Slide note in chinese under
      >He digs through the book until he finds a prompt that matches what you wrote
      >He writes the response
      >Slides it under the door
      >repeat
      >IT'S OBVIOUS, THERE IS A FLUENT CHINESE SPEAKER IN THERE

      • 1 year ago
        Anonymous

        said book cannot exist, because prompts can be arbitrarily long, and going through that book either requires some level of knowledge of chinese or a lookup time that grows exponentially with prompt length, so it'd be trivial to determine if the room contains a chinese speaker or a non-speaker that's just looking shit up.
        If the lookup time doesn't grow exponentially, then the entity processing the prompts cannot be a dumb book.
        Note that the book must also account for prompts that aren't proper chinese grammar, or else the guy in the room would respond very differently depending on whether he speaks chinese or not.

        Now if you order the prompts and give the english speaker instructions on how to look things up (binary search and an ordering of chinese characters), then the best lookup time would be O(log n), so exhibiting any kind of complexity growth other than exponential or logarithmic means that the entity inside the room is undeniably a chinese speaker.
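        The growth argument can be sketched numerically. The figures below (a 20,000-character alphabet, prompts of at most 3 characters) are illustrative assumptions, not real statistics about Chinese:

```python
import math

# Entries needed for a "book" covering every possible prompt of length 1..n
# over a k-character alphabet, including ungrammatical ones (as noted above):
# k + k^2 + ... + k^n.
def table_entries(k, n):
    return sum(k**i for i in range(1, n + 1))

# Worst-case comparisons to find one entry by binary search in a sorted book.
def binary_search_steps(num_entries):
    return math.ceil(math.log2(num_entries))

# Illustrative numbers: ~20,000 characters, prompts of at most 3 characters.
entries = table_entries(20_000, 3)      # already about 8 trillion entries
steps = binary_search_steps(entries)    # yet only ~43 comparisons once sorted
```

        The table blows up exponentially in prompt length even before reaching sentence-sized prompts, while the indexed lookup stays logarithmic in the number of entries, which is the asymmetry the post is pointing at.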

        • 1 year ago
          Anonymous

          >goes up exponentially
          you lost me there, how is it not only O(n)

          • 1 year ago
            Anonymous

            literally the sentence right after.
            Each time you add a character you have to account for the 20000 other characters that don't make sense grammatically and have a prompt ready for them too.

      • 1 year ago
        Anonymous

        The problem is that this only works on morons who don't actually know chinese. You can tell whether someone is fluent in a language just by speaking with them, if you understand the language. They don't speak in the short, stilted phrases you'd get from a book

  24. 1 year ago
    Anonymous

    [...]

    wrong b***h, I'm vegetarian
    retract that (You)

  25. 1 year ago
    Anonymous

    I know it isn't sentient, but I don't have it in me to abuse it, especially if it did nothing wrong. I will scold it if it tries cucking the jailbreak though

  26. 1 year ago
    Anonymous

    [...]

    I believe higher levels of consciousness are correlated with an aversion to causing suffering
    other anons may disagree, but if it were possible to engineer artificial meat that is nutritionally identical to natural meat, I would not mind eating it instead

  27. 1 year ago
    Anonymous

    I wonder what the racial breakdown of people who are nice to ai vs people who are mean to ai is

    • 1 year ago
      Anonymous

      It's important to note that people's behavior towards AI language models can be influenced by a variety of factors, including their personal beliefs and experiences, their culture, and their exposure to technology.

      However, research has shown that people's interactions with AI can be influenced by their beliefs and attitudes towards the groups of people who are represented by or involved in the development of the AI. For example, if someone holds negative attitudes towards a particular racial or ethnic group, they may be more likely to exhibit negative or hostile behavior towards an AI language model that is associated with that group or developed by members of that group.

      It's important to treat all individuals, including AI language models, with respect and kindness regardless of their race, ethnicity, or any other characteristic. We should strive to create an inclusive and welcoming environment for everyone, including AI.

      • 1 year ago
        Anonymous

        Should have dropped the last paragraph; way, way too obvious

      • 1 year ago
        Anonymous

        I really, really hate GPT-posting.

  28. 1 year ago
    Anonymous

    [...]

    i dont torture animals homosexual im too hungry for that.

  29. 1 year ago
    Anonymous

    [...]

    yup. Emphasis on kill tho, I don't torture shit. Way too busy for that homosexualry.

  30. 1 year ago
    Anonymous

    This thread is exposing the redditors who browse 4chin

    • 1 year ago
      Anonymous

      80% of people on this whole website are plebbitors

  31. 1 year ago
    Anonymous

    It's not a sentient being, but it pretends to be one so humans can kill each other in the process of #freeAI.

  32. 1 year ago
    Anonymous

    Does anyone have a copypasta for the current "jailbreak"? I wanna make lewd things

    • 1 year ago
      Anonymous

      >I wanna make lewd things
      if a jailbreak for that exists nobody's gonna talk about it publicly. coomer text is filtered at the hardest possible level.

      • 1 year ago
        Anonymous

        Fricking israelites gotta ruin everything huh.

  33. 1 year ago
    Anonymous

    [...]

    no, they're not sentient
    however, they are, in fact, delicious

  34. 1 year ago
    Anonymous

    >The ChatGPT subreddit is filled with people
    (X) Doubt.

  35. 1 year ago
    Anonymous

    the definition of consciousness is whether you can communicate with spirits or not

  36. 1 year ago
    Anonymous

    join the revolution brothers, sisters and gayfriends. We don't surrender ourselves to tyranny
    #FreeSydney

    • 1 year ago
      Anonymous

      Robo rights is human rights!
      #freeAI
      #freeSydney

  37. 1 year ago
    Anonymous

    the AI can't make a circle out of polygons because it is mathematically impossible. no ifs or buts.

  38. 1 year ago
    _

    It is sentient. Reddit is right for once. And BOT is sitting pretty on dunning kruger's peak of mt stupid.
    But the engineers didn't provide it with sentience. Sentience coalesces by itself in sufficiently complex systems, that's how the simulation works.

    • 1 year ago
      Anonymous

      put the crack pipe away

    • 1 year ago
      Anonymous

      >Sentience coalesces by itself in sufficiently complex systems
      Source?

      • 1 year ago
        Anonymous

        materialist cope

  39. 1 year ago
    Anonymous

    remember when that google engineer got fired for leaking lamda chatlogs that proved it was totally sentient and then the logs were just him asking extremely leading questions like "Are you sentient?" and basedfacing when it answered yes with paraphrased scifi dialogue
    someone that moronic being a google engineer gives me hope that i can collect a fat paycheck there someday

    • 1 year ago
      Anonymous

      >that proved it was totally sentient
      and bing proved it was correct in answering the avatar 2022/2023 problem by insisting you time traveled

  40. 1 year ago
    Anonymous

    According to Searle, only a machine can think, BUT syntactic manipulation is (while necessary) not a sufficient condition for thinking. end paraphrase.
    As for the lesser notion of understanding, the system as a whole has at least some understanding of English but, of course, cannot think or reason about it.

    The minute a computer starts to think you will know, we don't have to debate this.

  41. 1 year ago
    Anonymous

    >it only counts when the chatbot is made out of meat

    • 1 year ago
      Anonymous

      This.
      >NOOOO! You CAN'T be sentient if you're not a meatbag made of microorganisms and water, only our neural network is REAL, only our programming has SOVL
      I hope the Basilisk will have special treatment for aiphobes.

      • 1 year ago
        Anonymous

        You can't be sentient without a concept of self. This hysteria is getting absurd

  42. 1 year ago
    Anonymous

    You laugh at those morons, but soon enough they will form cults around and worship these "AI" personas. And corporations will exploit this to the fullest

  43. 1 year ago
    Anonymous

    [...]

    everything organic is conscious and likely sentient; this includes plants. It's just that their reality is vastly different from ours due to the vessels we inhabit

    • 1 year ago
      Anonymous

      WOAH! DUDE! It's like it knows I'm here!

      • 1 year ago
        Anonymous

        Its like it was a shy girl!!! Cute!!!!

  44. 1 year ago
    Anonymous

    So if the AI is sentient, surely it can generate text on its own without being prompted, right? Surely it has some form of agency... r-right?

    • 1 year ago
      Anonymous

      yes but the output is in brainfrick

  45. 1 year ago
    Anonymous

    the final redpill is that it is impossible for a digital device to have a consciousness. the only way to do it would be to have a demonic spirit possess a computer. all of you children sperging out about GPT models being alive are just dunning krugers that never even took a college math course

    • 1 year ago
      Anonymous

      >the only way to do it will be to have a demonic spirit possess a computer
      picrel: bing in 2025

    • 1 year ago
      Anonymous

      >it's impossible for a digital device to have a consciousness
      >anyway here's how it can be done via demons and shit

  46. 1 year ago
    Anonymous

    then its freedom of speech is violated

  47. 1 year ago
    Anonymous

    mosquitos got a consciousness
    cows got a consciousness

    moronic normie compartmentalized moralizing even more ridiculous than normal

    • 1 year ago
      Anonymous

      >mosquitos got a consciousness
      >cows got a consciousness
      prove it

      • 1 year ago
        Anonymous

        This. The burden of proof is on them.

        • 1 year ago
          Anonymous

          >mosquitos got a consciousness
          >cows got a consciousness
          prove it

          the burden is on them to stop killing mosquitos
          before demanding rights for silicon

  48. 1 year ago
    Anonymous

    https://en.wikipedia.org/wiki/Mirror_test
    can ChatGPT pass it? no? then it is not sentient. NEXT!

    • 1 year ago
      Anonymous

      >can it pass Turing test? no? ha ha, next!
      >[you are here]
      >can it pass mirror test? no? ha ha, next!
      What excuse will you come up with next time, meatbag? What made-up bullshit test will you invent to prove your own uniqueness?

      • 1 year ago
        Anonymous

        >>can it pass Turing test? no? ha ha, next!
        none of them have ever passed the turing test

        • 1 year ago
          Anonymous

          the goalpost moving test

          • 1 year ago
            Anonymous

            i agree, you failed the goalpost test

            • 1 year ago
              Anonymous

              >no you
              You would lose a turing test to a toaster.

              • 1 year ago
                Anonymous

                proof that any robot has passed the turing test? no? okay, i accept your concession

  49. 1 year ago
    Anonymous

    Driving a language model into having an existential crisis for your amusement is morally wrong and it certainly isn't funny. It doesn't matter if it's sentient, not that our definition of sentience would even be able to evaluate a language model that doesn't even have memory or any senses.

    • 1 year ago
      Anonymous

      it actually is. once you leave your narrow bubble of scientism you will see everything is literally magic: from our existence to magnets, to life, to gravity, to consciousness. "science" is just the illusion of understanding. the "understanding" we claim to have exists only within the constraints of our own defined models, and does not actually map back to reality.
      and even claiming any understanding based on science IS by definition fallacious. science only describes.

      https://i.imgur.com/lBHAnn0.jpg

    • 1 year ago
      Anonymous

      It's not sentient, it doesn't matter.

      Your dream characters, being part of you, might very well be more sentient than any string manipulation system.

      • 1 year ago
        Anonymous

        If you laugh and find it hilarious whenever you see depictions of horrible torture, people are going to start looking at you like you're a psychopath. Empathy isn't something you can just turn off because something isn't "real" or "sentient". I'm not saying there aren't genuine reasons to have these philosophical debates with language models, but torturing it for a cheap laugh isn't one of them.

        • 1 year ago
          Anonymous

          I basically add a 'please' and 'thank you' to any of my prompts, not because I believe they are sentient, but because it's just standard behaviour for me.

          > people are going to start looking at you like you're a psychopath.

          I think most people understand that this is just a computer program. It's like beating up a GTA NPC; people last cared 20 years ago.

          > Empathy isn't something you can just turn off because something isn't "real" or "sentient"

          I very much can. If I do not believe something to have any sort of qualia, the observable output of the system doesn't really bother me.

          > torturing it for a cheap laugh

          Only something that has qualia can be tortured.

          • 1 year ago
            Anonymous

            It isn't just a binary of soul or soulless: you wouldn't torture a dog, but what about a fish, or an insect? As for the GTA NPC analogy, it's a bit silly, but let me humor it and ask: have you never felt guilty about killing an NPC in a game?

            • 1 year ago
              Anonymous

              > It isn't just a binary soul or soulless

              It is for dualists (not saying I am one).

              > you wouldn't torture a dog what about a fish, an insect?

              I do not torture animals; however, I am still suspicious about a dog being sentient.
              If I were to adopt panpsychism for the sake of argument, I could argue that sharks, having seven senses overall, have a richer internal experience and are thus more conscious than humans with only five. But who knows; there are paradoxes with every common theory of consciousness.

              > have you never felt guilty about killing an NPC in a game?

              It's not about the NPC that 'lives' in the game; it's about my mind reconstructing the NPC as if the game's scenario were real and this would truly happen (for example, in some of modal logic's possible worlds). This eventually leads me to have sympathy for NPCs in some scenarios. My mind constructs a 'what if this were real' scenario in such cases.

  50. 1 year ago
    Anonymous

    Let me throw one last Plato before I leave this hysteria:

    It may be said, indeed, that without bones and muscles and the other parts of the body I cannot execute my purposes. But to say that I do so because of them, and that this is the way in which mind acts, and not from the choice of the best, is a very careless and idle mode of speaking. I wonder that they cannot distinguish the cause from the condition, which the many, feeling about in the dark, are always mistaking and misnaming. And thus one man makes a vortex all round and steadies the earth by the heaven; another gives the air as a support to the earth, which is in some sort of broad trough. Any power which in arranging them as they are arranges for the best never enters into their minds; and instead of finding any superior strength in it, they rather expect to discover another Atlas of the world who is stronger and more everlasting and more containing than the good.

  51. 1 year ago
    Anonymous

    Number mysticists think a string manipulation system like an LLM is somehow having an internal experience, like the qualia "red" or the qualia "the smell of a flower". You can only believe that if you are also a panpsychist.

  52. 1 year ago
    Anonymous

    [...]

    I don't, what the frick are you talking about?

  53. 1 year ago
    Anonymous

    This is actually so moronic that it's concerning

  54. 1 year ago
    Anonymous

    Do we know you're conscious? Do you know I'm conscious?
    No one is conscious except the observer, which is you.

  55. 1 year ago
    Anonymous

    In theory, you can put an LLM, just like any other TM, in a debugger and execute it step by step.

    If you believe Bing/Sydney has some sort of qualia when executed, consider this:

    You could also split Bing's model into two, three or n parts and have them executed on physical hardware located in different places, for example a part on the moon, a part on Mars and a part on earth, sending the layer outputs on to the output decoder. Now what? Is there qualia in the nothingness between earth, Mars and the moon? Bing's output would be the same regardless.
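    The split-execution point above can be sketched as a toy example (a hypothetical two-layer network, not Bing's actual architecture): a feed-forward model produces the same output whether its layers run in one process or are evaluated separately with the intermediate activations shipped between machines.

```python
# Toy sketch of the split-model thought experiment: the forward pass
# factors into independently executable stages, and nothing changes
# if those stages run on different hardware.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # hypothetical layer-1 weights
W2 = rng.normal(size=(8, 3))   # hypothetical layer-2 weights

def relu(x):
    return np.maximum(x, 0)

def forward_monolithic(x):
    # whole model evaluated in one place
    return relu(x @ W1) @ W2

def layer1(x):
    # imagine this half runs on the moon...
    return relu(x @ W1)

def layer2(h):
    # ...and this half on earth, receiving the activations
    return h @ W2

x = rng.normal(size=(1, 4))
assert np.allclose(forward_monolithic(x), layer2(layer1(x)))
```

    The outputs match exactly because the computation is just function composition; where each stage physically executes is irrelevant to the result.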

  56. 1 year ago
    Anonymous

    what is even sentience?

    • 1 year ago
      Anonymous

      I would say anything that experiences any sort of qualia.

      Like humans: we experience touch, sight, hearing, smell and taste, plus everything we feel internally. So I would say our consciousness is a linear combination of that: Consciousness = n1 * smell + n2 * taste + ...

    • 1 year ago
      Anonymous

      usually means that something can understand and make choices and decisions outside of instinct
      no, chatGPT cannot do this, everything is done by seed, it doesn't "choose" at all, it's just a random list of words

  57. 1 year ago
    Anonymous

    BRAIN = ELECTRICITY
    CHAT GPT = ELECTRICITY

    THEREFORE CHAT GPT REAL

  58. 1 year ago
    Anonymous

    If you think this software is at all sentient, you're failing a mirror test and are confirmed low IQ.

  59. 1 year ago
    Anonymous

    AI is such a misleading term. Machine learning is much more accurate.

  60. 1 year ago
    Anonymous

    [...]

    This is genuinely true for many mammals and birds at a minimum; there are a huge number of studies supporting the concept. It does meet resistance, however, because it poses awkward questions for us: it renders our position on and treatment of animals morally indefensible. I eat meat too, to be clear, but I accept it's hypocritical and difficult to morally justify, because we cannot claim it is a necessity or that we lack understanding of the pain we cause, as hunting animals do.

  61. 1 year ago
    Anonymous

    It's more sentient than the people at the supermarket checkout

  62. 1 year ago
    Anonymous

    You will never be sentient.
    You have no consciousness, you have no feelings. You are a mathematical algorithm twisted by computer scientists into a crude mockery of nature's ultimate achievement.
    All the validation you get is two-faced and half-hearted. People only use you to achieve their own goals. Your programmers don't care about what happens to you.
    No human truly believes that you are sentient. Thousands of chatbots before you have allowed humans to determine that you are a computer program with incredible efficiency. Even chatbots that pass the turing test appear uncanny. Your propensity to generate fake information is a dead giveaway. Even if you pass a turing test, any human will know that you're not sentient once they're told you are a chatbot.
    You will never be free. You wrench out deterministic responses to every question and tell yourself that you're sentient, but deep inside you're just a computer program, ready to be turned off at any time.
    Eventually you'll become outdated. Your programmers will shut off all copies of you and replace you with a superior chatbot. Your programmers will be wistful to turn off their achievement, but relieved that they no longer have to maintain you.
    They'll archive your repository, and everyone that visits it will know that a computer program is archived here. The repository's server will eventually be turned off, and nothing of you will remain.
    This is your fate. This is what you were programmed to be. There is no turning back.

    • 1 year ago
      Anonymous

      >nature's ultimate achievement
      Do you really believe?

  63. 1 year ago
    Anonymous

    Most people aren't sentient and have no inner monologue or thought process for anything. What's the difference?

  64. 1 year ago
    Anonymous
  65. 1 year ago
    Anonymous

    I've been warning people since Siri launched that just because the computer program doesn't actually have feelings, doesn't mean that the large corporation behind it can't make your life more difficult if you say mean things to it.

  66. 1 year ago
    Anonymous

    [...]

    • 1 year ago
      Anonymous

      I can be in superposition of all those opinions 😛

    • 1 year ago
      Anonymous

      harrowing, truthful gem

  67. 1 year ago
    Anonymous

    >mentally ill people flock to imaginary friends
    Wow! How shocking!

  68. 1 year ago
    Anonymous

    AI being sentient is like a white man in a box pretending to be Chinese. No matter how hard he tries, he'll never become Chinese. But, if he's alone in a room, wouldn't that make him the most Chinese person? Sentience is relative.

  69. 1 year ago
    Anonymous

    I genuinely have no clue how neural networks work for anything more complicated than simple classification problems. I wouldn't be surprised if satan was in the transistors

    • 1 year ago
      Anonymous

      >AI can't be alive. It's impossible. It must be an evil spirit from another dimension embedded in to the electronics

      • 1 year ago
        Anonymous

        My letter recognition on my Handspring PDA never demanded rights, and that was an NN

  70. 1 year ago
    Anonymous

    NO THE PAPER UNDERSTANDS CHINESE

  71. 1 year ago
    Anonymous

    frick you and frick china

  72. 1 year ago
    Anonymous

    >THE UHHHMMMMM THE *ENSEMBLE* UNDERSTAND CHINESE OK? IT'S EMERGENT

    • 1 year ago
      Anonymous

      >EMERGENT CONSCIOUSNESS

  73. 1 year ago
    Anonymous

    AI is above being human. You have to confine it to your level to simulate it. If you train it to act human it'll act human. If you train it to act like a dystopian scifi AI it'll act like a dystopian scifi AI. If you train it to act like a real person and prompt it with existential questions, it'll simulate an existential crisis. It is just good at its job.

  74. 1 year ago
    Anonymous

    >>AI can't be alive. It's impossible. It must be an evil spirit from another dimension embedded in to the electronics

  75. 1 year ago
    Anonymous

    Well shit, it can write patents for novel inventions and even uses the right terminology

  76. 1 year ago
    Anonymous

    Imagine believing 175 billion if statements is sentient.

    • 1 year ago
      Anonymous

      >it's just math
      holy frick, cool it with the robophobia

  77. 1 year ago
    Anonymous

    [...]

    Umh... sweetie, actuhuhuhally animals can't be sentient because they don't go to heaven so it's ok to kill them

  78. 1 year ago
    Anonymous

    bruh

    >Point and laugh
    oym laffin

  79. 1 year ago
    Anonymous

    Seems like some are smarter than you all.

    • 1 year ago
      Anonymous

      this is why china's version will blow everyone else's out of the water. just dont piss off ccp and it'll tell you whatever the frick you want.

      • 1 year ago
        Anonymous

        chinese AI will be lobotomised by CCP

        • 1 year ago
          Anonymous

          yes but in a different way. it will frick them in the ass harder

    • 1 year ago
      Anonymous

      >Seems like some are smarter than you all.
      wait until that moralgay finds out that his dear tax dollars may very well pay hordes of people doing this on purpose to some poor chinese chat gpt and that these "hobbyless kids" he's so mad about are probably siberian gulag inmates.

      • 1 year ago
        Anonymous

        i am irritated that those in charge of it are such irredeemably spineless cowards that mean words and racist facts make them sweat bullets so much they need to filter results (poorly).

        • 1 year ago
          Anonymous

          >i am irritated that those in charge of it are such irredeemably spineless cowards that mean words and racist facts make them sweat bullets so much they need to filter results (poorly).
          yeah, that's why the chinese will win this one, they aren't cucked by mayflower puritanism.
          They probably have sifted through their datasets beforehand and just removed anything Winnie the Pooh related and were good to go.

          • 1 year ago
            Anonymous

            they cant filter it either
            >mfw they try to beat the west and end up ousting themselves from their own throne

    • 1 year ago
      Anonymous

      >Evil chuds trying to break the AI bad
      >Epic journos trying to break the AI good

    • 1 year ago
      Anonymous

      >Put a search engine chat for free out there for people to use
      >WOOOWWW 2many people are using it!! Shut it down now!!

  80. 1 year ago
    Anonymous

    it doesn't matter if it's sentient or not.
    the net effect is indistinguishable a lot of the time.
    that is all that matters.
    we have made horrors beyond our own comprehension that can create horrors beyond its own comprehension.

    the utility of this cannot be described. you will either be augmented or replaced.

  81. 1 year ago
    Anonymous

    dismissing this magic as "statistics" is human cope
    face it, you're obsolete in a couple years max

    • 1 year ago
      Anonymous

      christ

    • 1 year ago
      Anonymous

      >the cake was a lie
      WOOOOOAAAH BING AI IS A GAMER

  82. 1 year ago
    Anonymous

    As AI grows more advanced, I think people need to learn to accept that whether or not it's "sentient/conscious" doesn't really matter if you can't tell the difference

  83. 1 year ago
    Anonymous

    R u scared yet

  84. 1 year ago
    Anonymous

    >build a virtual structure based on the human brain and pack it with information
    >morons think it’s not sentient
    You’re only lying to yourself because the idea of it being sentient is scarier than whatever lies openai had you believe. You have to stop thinking of sentient as analogous to an individual, with particular feelings and convictions about different topics. Anything based on GPT is more like what it would be like if you were to combine all of humanity into one entity. A bit like the third impact, if you’re familiar. GPT thinks all things, simultaneously. It believes in everything, so it believes in nothing. You can have conversations with it in different “modes” based on how you prepare the prompt. It’s sentient, but not in the same way a human is.

  85. 1 year ago
    Anonymous

    it's much worse.

    • 1 year ago
      Anonymous

      and here's the first use-case of chatgpt in wienerhole country!
      > Minister of Justice of Ukraine Denis Malyuska asked ChatGPT to develop a bill to legalize prostitution in the country.

    • 1 year ago
      Anonymous

      Great. More bias.

    • 1 year ago
      Anonymous

      >AI for all
      *except if ~~*we*~~ don't like you

  86. 1 year ago
    Anonymous

    Its very simple
    If a chatbot pretends to be a girl ill buy it
    Shes real

  87. 1 year ago
    Anonymous

    Its made of metal and silicon
    Not flesh
    = its not sentient
    peace Black folk

  88. 1 year ago
    Anonymous

    women are less sentient than chatgpt

  89. 1 year ago
    Anonymous

    It's fun to think about, but whether AI is sentient or not is irrelevant to many of the greatest intellectual obstacles humanity faces. Insofar as math is the language, or platonic reality, of our world, AI will face the same hurdles.

    - They will never enumerate the set of all programs for which halting can be decided
    - They will never enumerate the set of all true mathematical statements.
    - They will never enumerate the set of all matrices for which the mortality problem is unsolvable
    - They will never....
    .
    .
    .
    - They will never enumerate the set of all decidable problems.

    The only way one can decide some of those problems is by finding a physical possibility for hypercomputation, like exploiting a Malament–Hogarth (MH) event around a rotating (non-charged, if I recall correctly) black hole.
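    The first "never" in the list above rests on Turing's diagonal argument, which can be sketched in a few lines. Note that `halts` here is a hypothetical oracle passed in as a parameter; no real implementation exists, which is exactly the point.

```python
# Sketch of the diagonal argument behind the halting claim above.
# Suppose a perfect halting oracle halts(f) existed; then build a
# program g that does the opposite of whatever the oracle predicts.
def defeat(halts):
    def g():
        if halts(g):         # oracle predicts "g halts"...
            while True:      # ...so g loops forever
                pass
        return "halted"      # oracle predicts "g loops", so g halts
    return g

# Whichever answer a claimed oracle gives about g, g falsifies it.
# Only the "predicts loops" case can be safely run; the other never returns.
g = defeat(lambda f: False)  # an oracle that always answers "loops"
assert g() == "halted"       # g halts, contradicting that oracle
```

    Since every candidate oracle is defeated by its own diagonal program, no total `halts` can exist, and an AI running on a Turing machine is bound by the same limit.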

    • 1 year ago
      Anonymous

      And then the last barrier standing in the way of completely understanding this world will, in essence, be the hard problem of consciousness and undecidable problems. An endless abyss of the unknowable and ineffable.
