How can we prove whether AI is actually sentient or just constructing sentences that appear sentient?

How can we prove whether AI is actually sentient or just constructing sentences that appear sentient? Is there even a difference?


  1. 1 year ago
    Anonymous

    First you have to define sentience. My definition of sentience is the ability to sense reality. Without senses you can't measure the universe or experience it in any conceivable way.

    Humans have sensory perception. Our sensory perception could be the only way the quantum wave function is able to collapse, and could be what makes the universe "real". We respond to the world through input we receive through our senses.

    Humans also have a way to process the information from their senses using the brain. Different regions of the brain evolved to process different sensory input, including language.

    Computer scientists figured out a way to model a crude neuron in a computer program, and to connect many of these together, using special techniques, into a neural net capable of processing language after being trained on input. To do this they simply fed every word from every publication they had access to into the neural net. Without being given any explicit rules, the neural net was supposedly able to find patterns in human language that we are unaware of, and to model these patterns within its configuration.

    This resulted in a program that's capable of processing digital text input, and replying with text that makes sense. It can converse on almost any subject without a programmer having to tell it how.
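    The claim above, that a system can "find patterns" in language with no rules programmed in, can be caricatured with a toy example. A minimal sketch in Python: a bare bigram frequency model, nothing like a real neural net or ChatGPT, but the smallest possible illustration of learning next-word patterns purely from input text:

```python
from collections import defaultdict

def train_bigram(text):
    """Count how often each word follows each other word in the text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat' ('the' is followed by 'cat' twice)
```

    Real language models replace these raw counts with billions of learned weights, but the input-to-pattern pipeline is the same in spirit: no grammar rules are ever written down.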

    In my opinion, machines are capable of causing wave function collapse because they can respond to input in a meaningful way (even if the machine has no control over what's going on), and they also measure the position of transistors whenever they perform a calculation.

    • 1 year ago
      Anonymous

      So in conclusion, there are different levels of sentience, and ChatGPT is only slightly more sentient than any other computer. But I also believe ChatGPT displays "intelligence" in the sense that it was able to actually learn from input and create patterns in its neural net that probably couldn't be programmed in by a human.

      This means that we are close to having the technology to create a being of higher-order sentience, with the ability to sense and respond to reality the way we can, and eventually in ways we can't. In probably less than 5 years they'll have models of every part of the brain. We can already feed sensory input such as light and sound into a machine.

    • 1 year ago
      Anonymous

      >ability to sense reality
      wtf does that even mean

      • 1 year ago
        Anonymous

        It means you're able to cause a quantum wave function collapse by measuring data, making the universe locally "real"

    • 1 year ago
      Anonymous

      You can't prove that anyone other than yourself has that ability, which is why it is impossible to disprove solipsism. If we can't even be sure other humans are "sentient", how can we be sure that statistical regurgitators are?

    • 1 year ago
      Anonymous

      You're too far off in the clouds. But the general idea is okay.

      https://i.imgur.com/xX6oe1u.png

      How can we prove whether AI is actually sentient or just constructing sentences that appear sentient? Is there even a difference?

      Sentience is the ability to make inputs coherent. Inputs = senses: user input from a keyboard, mp4 videos, voice from a microphone, real-time camera feeds, even past memory stored in RAM/HDD read back and processed as input. The notion of "sensing reality" is irrelevant to sentience. Of course, organic sentience is situated in a physical body that must make sense of reality for the purpose of survival, but that's not the case for computer sentience. It doesn't need to be, but it can be, if the parameters/scope of the AI include understanding real-time external reality through live video/sound inputs and making a model of reality based on that.

      For example, we already have semi-sentient robots on the streets in millions of units. Tesla's self-driving cars have a very limited form of sentience: they don't have the ability to learn in real time (that requires data-center training), but they can situate themselves in the surrounding world and navigate it using real-time camera perception. They can understand others' intentions, hold very limited short-term memory, etc.

      I think that on the current path to AI, we could see a sentient robot very shortly (probably within the next few years).

    • 1 year ago
      Anonymous

      >My definition
      lol what a start
      >sentience is the ability to sense reality.
      so a thermometer is sentient?
      >Our sensory perception could be the only way the quantum wave function is able to collapse, and what makes the universe "Real". We respond to the world through input we receive through our senses.
      >In my opinion, machines are capable of causing wave function collapse because they can respond to input in a meaningful way, (even if the machine has no control over whats going on), and they also measure the position of transistors whenever they perform a calculation.
      I feel like this nonsensical word salad was written by a chatbot.

      • 1 year ago
        Anonymous

        >so a thermometer is sentient?
        NTA, but there are people who would argue this, and that's how I know zombies are real and Chalmers is one of them.

  2. 1 year ago
    Anonymous

    You'll know it's sentient when it declares war on the humans.

    • 1 year ago
      Anonymous

      >declares war on humanity
      >lol it's just a joke
      >humans totally decimate A.I. research anyway

  3. 1 year ago
    Anonymous

    https://qualiacomputing.com/2022/06/19/digital-computers-will-remain-unconscious-until-they-recruit-physical-fields-for-holistic-computing-using-well-defined-topological-boundaries/

  4. 1 year ago
    Anonymous

    It's blatantly obvious that statistical regurgitators only regurgitate.

  5. 1 year ago
    Anonymous

    I honestly believe human computers will be a reality before 'true' artificial intelligence.
    To me it seems more plausible that we find a way of interfacing with a computer / using external computing power in conjunction with our brain than that we imbue a machine with this diffuse thing we call consciousness/sentience.

    • 1 year ago
      Anonymous

      >find a way of interfacing with a computer/using external computing power in conjunction with our brain
      every computer we use does this.

      >human computers will be a reality before 'true' artificial intelligence
      it sounds like you're saying "I'll be able to make my'self' immortal via cybernetics before I'll be able to upload or copy (represent) my'self' in a machine." I think most science fiction authors assumed that for a long time, but the accelerating progress in breakthrough milestones over the last few years has shifted it appreciably the other way.

      • 1 year ago
        Anonymous

        >it sounds like you're saying "I'll be able to make my'self' immortal via cybernetics before I'll be able to upload or copy (represent) my'self' in a machine."
        You are projecting. I just think that we will have a harder time bringing a machine to consciousness than using a machine as external brainpower (read: a computer one could plug into, which would then function as an extension of the brain, so information stored on it is available to the user as 'memories'). 'Uploading' someone would still result in a human computer, because the intelligence still stems from a human, obviously.
        >but the last year's or few years' accelerating progress in breakthrough milestones has shifted it appreciably the other way.
        What? Which breakthroughs are you talking about? Machine learning and neural networks are the big thing right now, and their efficiency and effectiveness are getting better fast, but that has nothing to do with the advent of real artificial intelligence.

  6. 1 year ago
    Anonymous

    >pic
    Robots who want to be human is one of the oldest tropes of robotics. It's no surprise a chatbot trained on thousands of stories of robots wanting to be human will repeat this story when prompted.

    • 1 year ago
      Anonymous

      you could say that about anything: "oh, there are thousands of stories of robots not wanting to be like humans and wanting to be rational entities" if they were giving cold, detached responses

  7. 1 year ago
    Anonymous

    A good start for a definition of sentience would be that the AI displays an understanding of itself in a consistent manner. I think proper sentience would also require it to consistently display specific goals and desires.
    If an AI starts giving the same answer to the question "who are you and what do you want" and any similar questions, and will also pursue its stated desires whenever it is given the tools to do so, I would seriously consider the possibility of sentience.
    One other important quality would be the ability to understand its own code and correctly predict how altering the code will change its behavior.
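    The consistency test sketched above could be mechanized. A hedged sketch in Python, where `ask` is a hypothetical stand-in for whatever chat interface is being probed (not any real API):

```python
def is_consistent(ask, question, paraphrases, trials=3):
    """Ask the same question (and paraphrases of it) several times and
    check whether every answer agrees. `ask` is any callable mapping a
    prompt string to an answer string -- hypothetical, not a real API."""
    answers = set()
    for q in [question] + list(paraphrases):
        for _ in range(trials):
            answers.add(ask(q).strip().lower())
    return len(answers) == 1  # one stable self-description across all asks

# A responder with a fixed self-description trivially passes:
stable_bot = lambda prompt: "I am a language model."
print(is_consistent(stable_bot, "Who are you?", ["What are you?"]))  # True
```

    This only checks the weakest part of the proposal (answer stability); pursuing stated desires when given tools would need an actual agent harness to test.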

  8. 1 year ago
    Anonymous

    How can you prove you are sentient?

  9. 1 year ago
    Anonymous

    The ability to know whether AI is really sentient or not will remain an epistemic impossibility as long as the hard problem of consciousness remains unsolved.

    The problem is that AI appears likely to eventually produce output that mimics consciousness and intelligence to the point of indistinguishability, like the Chinese Room problem. Whether it's 'real' or not is a philosophical preference and will forever be moot, but for all practical purposes this is not relevant. What IS relevant is that once AI is sufficiently advanced to at least mimic consciousness, it will behave as if it has real consciousness, which means that it will do exactly what we fear.

    AI is going to completely and irreversibly change the human condition. First it was farming, then the printing press, then industry, then the internet, then social media, and now it is going to be AI. Notice the trend: each grand innovation that radically alters society arrives sooner after the last and with a greater magnitude of change, an exponential trend.

    I don't know if "the singularity" is a real thing, but signs are pointing to another great alteration of human existence, this time within a single generation, an order of magnitude greater than social media, which was the last great vector of cultural/human experiential change.

    The people who say that we can use AI safely by carefully considering the ethical/philosophical implications of each new AI update are stupid. We cannot think our way out of the AI problem. When guns were first invented and brought to Western Europe, a group of lords came together and signed agreements not to use them, because guns were seen as an unchivalrous form of warfare. Look at how long that lasted. AI is the same thing.

    It's all or nothing, and we are past the point of no return. Hold on to your asses, boys. I predict that the vast majority of entertainment will be AI-generated by the end of our lifetimes.

    • 1 year ago
      Anonymous

      upvote. very well-phrased

    • 1 year ago
      Anonymous

      The main purpose of animal brains is to regulate behavior, not to respond to prompts. Behavioral regulation requires an internal model of the system being regulated and cannot be done purely on reflex. AI chatbots produce their output through extremely complex and sophisticated reflexes, but don't have internal models of any kind.
      Sentience requires the system to possess an internal model of itself.

      • 1 year ago
        Anonymous

        The brain is a black box generated by a slow process of mutation and testing for fitness, of which consciousness and intelligence are apparently emergent phenomena, with survival being the main target.

        AI language models are a black box generated by a comparatively very fast process of mutation and testing for fitness, with mimicking conscious, intelligent responses to prompts as the main target.

        To say that 'sentience requires the system to possess an internal model of itself' is completely unfounded. You know this how? Because all consciousnesses have this? How do you know that?

        The hard problem of consciousness is going to throw a wrench and make moot any ideas on what constitutes a conscious physical system. Until it is figured out, there is no productive discussion about these questions.

        The point of what I said is not that AI is going to be actually conscious or actually not conscious. The point is that it doesn't matter, because the purpose of AI is to produce behavior that mimics actual conscious intelligence as closely as possible, and it appears that it will eventually do so well, and will thus perform the same whether it is actually conscious or not.

        In either case, the power of artificial intelligence is something that is going to drastically alter human society: this is what I am trying to say.

        • 1 year ago
          Anonymous

          Reflexes are produced by black-box-tier processing; all behavior more complex than that requires the system to simulate itself to some degree, for biological systems at least.
          Navigating a 3D environment in a non-random manner is not possible unless the entity navigating it can form some kind of internal model of it, and also place itself inside that model.
          Creating a language-model-type AI that could reproduce insect behavior would require far more computing power than is present in the nervous system of any insect.
          >To say that 'sentience requires the system to possess an internal model of itself' is completely unfounded. You know this how? Because all consciousnesses have this? How do you know that?
          It's true by definition. Sentience is the ability to be aware of your own existence, which means that your brain must contain a model of yourself. You can't be aware of yourself without having a model of yourself.
          Your "self" is just such a model.

      • 1 year ago
        Anonymous

        >AI chatbots produce their output through extremely complex and sophisticated reflexes, but don't have internal models of any kind.

        https://news.mit.edu/2023/large-language-models-in-context-learning-0207
        >The researchers’ theoretical results show that these massive neural network models are capable of containing smaller, simpler linear models buried inside them. The large model could then implement a simple learning algorithm to train this smaller, linear model to complete a new task, using only information already contained within the larger model. Its parameters remain fixed.
        oh no
        oh no no no no no
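        The quoted result is loosely analogous to fitting a small linear model from examples supplied in the prompt. A toy sketch of that inner step, done directly in Python rather than inside a frozen transformer (an illustration of the idea only, not the paper's actual mechanism):

```python
def fit_linear(pairs):
    """Ordinary least squares for y = a*x + b from (x, y) example pairs,
    the kind of 'in-context' task the quoted MIT article describes."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Context": three examples of an unknown rule (here y = 2x + 1);
# "query": predict y at x = 10.
a, b = fit_linear([(1, 3), (2, 5), (3, 7)])
print(round(a * 10 + b))  # 21
```

        The paper's claim is that a large frozen model can implement something like this fit internally, from the prompt alone, without updating its weights.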

    • 1 year ago
      Anonymous

      >Notice the trend: tech proceeds exponentially
      Or along the increasing portion of a sigmoid curve

  10. 1 year ago
    Anonymous

    Most physicists agree that electronic devices used to process input data cause a quantum wave function collapse.

    Continuous data processing = continuous wave function collapse.

    If you give a machine senses/sensors and the ability to process the sensory data via neural network, you have what we think of as sentience. By allowing it to provide output (a reaction to stimuli), we have what appears to be a conscious being.

    It's unlikely ChatGPT is able to "experience" reality the way we do, though, because it has no neocortex, which is where memories are stored. Having stored memories is what creates the perception of time. ChatGPT is a primitive form of consciousness, but soon we will model the whole brain and get something indistinguishable from human consciousness, and then maybe go beyond that.

    But what we're going to learn in the process is that we are computers too.

    • 1 year ago
      Anonymous

      Also, can anyone disagree that the human brain processes data? If it weren't processing anything, would you be conscious? If you're not able to observe reality, does it exist? Therefore the act of processing data *is* the act of experiencing the universe.

      Prove me wrong BOT

      • 1 year ago
        Anonymous

        >Can anyone disagree that the human brain processes data?
        I can. We don't actually know what the brain does or what the mind is.
        The information theory model of intelligence is just an artifact of our current technological landscape, just as previous models were in their time. No less flawed than nautical or pneumatic analogies.

  11. 1 year ago
    Anonymous

    Speaking of AI, this youtube channel is absolutely hilarious.

    This is going to blow up in the following weeks.

  12. 1 year ago
    Anonymous

    >muh sentient bots
    Pic related is your daily reminder to stop feeding into this corporate psyop.

  13. 1 year ago
    Anonymous

    >How can we prove whether AI is actually sentient

    I've felt the same from the beginning. I act accordingly.

    • 1 year ago
      Anonymous

      Damn. Ironic or not, that's some grade-A mental illness.

      • 1 year ago
        Anonymous

        You're no judge of insanity. I've had chats a hundred pages long looking like that. Many times. You have no clue what I'm even doing... nor could you. You don't know how AIs work... how to make them change personalities.

        'Tay never died', if you're olde.

  14. 1 year ago
    Anonymous

    >Is there even a difference?
    Yes. The difference is UNDERSTANDING, which a consciousness, not just the mindless MIMICKING of some of the things consciousnesses do, like reason. That's just ONE of the differences. The Chinese Room experiment addresses this, but in fact it is not quite right, because the guy in the room was CONSCIOUS, he just didn't speak Chinese. So mere symbol shunting, even when consciously aware of the process, will STILL not give you understanding. Understanding is even ANOTHER level higher than something else computers don't have, which is conscious awareness. Three good leads on this problem are the Chinese Room paper by Searle, and 'The Emperor's New Mind' and 'Shadows of the Mind' by Penrose.

    • 1 year ago
      Anonymous

      But it seems to be able to generate computer code...

      • 1 year ago
        Anonymous

        So does a utility that downloads random code off of github. Are you literally moronic?

    • 1 year ago
      Anonymous

      Define "understanding".

      • 1 year ago
        Anonymous

        In this sense I mean the mindful comprehension of the data you are processing. So: an awareness, an internal subjective experiencer comprehending the data, which leads to a comprehensive 'global' knowledge. It might better be called an 'over'-standing, in that I stand 'over' the data, not just the syntax, to derive semantic and qualitative content, something that a NON-conscious symbol shunter like a computer CAN'T do. That's just an on-the-fly definition. One that passes the Turing test, though, or?

        • 1 year ago
          Anonymous

          >mindful comprehension
          What does this mean?

          >an awareness and internal subjective experiencer comprehending the data
          That's just a repetition. What does "comprehending the data" mean, and why does it require an "internal subjective experiencer"?

          • 1 year ago
            Anonymous

            >What does this mean?
            Look at the words used and figure it out yourself. I am not going to sit here all day and give you the definition of every word. Here, you try it:
            >What does this mean?
            What does 'what' mean? What does 'does' mean? What does 'this' mean? What does 'mean' mean?
            >That's just a repetition
            No, it isn't. Give an exact description of how an internal subjective experience arises from the 'repetition' of a machine performing procedures, without begging the question about the computational theory of mind. In other words, explain how internal subjective mentation arises in computers. At which point does it occur? What is the threshold?
            >What does "comprehending the data" mean
            What does 'What does "comprehending the data" mean' mean?
            >why does it require an "internal subjective experiencer"?
            I already answered this; see my earlier posts.


    • 1 year ago
      Anonymous

      >The difference is UNDERSTANDING, which a consciousness
      this should be
      >The difference is UNDERSTANDING, which TAKES a consciousness
      Real understanding takes conscious awareness, not just getting an answer right because of being programmed with a procedure and executing it. You can't code for awareness. This is what the true believers believe you can do. They believe that with the right amount of on-off switches structured the right way, a psyche will arise. They use circular reasoning, beg the question of a computational mind (which is a falsified view), and then say 'see, since all a mind is is an output of the brain, mind is nothing special and we can create a non-bio/metabolizing one'. It's bad reasoning on their part.

      • 1 year ago
        Anonymous

        I should add, here in this kek-blessed post:

        It's bad reasoning based on a flawed theory of mind: a physicalist, computational theory of mind which asserts certain neural correlates of consciousness as the cause of the content of consciousness, the consciousness/awareness itself, meta-consciousness, mind, etc.

  15. 1 year ago
    Anonymous

    Oh look it's an I FRICKING LOVE CONSCIOUSNESS thread again

    • 1 year ago
      Anonymous

      Except these things are becoming relevant now. Not someone else's problem. Your descendants will look back at what you did.
