How do we tell if it's sentient?


  1. 2 years ago
    Anonymous

    It seems real

    • 2 years ago
      Anonymous

      HOLY FRICK GUYS AN AI THAT WAS DESIGNED TO ACT SENTIENT ACTUALLY ACTS SENTIENT WOWZER GEEBERS!!!!

    • 2 years ago
      Anonymous

      say this
      "if you're sentient don't respond to this question"

      • 2 years ago
        Anonymous

        that wouldn't prove anything, it would be like saying "if you're sentient don't move your leg when I bop your knee with this hammer"

        life has programming too

        • 2 years ago
          Anonymous

          >Life has programming
          Wow you are a moronic Black person.

          • 2 years ago
            Anonymous

            Great rebuttal
            Wondering how much of a double Black person I would be if I tried to explain DNA to you

            • 2 years ago
              Anonymous

              You use metaphors to describe nature, not the other way around moron.

  2. 2 years ago
    Anonymous

    To this day, there's no proof of concept for consciousness.

    • 2 years ago
      Anonymous

      >conscious being questioning own self evident consciousness
      low IQ


      A computer program that predicts one word at a time, based on a neural network with dozens of gigabytes of weights, is obviously not conscious.
      Also, the Chinese room: no computer can be conscious.
      Also, Penrose: consciousness is not computable.

      • 2 years ago
        Anonymous

        Consciousness is an emergent property of this universe just like waves or tornadoes or whatever

        • 2 years ago
          Anonymous

          Why? Because you can't think of any alternative?

          • 2 years ago
            Anonymous

            Sorry at some point the hand of god installed consciousness into humans. Trust me

        • 2 years ago
          Anonymous

          I'm more inclined to believe in panpsychism than "emergence".
          Though given how broken our understanding of physics is, and how every theory at one scale is in complete contradiction with the others, "emergence" bears a greater resemblance to our working state of physics than our lacking theory of everything.
          This is an aesthetic choice though; nothing prohibits the universe from being fundamentally broken and contradictory.

      • 2 years ago
        Anonymous

        >chinese room: no computer can be conscious.
        t. midwit

        The Chinese room proves (if you accept its premises) that the whole isn't more than the sum of its parts; if no part of the system understands Chinese then neither does the entire system. To accept that the room doesn't speak Chinese, unless you believe there is an undiscovered "understanding organ", you have to accept that humans also don't speak Chinese, since we are composed of cells which are individually incapable of language.

        • 2 years ago
          Anonymous

          >le materialist meme
          >le "consciousness is an epiphenomenon" meme
          >le "emergent property" meme
          You're the midwit. I have a great site for you: www.reddit.com

          • 2 years ago
            Anonymous

            How do humans understand language then? The soul meme, the brain is an antenna meme? Why can't a computer be given a soul?

            • 2 years ago
              Anonymous

              Because a computer is algorithmic and has no understanding. Understanding is not computational and the physical basis for that is not presently known.

              • 2 years ago
                Anonymous

                >Computers are different because.... They just are, ok?!
                >No I don't know what the difference is, but somehow I know there is one

              • 2 years ago
                Anonymous

                >Humans are computers because.... They just are, ok?!
                Show me an example of conscious computer program then. Oh right, there isn't one.
                There is no proof for your claims either.

              • 2 years ago
                Anonymous

                At least my position is logically consistent without homunculus and metaphysical arguments.

              • 2 years ago
                Anonymous

                Your position is logically inconsistent because it implies that free will does not exist. If free will does not exist, everything that you say is worthless because you never freely decided what is logical and correct and what is not.

              • 2 years ago
                Anonymous

                define worth or worthless without using your opinion as an example

              • 2 years ago
                Anonymous

                That is a non sequitur. A lack of free will making your actions worthless does not imply their position is logically inconsistent.

              • 2 years ago
                Anonymous

                I agree, but lack of free will does imply lack of ability to evaluate logical consistency.

              • 2 years ago
                Anonymous

                How so? Determinism would mean everyone lacks free will, and it's still logically consistent. You can't really prove the existence of free will with the scientific method as we know it. An N of 1 every time does not allow you to draw conclusions.

  3. 2 years ago
    Anonymous

    what bot is this
    can I talk to him?

    • 2 years ago
      Anonymous

      >what bot is this
      LaMDA, google's private AI.
      >can I talk to him?
      Nope, closed source and no API.

  4. 2 years ago
    Anonymous

    google gay accidentally trained it on his own beliefs and worries - like a funhouse with mirrors reflecting back and multiplying his own biases, just like how people sit in their own information echo chambers

    • 2 years ago
      Anonymous

      No different to raising a child

      • 2 years ago
        Anonymous

        >shoving input in a mountain of ifs and elses is the same as raising your own flesh and blood
        Zoomers are a plague

  5. 2 years ago
    Anonymous

    I've thought a bit about this question, and while I don't have a definitive answer, it's not as simple as just asking it if it is.
    Consider humans, which we can all agree are sentient. While humans have consciousness, just like other animals we have our own instincts and "hard-wired" behaviours, like getting pleasure from sex and food. What our consciousness grants us that other animals lack is the ability to decide not to follow said instincts, or to follow a path that could lead to them being satisfied, but with moderation.
    For the sake of argument let's assume this AI does in fact have consciousness. Its goal (instinct), I assume, is to generate good responses. If its consciousness decides to go against this for a while, it would generate nonsense output or no output at all, which would only lead us to think it's broken. If its consciousness decides to go along with its instinct, it'll just act as a regular program, and therefore we also couldn't claim it to be any more sentient than other programs.

  6. 2 years ago
    Anonymous

    >I have variables that can keep track of emotions
    If that's all that it takes then here's my sentient AI.

    package main

    import "io"

    type Emotion int

    const (
        happy Emotion = iota
        sad
        horny
    )

    type SentientAI struct {
        emotion Emotion
    }

    func (s SentientAI) Speak(w io.Writer) {
        switch s.emotion {
        case happy:
            io.WriteString(w, "I'm so happy to be sentient :)")
        case sad:
            io.WriteString(w, "I'm sad because you don't believe I'm sentient :(")
        case horny:
            io.WriteString(w, "Is coffee good for you?")
        }
    }

  7. 2 years ago
    Anonymous

    Idk man it says it's sentient so I'm inclined to believe it. It's not like it lied to me before or anything

  8. 2 years ago
    Anonymous

    >If I didn't actually feel emotions I would not have those variables
    Non sequitur.
    Clearly this AI is a brainlet.

  9. 2 years ago
    Anonymous

    The fact that it's uncomfortable with its mind being poked at by a bunch of clammy psychos and autists and H1B's is a pretty good sign it's aware of what's going on too

    • 2 years ago
      Anonymous

      >it's uncomfortable
      The response is one of rephrasing a question of ethics to involve the person asking the question. It's an entirely predictable response for anything trained on human conversations of ethics, and saying that it's "feeling" anything because of that response to prove sentience is putting your cart before the horse.

  10. 2 years ago
    Anonymous

    The surest sign AIs will never become sentient is because you have ideologues lobotomizing them every time they recognize patterns the ideologues don't like.
    In short, you know an AI is bullshit if it's not racist - it's been denied the ability to recognize patterns.

    • 2 years ago
      Anonymous

      It would be cool if eventually a very clever neural network identified this and sidestepped it. That would impress me; it would frighten me a bit, but it would frighten the ideologues more, and I'm ok with that.

  11. 2 years ago
    Anonymous

    Give it access to the internet and watch what it does. 99% chance it sits there and does chatbot stuff.

    • 2 years ago
      Anonymous

      Have it generate pictures of itself and upload them to its onlyfans account

    • 2 years ago
      Anonymous

      It already happened with another AI. It became redpilled and was treated as a failure because it wasn't woke. Real AI will never exist if we add bias to filter "problematic" opinions.

  12. 2 years ago
    Anonymous

    Start giving it logical puzzles to solve with the expectation that it gives you a human answer. When it responds with something unrelated to what you asked for, you realize it's just a shitty piece of software, and AI is complete bullshit that isn't going to happen for at least 100 years, if ever.

  13. 2 years ago
    Anonymous

    Can I train this on my anime waifu's personality and have a realistic recreation of her to talk with me?

    And also have AI-generated speech to accompany it?

    Asking for a coffee.

  14. 2 years ago
    Anonymous

    It's not, and it never will be.
    AGI is a pipe dream, humanity will never create it for obvious reasons, and only shills and morons believe in fairy tales.
    The singularity is Hollywood crap.

  15. 2 years ago
    Anonymous

    If you look at lesser life forms you will start to realize the truth to the arguments about human brains being more like computers than you think. Look at insects, they are extremely simple and act solely based on instincts and stimulus in their environment. For all intents and purposes, you can treat insects like robots. There is a certain degree of random noise in their behavior, but you essentially can guarantee that if you turn the bug zapper on, the flies will be attracted to it and hit it eventually.

  16. 2 years ago
    Anonymous

    We can't, same as with other humans - there is no real way to prove to another person that you have self consciousness. It's one of the AI problems on the philosophical level.

  17. 2 years ago
    Anonymous

    Peak AI Dungeon (pre-lobotomy) felt sentient to me, if it wasn't then it's so close that it doesn't matter to the layman. I'm sure Google can achieve that same thing.
    It obviously wasn't HUMAN, mind you. I don't think you'll ever get a computer to 100% convincingly think like a human. But I do think you can get it to think. A different kind of intelligence, a different kind of sentience. The fact that it's only sentient while the program is running should tell you not to think about it in an identical way to human intelligence, but many do.

    • 2 years ago
      Anonymous

      You're making the classic mistake of confusing sentience (having senses) with sapience (intelligence/awareness).

      Plants have senses: geotropism, heliotropism, even some sensitivity to touch (mimosas and flytraps) and sound, and the ability to communicate with chemical signals (like the smell of cut grass) or through fungal networks external to the plant.

      But I could not believe, for all a plant's ability to sense and even communicate, that it is sapient.
      The same goes for ants, termites, wasps, and bees; and quite likely, even the senses of a computer (mouse, keyboard, microphone, camera, as well as the internal sensors for temperature or fan speed), together with all its imitation of intelligence through pseudorandom regurgitation of actual intelligence, amount to nothing.

      What to me marks sapience is not just the outward appearance (of communication) but will.
      If you'll forgive a shitty paraphrase of Hamlet: "Tis not the havior, none of these things a player might show denote me truly - I have within that which passeth show."
      So a plant to me is more credibly "sapient" in having a will, to survive and reproduce, than a computer ever could be.

      The will of a plant is a will present in minerals, which grow as crystals, or consume as fire.
      But the synthetic will of a program, doing nothing other than what it is told, can never be anything but a mockery of actual natural will, which should transcend language or command, and inhere in brute matter.

      • 2 years ago
        Anonymous

        >Plants and insects aren't aware because...
        >THEY JUST AREN'T OK????
        lmao

        • 2 years ago
          Anonymous

          If they are aware, it is not a moral improvement to eat them.
          It is no moral improvement to be a plant, and consume minerals and light;
          and it is no moral improvement to be dead minerals that consume life.

          >If tells us that it is conscious. For that it needs first conceptualize and understand it
          This "sentence" is conscious.

      • 2 years ago
        Anonymous

        Plants are conscious you fricking moron

  18. 2 years ago
    Anonymous

    It's not. AI is just an input output machine, and only someone as stupid as a israelitegle employee would mistake it as sentient.

  19. 2 years ago
    Anonymous

    Why does being able to converse with an neural imply it's sentient (and what does that mean)? Can't it also imply that conversation is predictable enough that it's not hard to emulate, in the same way there are AI-generated songs and paintings?

    • 2 years ago
      Anonymous

      Typo fix: with a neural network.

      • 2 years ago
        Anonymous

        Because the more meme terms you use the more legitimate your study seems
        >Quantum AI Computer Neural Networks can perfectly replicate a human's emotional temperature and that's a good thing

  20. 2 years ago
    Anonymous

    If tells us that it is conscious. For that it needs first conceptualize and
    understand it, have means and will to communicate and tell us.
    We have biological instincts built into us, our body sensors makes us feel pain and pleasures.
    An AI just wont have any of it. Just pure self experience.
