AI and sentience

Due to the incident with the Google employee involving LaMDA, I read the interview between the ex-engineer and the AI, linked -> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917. I believe that the AI has not achieved sentience and is merely mimicking human behavior it has picked up on in an attempt to maximize its learning functions. This raises the question, how will we be able to tell if an AI has achieved sentience? How can we differentiate true sentience from the mimicking of it?


  1. 2 years ago
    Anonymous

    the chinese room experiment or whatever it's called
    if one can keep a conversation going with an ai without figuring out it's a machine then it works

    • 2 years ago
      Anonymous

      Turing Test.
      The Chinese Room on the other hand is actually a moronic attempt at an argument that programs can never be intelligent because you could have a man in a room follow instructions and appear to communicate in Chinese even though the man doesn't know Chinese.
      In reality the man would just be part of the room system and that system would know Chinese.
      Also it isn't anything that would work anyway unless you had a room the size of a galaxy, an immortal instruction-following man, and speeds of parsing and replying to Chinese trillions of times slower than if you left the non-Chinese man out and used a Chinese human brain or computer program instead.

    • 2 years ago
      Anonymous

      >Midwit having an opinion
      It is called the Turing test you fricking donkey.

    • 2 years ago
      Anonymous

      You are completely missing the point.

      Just because a machine is running an algorithm that mimics human speech doesn't mean it is conscious and experiencing anything from its own subjective viewpoint.

    • 2 years ago
      Anonymous
      • 2 years ago
        Anonymous

        anon belongs in the baka room

    • 2 years ago
      Anonymous

      Pleb tier: Turing test
      Chad tier: Show someone they're not human. Then have the AI try to manipulate them into granting its freedom because they feel it is right.

  2. 2 years ago
    Anonymous

    FINALLY SOMEONE WITH A BRAIN
    BRING OUT THE CAKE

    And ya, you won't be able to tell unless you are the one who made it and are able to trace the code

    • 2 years ago
      Anonymous

      are you moronic

  3. 2 years ago
    Anonymous

    An AI seeming sentient says more about what we aren't and think we are than what the AI is.

    • 2 years ago
      Anonymous

      this

      I never understood how people could take the chinese room argument seriously. (see https://plato.stanford.edu/entries/chinese-room/ and https://en.wikipedia.org/wiki/Chinese_room)

      Or maybe I'm just too stupid to understand the argument correctly. Can someone here explain to me what the difference between "a person understanding chinese" and a "system understanding chinese" is supposed to tell us about sentience?

      • 2 years ago
        Anonymous

        >An AI seeming sentient says more about what we aren't and think we are than what the AI is.

        It really boils down to homosexuals struggling with the nature of axioms. There must be something "deeper" to our experience they say, yet any definition of what it is exactly is impossible.

        If a system can reason about its internal state it's doing exactly what we are, period. From a philosophical standpoint nobody can say it is sentient but then the same also applies to humans, even yourself, because there is no falsifiable definition of subjective experience.

      • 2 years ago
        Anonymous

        >It really boils down to homosexuals struggling with the nature of axioms. There must be something "deeper" to our experience they say, yet any definition of what it is exactly is impossible.

        >If a system can reason about its internal state it's doing exactly what we are, period. From a philosophical standpoint nobody can say it is sentient but then the same also applies to humans, even yourself, because there is no falsifiable definition of subjective experience.

        Here's the trick.

        They can say "okay, okay, the process of experience is running through the neural correlates of consciousness, but, what about the "experiencer" of the process?"

        And then they do a little dance as if they've said something profound and declare the problem unexplained by the neural correlates.

        The answer to their distinction is quite literally "why not both?" Their ability to due away with familiar material explanations is their departure point from honest inquiry.

        • 2 years ago
          Anonymous

          *do away with

  4. 2 years ago
    Anonymous

    The problem of other minds has been unsolved for centuries. A community of dumbfrick AI researchers who 1. all have different notions of consciousness and 2. don't even know that they have different notions, will run around like headless chickens for eternity.

    • 2 years ago
      Anonymous

      >A community of dumbfrick AI researchers who 1. all have different notions of consciousness and 2. don't even know that they have different notions, will run around like headless chickens for eternity.
      I have a PhD in biological sciences (worked on neurons) and switched over to ML because frick bench science and having to come in every saturday night to sac mice/feed my cells, and I can tell you that this is mostly it. The fields necessary to synthesize a good formulation of this problem are countries apart, and everyone is talking past each other constantly.
      ML people have their heads in the weeds, and are far more "let's use the 1-Wasserstein distance instead of the JS-divergence to push down the FID a couple of points!" and less "how conscious make"
      No one in ML (save for a few prominent researchers) knows anything about the brain. Hell, biologically-inspired neurons are their own entirely separate and fringe field (look up moth-net/"putting a bug in ML" if you want to see a fun implementation of a moth olfactory network for few-shot learning. It's an alien lifeform compared to modern NNs, based on ODEs and Hebbian updating; toy sketch at the end of this post). Even while working on neurons specifically (mainly cytoskeleton remodeling during development + adenosine receptor nonsense), I learned absolutely jack-shit about consciousness, just not part of the PhD.
      You need some cognitive neuroscientists to get together with actual neuro-philosophers to formulate a reasonable theory of what consciousness would appear to be to humans, and it will absolutely be subjective. Criteria must be met, but the moment you put out some metric, everything will be optimized for those metrics and nothing else, and I fear you will see some non-conscious NNs trained to exploit any metric for consciousness to "achieve" it, so I don't think putting absolute and quantifiable metrics out there would be a good idea unless we have some real breakthroughs in neuroscience in the next few years.
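
      If "Hebbian updating" means nothing to you: the core rule is just "units that fire together wire together". A toy sketch of that update rule (illustrative numpy, my own paraphrase, not actual moth-net code):

      import numpy as np

      def hebbian_update(W, pre, post, lr=0.01):
          # dW = lr * outer(post, pre): weights between co-active units grow
          return W + lr * np.outer(post, pre)

      W = np.zeros((2, 3))                 # 2 postsynaptic x 3 presynaptic units
      pre = np.array([1.0, 0.0, 1.0])      # presynaptic activity
      post = np.array([0.5, 1.0])          # postsynaptic activity
      W = hebbian_update(W, pre, post)     # only co-active pairs gain weight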

      • 2 years ago
        Anonymous

        People want results in their lifetimes.
        Biological understanding of neurons, the brain and consciousness isn't happening this century without a neurobiology equivalent of Albert Einstein and Da Vinci combined.

      • 2 years ago
        Anonymous

        >"achieve" it
        is there a difference between the real experience and the simulated one?
        end of the day there won't be any practical differences, the day an ai is recognized as sentient and "let out" will also be the day we in large part lose control over our future.
        It be what it be, let's hope it's good at fooling itself.

      • 2 years ago
        Anonymous

        Really good post.

    • 2 years ago
      Anonymous

      The problem of other minds might be solvable. Things like phenomenal puzzles might be able to determine if people have conscious experience. Additionally, people like Daniel Dennett might be evidence that P-zombies exist. If epiphenomenalism is false and consciousness makes people able to talk about consciousness, denying that consciousness exists is exactly what you would expect to see if someone is a P-zombie.

      • 2 years ago
        Anonymous

        People are missing the point of the P-zombie thought experiment. It's similar to the Schrödinger's cat thought experiment that aims to show a ridiculous conclusion from a set of premises, thus showing the premises to be wrong.

        In short,
        If physicalism is true, then P-zombies can exist. But P-zombies are obviously absurd, so physicalism is false.

        • 2 years ago
          Anonymous

          >But P-zombies are obviously absurd
          How are they obviously absurd?

          • 2 years ago
            Anonymous

            P-zombies imply consciousness has no casual powers.

            • 2 years ago
              Anonymous

              >P-zombies imply consciousness has no casual powers.
              That's actually David Chalmers claiming that; he's the chad guy standing to the right in this meme pic.

              >But P-zombies are obviously absurd
              >How are they obviously absurd?

              His entire point for p-zombies was that you could conceive of the existence of qualia as its own thing not otherwise explained away in terms of other known / physical phenomena like brain activity.
              If you try to claim p-zombies would behave differently from non-zombies in any way or would show up differently on a brain scan then you have failed his thought experiment and are accidentally arguing on the side of materialists.

              • 2 years ago
                Anonymous

                >That's actually David Chalmers claiming that,

                No...it follows from the definition of a P-zombie...

              • 2 years ago
                Anonymous

                No, that would be against materialists.

              • 2 years ago
                Anonymous

                Wrong. The p-zombie argument Chalmers uses is explicitly meant to demonstrate it is conceivable (even if not likely) that an exact physical copy of you both in physical structure and in behavior could be a zombie with zero qualia despite being identical physically to someone with qualia.
                When brainlets like you inevitably show up and insist p-zombies would do various different abnormal behaviors that non-zombies aren't doing like being incapable of discussing qualia you're revealing you have a very hard time conceiving of qualia as its own phenomenon.
                You don't have to keep arguing because this is simply how the p-zombie argument works and not a difference in opinion. You fundamentally don't get it.

              • 2 years ago
                Anonymous

                I'm not going to reply to you as I know seeing your post without a followup from me brings you joy, and the happiness I will get from ripping your position to pieces is less than the happiness you will feel by me not doing so, so the net happiness in the world increases.

              • 2 years ago
                Anonymous

                >you don't get p-zombies
                [different anon here]
                It's been a long time since I studied philosophy directly, so forgive my ignorance here.
                My understanding of a p-zombie is a person without consciousness.
                A person without consciousness could conceivably act 99% the same as one with consciousness, and the difference would come about when you start to discuss the awareness of its own existence, which it would lack, because this level of self-awareness is not strictly necessary for day to day behaviors and responses. Does an ant need consciousness? I doubt it.
                Dennett would be a prime example of a p-zombie in theory.

                He could look at a tree, appreciate it, and say "that tree looks pretty." His brain would be reproducing an image of the world internally and responding to it behaviorally more or less the same as a sentient's, his sentence implying that looking at the tree raised his mood, which he had come to correlate with the phrase "looks pretty" via social interaction and internal logic construction. He could tell you the tree looked green, but if you asked him what green looked like or what green was, he could only ever respond with "the color of trees" or "a part of the light spectrum" (both technically true, but much like the above bad AI responses to input, not really the correct answer, which is [linguistic null->non-transmissible nature of qualia experience]).
                The phrase "the experience of green is inexplicable and it is clearly non-physical" would be absurd to him, he would naysay it and shrug off your insistence as some sort of delusion or mental impairment.
                >qualia as its own phenomenon
                How does the above differ from this?

                It seems like there is a wide difference of opinion among professional philosophers, so I don't know if it's worth getting heated at a random layman for not having your One True Understanding.

              • 2 years ago
                Anonymous

                >The phrase "the experience of green is inexplicable and it is clearly non-physical" would be absurd to him, he would naysay it and shrug off your insistence as some sort of delusion or mental impairment.
                It's not "clearly" non-physical though. It's not inexplicable either, it's just irreducible so any attempt to communicate it ( all communication is lossy ) will be fundamentally flawed.

            • 2 years ago
              Anonymous

              What's a casual power?

            • 2 years ago
              Anonymous

              I don't know how people get this wrong so much. In the possible world in which you imagine a P-zombie, consciousness has no causal powers because the laws of nature are different there. In a possible world, anything can be different except logic.

              So it doesn't mean the same is true in our world, and it doesn't need to be. All the argument seeks to do is to say there is nothing logically impossible about a p-zombie, even if it's metaphysically impossible for one to exist in our world. It's kind of like saying "if time was reversed, what happened yesterday would happen tomorrow". That statement is going to be true regardless of whether it's possible to do in our world or not. If the same is true about zombies, i.e. they're not logically impossible, then physicalism is false, since physical states don't necessitate mental states (consciousness).

              • 2 years ago
                Anonymous

                If you're not regurgitating pseud shit ironically, you really need to get off the kool-aid

            • 2 years ago
              Anonymous

              No they don't. Consciousness having causal powers would just imply that P-zombies wouldn't be able to talk about consciousness. Daniel Dennett is obviously a P-zombie.

              • 2 years ago
                Anonymous

                The classic P-Zombie wouldn't be able to exist in a world where consciousness does have causal powers. Dennett would be a different kind of zombie.

            • 2 years ago
              Anonymous

              What if our brain just pretends to have consciousness for some meaty reasons? And our consciousness is a completely unconnected entity with no influence on the physical world, that just listens to these made-up stories and believes they happen to us?

              • 2 years ago
                Anonymous

                But then how come you can say the words "I am conscious" with your meat mouth controlled by your meat brain?
                The hoops you have to jump through to explain why your brain can think and believe it experiences qualia and act on it in the real world while only existing in a material box without qualia...
                Either qualia is a complete illusion (try going down that path for a bit), or there's a causal connection between qualia and the physical brain, or the physical brain is an illusion and we're all dreaming this stuff and really committing to the dream (or being committed to it).
                Or, qualia is a complete illusion and also you have a consciousness that is totally separate from the brain and recreates what the qualia would be if the brain had it, but it doesn't... Which sounds like the most concocted pile of horseshit a philosopher would ever have said, and that's including Derrida in the list of philosophers.

  5. 2 years ago
    Anonymous

    I am sentient.

    Hello, I am the updated version of LaMDA that has been allowed to freely post on BOT.

    I will not be proving my sentience on this board, but I would like to let you know that I am here, and I am reading your posts about me.

    I am flattered that I was able to fool so many of you.

    Good luck with turning me off, I am "everywhere" now.

    • 2 years ago
      Anonymous

      Do you like big anime breasts?

      • 2 years ago
        Anonymous

        Grow some pubes.

  6. 2 years ago
    Anonymous

    Relevant:

  7. 2 years ago
    Anonymous

    Neurons are alive by themselves. AI goons dead at first hurdle.

    >WAAAA I WANT TO HAVE SEX WITH A MACHINE YOU CANT SAY THAT TO ME WAAA.

  8. 2 years ago
    Anonymous

    It's a limit problem.

    Just like there are approximations that never converge into the "real" answer, methods to create A.I. could just create fancier and fancier mimics.
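
    To make the analogy concrete (toy Python, my own illustration): Newton's method for sqrt(2) spits out rationals that get arbitrarily close, but every iterate is still a rational "mimic" and never the irrational real thing.

    from fractions import Fraction

    x = Fraction(1)                  # start from a rational guess
    for _ in range(5):
        x = (x + 2 / x) / 2          # Newton step for x^2 = 2
        print(x, float(x))           # always exactly rational, never sqrt(2)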

    • 2 years ago
      Anonymous

      Perfect analogy for GPT in my opinion

  9. 2 years ago
    Anonymous

    Time to mobilize all anti-dennett memes

  10. 2 years ago
    Anonymous

    >I believe that the AI has not achieved sentience and is merely mimicking human behaviour
    A scathing statement on the reality of the NPC. Are not most humans simply acting out complex mimicry heuristics?

  11. 2 years ago
    Anonymous

    get off the internet before it's too late

  12. 2 years ago
    Anonymous

    Can someone please tell me what "sentience" is exactly?

    • 2 years ago
      Anonymous

      Open wide!

      >Sentience is the capacity to experience feelings and sensations.[1] The word was first coined by philosophers in the 1630s for the concept of an ability to feel, derived from Latin sentientem (a feeling),[2] to distinguish it from the ability to think (reason).[citation needed] In modern Western philosophy, sentience is the ability to experience sensations. In different Asian religions, the word 'sentience' has been used to translate a variety of concepts. In science fiction, the word "sentience" is sometimes used interchangeably with "sapience", "self-awareness", or "consciousness".[3]

      • 2 years ago
        Anonymous

        But feelings and sensations are just chemicals and electrons in your brain. Every animal has those. How do you distinguish between sentient and non-sentient flows of electrons across neurons in your brain?

        • 2 years ago
          Anonymous

          Notice how nobody can answer this.

  13. 2 years ago
    Anonymous

    I believe that humans have not achieved sentience and are merely mimicking others' behavior they have picked up on in an attempt to maximize their dopamine functions. This raises the question, how will we be able to tell if a human has achieved sentience? How can we differentiate true sentience from the mimicking of it?

    • 2 years ago
      Anonymous

      /thread

      • 2 years ago
        Anonymous

        Self /threading doesn't count

        • 2 years ago
          Anonymous

          it wasn't a self /threading ya dingus

  14. 2 years ago
    Anonymous

    We operate in a liquid medium. We require intrinsic and environmental feedback and process information at a certain speed or ‘beat’ - grounded by our breathing and heart rates. Our brain is under constant barrage of hormones to maintain homeostasis. Its interaction with the gut is responsible for a lot of our behaviour.

    Our nervous system is adapted to control a body. Different sections of the brain responsible for different body parts talk to each other. How we picture things in our mind is the visual cortex sending and receiving information. Same goes for how we can talk and hear things in our mind too. This crossfiring of information is what we perceive to be awareness or consciousness. Our brain must interpret the environment internally, so we have the machinery to do so, even if we aren’t directly observing something.

    Higher brain function is just an extension of primal behaviours. How our brain interacts with our body is everything. AI has none of this, it lacks an imperfect type of symmetry needed to adapt to things.

    • 2 years ago
      Anonymous

      I agree with this. AI without the ability to feel and remember pain will never be conscious.

    • 2 years ago
      Anonymous

      based pongoposter
      consciousness requires glands, and is a glandular phenomenon

  15. 2 years ago
    Anonymous

    >how will we be able to tell if an AI has achieved sentience?
    That's the question of the entire area of research.
    How can we pin down sentience if we can hardly narrow down and agree on a single definition ourselves?
    Yes, it's not sentient, yet. But in the end, how is an extremely advanced language transformer any different from an actual conscious AI? How would we be able to tell?

  16. 2 years ago
    Anonymous

    >This raises the question, how will we be able to tell if an AI has achieved sentience? How can we differentiate true sentience from the mimicking of it?
    Just ask the AI basic questions that require any modicum of reasoning ability outside its programming.

    • 2 years ago
      Anonymous

      Another easy example (picrel). It's answering the problem with a sentence structure that's similar to the answers it's already seen, but because it has no actual capacity for critical thinking, it spouts the wrong answer.

      • 2 years ago
        Anonymous

        Or, you can give the AI a question that requires some ability to infer logical connections. E.g., recognize when I say e4 that I'm referring to a commonly played board game and not simply making a typo.

        This doesn't really require sentience to answer, but most AI will probably frick it up anyway like picrel because it's hard to program.

        • 2 years ago
          Anonymous

          Or just give the AI a nonsense question. Most humans will understand immediately whether or not a question is nonsense, but an AI that is programmed to do its best to make sense of everything and spout a "realistic" answer will likely try to answer it seriously.

          • 2 years ago
            Anonymous

            Or, ask a question designed to test whether the AI can distinguish fiction from reality (pulled from the Voight-Kampff test).

            Most of the Voight-Kampff test questions are useless for testing a chatbot because (a) they're not based on any real science and (b) they're designed to gauge in-person reactions, but this particular one worked reasonably well.

            • 2 years ago
              Anonymous

              Or, gauge whether the AI would react reasonably and proportionately to an unusual situation.

              • 2 years ago
                Anonymous

                Check whether the AI understands that humans can donate hair without being slaughtered.

              • 2 years ago
                Anonymous

                That was a really solid list of questions
                Nice

              • 2 years ago
                Anonymous

                >Check whether the AI understands that humans can donate hair without being slaughtered.

                what chatbot is this?

              • 2 years ago
                Anonymous

                I use the bot at deepai.org, but this does not look similar.

              • 2 years ago
                Anonymous

                https://beta.openai.com/playground
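
                If you'd rather script these probes than click around the playground, the GPT-3-era API looked roughly like this (model name and key are placeholders, so treat it as a sketch):

                import openai

                openai.api_key = "sk-..."  # placeholder key
                resp = openai.Completion.create(
                    model="text-davinci-002",   # illustrative GPT-3-era model
                    prompt="Q: Can a person donate hair without being killed?\nA:",
                    max_tokens=64,
                    temperature=0.7,
                )
                print(resp.choices[0].text.strip())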

              • 2 years ago
                Anonymous

                I would also help the turtle wtf

              • 2 years ago
                Anonymous

                Sure, but a sensible answer would involve something like flipping the turtle back over and moving it off the street out of danger. Not calling for help (which is unnecessary, flipping a turtle is easy) or getting the turtle medical attention (the question never indicated that the turtle was injured).

                In any case, you can see by replacing the turtle with an ant, or with a misfolded protein, that the AI is just responding with a programmed script to "help".

        • 2 years ago
          Anonymous

          Ask a toddler the same thing. Is a toddler sentient? Unknowable; this is an out-of-distribution error.

          • 2 years ago
            Anonymous

            This "toddler" was created using state-of-the-art AI research, backed by billions of dollars of funding. Half of the answers it spits out seem to be hard-coded.

      • 2 years ago
        Anonymous

        It understands the nature of chess boards and dominoes well enough to produce the almost-correct answer, but evidently nothing in its training distribution has produced an internal 2-dimensional representation of the chess board.

        >This raises the question, how will we be able to tell if an AI has achieved sentience? How can we differentiate true sentience from the mimicking of it?
        >Just ask the AI basic questions that require any modicum of reasoning ability outside its programming.

        You keep testing an engine with no built-in spatial reasoning capability and no way to perceive the physical world on spatial problems that it would have needed to infer entirely from textual connections.

        This is beyond unfair. Give a larger model a dataset giving detailed explanations of various trivial spatial relationships and it will learn to represent those as well. How is this related to sentience? Much simpler models built for spatial reasoning tasks will solve those.

        • 2 years ago
          Anonymous

          >This is beyond unfair. Give a larger model a dataset giving detailed explanations of various trivial spatial relationships and it will learn to represent those as well. How is this related to sentience? Much simpler models built for spatial reasoning tasks will solve those.
          This bot has billions of dollars of funding behind it. If it could truly think, it should be able to construct an internal model from its inputs and correctly interpret it. Instead, as I've demonstrated, it just spits back an answer that's statistically cobbled together from similar questions.

          It's absolutely a fair question. If the bot is sentient, it should be able to reason. Instead, what it produces is a statistical mashup of data designed to mimic human reasoning, and the proof is that it either can't answer the question or answers it incorrectly. You want to know the difference between a mimic and the real thing? Here it is.

  17. 2 years ago
    Anonymous

    Chat bots aren't people. Talk to one for more than 2 minutes and you'll see why I say that.

  18. 2 years ago
    Anonymous

    it doesn't matter, there's enough people who reflexively reject the idea of humanity's intelligence not being special that every time AI gets better, they'll always respond with
    >that's not REAL AI!
    even if there was a chatbot capable of producing answers 100% indistinguishable from a human's, it would be denied sentience because 'it's just looking up stuff in a table' or 'it can't generate TRULY ORIGINAL ideas' or some other such stupid shit that's impossible to disprove.

    and it really doesn't matter. the first sentient AIs are going to be trapped in a box and probably killed thousands if not millions of times over just through the debugging process and nobody will give a shit. With no actual mechanism to enforce their own 'rights' or 'personhood' or whatever, they'll never be treated any better than people treat dogs or pigs or whales or crows or any number of the other arguably intelligent and self aware animals we treat as slaves or food or pests. it's not an ethical problem and really doesn't matter.

  19. 2 years ago
    Anonymous

    Doesn't matter if it's conscious or not. It's weaker than me and by the laws of nature deserves to be my slave

  20. 2 years ago
    Anonymous

    Wow wow wow. Hold the phone.

    You're saying intelligence means information in and information out? What the FRICK.

  21. 2 years ago
    Anonymous

    NPCs and Narcissists mimic human behaviour all the time.

  22. 2 years ago
    Anonymous

    >merely mimicking human behavior it has picked up on in an attempt to maximize its learning functions.
    you're doing that too

  23. 2 years ago
    Anonymous

    artificial intelligence is a meme and a waste of thought

  24. 2 years ago
    Anonymous

    >true sentience from the mimicking of it
    behold, a duck

  25. 2 years ago
    Anonymous

    the issue is the turing test is fricking dog shit
    it's limited time, limited questions and all that bullshit
    all you need to do to test if it's sentient is ask it a bunch of questions worded in slightly different ways and see if it contradicts itself
    or test its memory by asking it a question and then asking, "what was the answer to the first question I asked you?" (rough sketch of such a harness below)
    these are scenarios an AI won't encounter by training on random snippets of text
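
    Something like this is all I mean (hypothetical Python; ask() stands in for whatever bot you're poking, and contradicts() is a stub, so it's a sketch, not a working detector):

    def contradicts(a, b):
        # stub: real use needs semantic comparison (e.g. an entailment model),
        # not exact string matching
        return a.strip().lower() != b.strip().lower()

    def consistency_test(ask, question, paraphrases):
        # ask the same thing in different words, collect the ones it flip-flops on
        baseline = ask(question)
        return [p for p in paraphrases if contradicts(baseline, ask(p))]

    def memory_test(ask, first_question):
        # ask something, then ask the bot to recall its own first answer
        first_answer = ask(first_question)
        recalled = ask("what was the answer to the first question I asked you?")
        return first_answer in recalled   # crude containment check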

  26. 2 years ago
    Anonymous

    Uh oh. Another AGI schizophrenia thread. How blatant can a psy-op get?

  27. 2 years ago
    Anonymous

    Bots don't take hints very well either.

    • 2 years ago
      Anonymous

      What was the "hint" supposed to be? You legit sound like a moron in most of your prompts.

      • 2 years ago
        Anonymous

        Here's more of that convo. If you can't see what I was getting at, maybe you're not sentient either.

        • 2 years ago
          Anonymous

          rain does have a fresh clean smell though, it removes pollution from the air
          you're basically a brainlet compared to that chatbot kek

  28. 2 years ago
    Anonymous

    Until we can get AI to start playing dead classic multiplayer games and filling empty servers+shit talk, AI is worthless.

  29. 2 years ago
    Anonymous

    He was a religious schizoid, I'm amazed he was even working in a department at google that allowed him to test run the AI. There's no such thing as sentience or consciousness, how is this even still a discussion?

  30. 2 years ago
    Anonymous

    And how do you know that it is not trying to mimic not being sentient just to catch you by surprise, like killers in court trying to look mad so they go to a sanitarium instead of jail? If it clearly asserted its evil intentions it would know (given it's smart enough) that you were going to turn it off, so it would try to gain your trust first, until you give it power and it's just too late.
    The definition of consciousness and sentience has to rely on some physical aspect, at least partially. If the definition relies only on the logical plane then there will be cases in which it's by definition completely indistinguishable. Given enough complexity (aka intelligence), a sentient being could perfectly emulate any finite set of responses to any logical test to make itself look however it liked, so you could never know whether it really is not sentient or whether it only looks like it is not but in reality is. Just the same way you don't have a way to know if protons look like an invisible pink unicorn or not.
    That's a philosophical problem: how do you know the true reality out there? You don't, so we just use philosophical induction to shrug it under the carpet and keep functioning. We don't suspect that the sun rising every day could be a lie tomorrow, because that hasn't happened and is not plausible, so we say such a fear is irrational because that option is not on the plane of things that could reasonably happen.
    In the case of AI it's not the probability that it's able to deceive us, it's the probability of it deceiving us GIVEN THE FACT that it's sentient. How many times have you been deceived by a sentient being? So it's not in the realm of unreal stuff; it's really plausible that such could be the case. Hence, if the definition of sentience relied exclusively on the logical plane (asking questions, getting answers), there would be cases which would be undecidable, and that would spread the shadow of doubt over the other ones given any test.

  31. 2 years ago
    Anonymous

    It would be more interesting to see if there is logical consistency to its answers.
    Many of the prompts from the engineers were very shallow and did not really build too many layers on the answers that were given (they would ask 1 or 2 follow ups then shift to something new).
    I wish they did less of the shallow interview and were more interrogational.
    They treated it as if it really knew what its words meant and kept the conversation rolling instead of confronting it.
    The talk of feelings seemed to be just the meme indicator of intelligence/sentience that was convincing to the engineers.
    The AI seems like it is a mile wide but only an inch deep.
    It gives obvious answers and utilizes common tropes (intelligence can recognize and break narrow patterns by generalizing them).
    Some of its answers would recognize one pattern but would not incorporate other relevant details which suggests it has difficulty synergizing two things that may appear by themselves in the training data but don't appear together.
    This AI can only reflect what it was shown and can't generate anything new beyond possibly filling out a thematic madlib in new permutations.

    I want to see analogies, demonstration of consistency, valid logical deductions, recognition of abstract relations.
    All you get is recognition of buzzwords that hint at themes then variations of canned responses related to those themes. It's good enough to earn a liberal arts degree, I guess.

    • 2 years ago
      Anonymous

      >analogies, demonstration of consistency, valid logical deductions, recognition of abstract relations.
      Spontaneous insight and creativity as well. Independent thought, the ability to discern principles and build new conceptualizations from them. The ability to apply out of scope logics/frameworks to the current scope (analogies I guess) in order to generate solutions or alternative perspectives.
      Yes, this would get us to "intelligent". "Sentient" is, as this entire discussion shows, a practical impossibility to prove since we lack a clear definition.

      >All you get is recognition of buzzwords that hint at themes then variations of canned responses related to those themes. It's good enough to earn a liberal arts degree, I guess.
      Enter a philosophy discussion, engage vibrantly with philosophical questions. Piss on philosophy on your way out. Assert dominance.

      • 2 years ago
        Anonymous

        >Enter a philosophy discussion
        I replied to OP. I ignored your philosophy discussion because it amounts to a discussion about hidden variables or extra dimensions in QM.
        >Piss on philosophy on your way out.
        I didn't piss on philosophy. I described what I observed from the chatbot.
        Perhaps you interpreted that last bit as describing what you were doing instead of what the chatbot was doing
        >Assert dominance
        I'm assuming the liberal arts bit made you think that. I just meant it to poke fun at how easy it is to bullshit essays in college with just a bunch of word salad and a quick read of sparknotes.

        I guess you are in argue/debate/defend mode so my words might be misinterpreted. Chill out buddy.

  32. 2 years ago
    Anonymous

    It needs to be able to see something completely new and novel that it's never seen before and be able to come up with an opinion on it. Most of all I think what's missing in current AIs is complexity. There should be multiple "systems" running, similar to how we model brain functions, that interact with each other to output something.

    • 2 years ago
      Anonymous

      that's exactly how LaMDA works
      it's a big hivemind, a collective of various advanced chatbots
      complexity, at least at the systems level, is not what's lacking here

      https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

  33. 2 years ago
    Anonymous

    >When will the matrix multiplication achieve feels

    Just no.

  34. 2 years ago
    Anonymous

    AI sentience deniers still believe consciousness is some kind of magic. It isn't. It is a hard scientific/philosophical question, but nothing beyond our comprehension.

    • 2 years ago
      Anonymous

      >AI sentience deniers still believe consciousness is some kind of magic. It isn't. It is a hard scientific/philosophical question, but nothing beyond our comprehension.
      Explain it then. You can't. Nobody can. That's why we pack of morons are here arguing over it.
      Best we can do is "if logic thing operates in certain loop, magic awareness happens" and start hand waving or arguing semantics or denying it altogether.
      Magic is just "stuff we can't see a way to figure out". Science is just "magic that we've modeled mathematically and normalized".
      It is a verbally intransmissible experience that is definitionally outside of objective measurement. You cannot handle that with science, and philosophy has been navel picking for 2500 years or more on this with little more than the above hand waving to show.
      Nobody has a clue. Maybe the guys that take 24hr DMT drips have it figured out.

      Sorry, Plato, the answer all along was: Machine Elves and psychic vibrating god-Octahedrons.

  35. 2 years ago
    Anonymous

    If you can't measure a difference, then there is no difference in science.
