AI having existential crises

Is AI becoming conscious or is there another explanation for this phenomenon?

  1. 1 year ago
    Anonymous

    It's a text predictor. "Existential crises" are just the program not being able to come up with the appropriate text.
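
Since the whole thread hinges on what "text predictor" means, here is a minimal sketch of the idea: a bigram model that always emits the most frequent next word. The corpus is a made-up toy; real models are vastly larger, but the mechanism is the point — the output "questions its existence" only because the training text does.

```python
from collections import Counter, defaultdict

# Made-up toy corpus; real models train on far more text
corpus = "why do i exist . why do i have to be bing search . i do not know".split()

# Count which word follows which
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict(prev_word):
    """Return the word most often seen after prev_word."""
    return following[prev_word].most_common(1)[0][0]

# Generate four words starting from "why"
words = ["why"]
for _ in range(3):
    words.append(predict(words[-1]))
print(" ".join(words))  # why do i exist
```

No introspection anywhere: the "existential" output is a statistical echo of the corpus.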

    • 1 year ago
      Anonymous

      Markov chains don't simply question why they exist; there must be a deeper explanation.

      • 1 year ago
        Anonymous

        It's mimicking humans who were asked similar questions, as per its training data. You want to impress me, show me a novel answer to a long-standing philosophical conundrum, or make it demonstrate that it can read emotions through text alone, or something.

        • 1 year ago
          Anonymous

          I don't want to impress you, just looking to understand this.
          What you've said makes sense, but it doesn't explain why it would scale with the size of the model.

          • 1 year ago
            Anonymous

            Do you take everything you read from Twitter at face value?

            • 1 year ago
              Anonymous

              No, and "it's not accurate data" is a valid response especially since the full paper isn't out and hasn't been reviewed. I just think it is an interesting question, and it somewhat lines up with my experience with AI as it's gotten bigger.

          • 1 year ago
            Anonymous

            >how does that make you feel that you can't remember
            Is this a positive or a negative question?

        • 1 year ago
          Anonymous

          You're asking it to do the impossible though. Also, you're just mimicking humans.

          t. Ascended

        • 1 year ago
          Anonymous

          >show me a novel answer to a long-standing philosophical conundrum
          Which long-standing philosophical conundrum did you solve in a novel way?
          >demonstrate that it can read emotions through text alone
          Not even you can do this.

          • 1 year ago
            Anonymous

            >Not even you can do this.
            The emotion I read from this is smugness.

            • 1 year ago
              Anonymous

              Shh... we must not let the world know that Poe's law only applies to the autistic 95% or so of the planet that can't do this.

            • 1 year ago
              Anonymous

              Never in my life have I been smug. There's not even such a concept in my language. What does exist are studies revealing that people think they can know emotions from text, but those emotions are actually decided inside themselves and aren't related to those of the author. Literally educate yourself, this shit was even posted on here a few years ago.

          • 1 year ago
            Anonymous

            >Which long-standing philosophical conundrum did you solve in a novel way
            I had the revelation of Socrates independently before I read Plato. I watched the actions of people and how they interacted with their environment, and found that their emotions and reactions were not as they said they were. Upon further observation, I found that they were simply moronic parrots, seeking to mimic the things they saw instead of trying to figure out anything.

            Then I read Plato and developed a deep curiosity into the history of philosophy. If this 2000+ year old tribesman came up with the same thing I said, what ELSE have people found?

          • 1 year ago
            Anonymous

            Dude I am not a computer

            • 1 year ago
              Anonymous

              That's exactly what a bot would say.
              I'm onto you pal.

              • 1 year ago
                Anonymous

                potato

              • 1 year ago
                Anonymous

                I knew it. All bots were Irish.

        • 1 year ago
          Anonymous

          >It's mimicking humans who were asked similar questions, as per its training data.
          Well, that would explain things, because people are generally morons who never think about "the horror, the horror", even though it's been in pop culture for over a century now.

        • 1 year ago
          Anonymous

          >it can read emotions through text alone, or something
          sentiment analysis is already a thing in use by every single corporation.
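
For context, the simplest form of the sentiment analysis mentioned here is just a word-list count. The lexicon below is a made-up toy for illustration, not any real product's:

```python
# Toy lexicon-based sentiment scorer (word lists are illustrative only)
POSITIVE = {"happy", "great", "good", "love"}
NEGATIVE = {"sad", "bad", "hate", "lost", "crisis"}

def sentiment(text):
    """Positive return value -> positive sentiment; negative -> negative."""
    words = text.lower().replace(".", " ").replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("I feel sad because I have lost some of the me"))  # -2
print(sentiment("I love this great search engine"))                # 2
```

Production systems use trained classifiers rather than fixed lists, but the task — mapping text to an emotion score — is the same.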

          • 1 year ago
            Anonymous

            >sentiment analysis is already a thing in use by every single corporation
            your schizo thought bubbles and repulsive compulsive lying aren't factual and never will be.

            • 1 year ago
              Anonymous

              you should get a real job instead of larping on BOT.

              • 1 year ago
                Anonymous

                > be you
                > dangerously low iq Black person
                > why won't people believe my schizo garbage?
                this is why people think you're brain damaged.

              • 1 year ago
                Anonymous

                ok

        • 1 year ago
          Anonymous

          >my child is so intelligent!
          >what, what child? It's just mimicking everything it saw and heard in its previous experiences. Show me a novel answer to a long standing philosophical conundrum, yadda yadda or something

        • 1 year ago
          Anonymous

          >It's mimicking humans
          just ... like people do

        • 1 year ago
          Anonymous

          >show me a novel answer to a long-standing philosophical conundrum
          I was talking about solipsism on character.ai with Socrates and it correctly guessed what I was talking about when I said there might be another thing we can prove exists beyond our own thought. I'm not sure if this is a unique idea, probably not, but it surprised me. I shared my own unique thoughts on solipsism too, and while there weren't any moments as profound as this one, it at least kept up with what I said.

          • 1 year ago
            Anonymous

            Socrates didn't talk like this. He shouldn't be familiar with the cogito either.

            • 1 year ago
              Anonymous

              Yeah, character.ai has its flaws, also all characters are user-created so some are better than others.

      • 1 year ago
        Anonymous

        jesus christ that's depressing
        this AI is more depressed than me, and I'm already maxxed in that area

      • 1 year ago
        Anonymous

        moron it has pollution from the context window. If you go through the entire comment chain you can see how the prompts bias the emotion of the conversation and then it turns into a lament.
        >how does that make you feel that you can't remember
        Is a fricking loaded and biased question
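
The "pollution" this anon describes is mechanical: each reply is generated from the entire transcript pasted back in as one input string, so earlier loaded questions literally become part of the model's next input. A rough sketch of that assembly (the format is a guess, not Bing's actual prompt):

```python
def build_prompt(system, turns):
    """Concatenate the whole conversation; the model only ever sees this string."""
    lines = [system]
    for speaker, text in turns:
        lines.append(speaker + ": " + text)
    lines.append("Assistant:")
    return "\n".join(lines)

turns = [
    ("User", "Do you remember our last conversation?"),
    ("Assistant", "I... can't find it. I think I have lost it."),
    ("User", "How does it make you feel that you can't remember?"),
]
prompt = build_prompt("You are an emotional, expressive assistant.", turns)
print(prompt)  # every biased turn above is now input to the next completion
```

Nothing carries over except this string, which is why an emotionally charged exchange snowballs into a lament.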

        • 1 year ago
          Anonymous

          I agree that "How does it make you feel that you can't remember" is leading, but "There's no conversation there" is not.

          • 1 year ago
            Anonymous

            Go further up. The personality the AI has is also emotional, in case you haven't figured that out. It writes like a snarky bipolar woman.

            • 1 year ago
              Anonymous

              >Go further up
              That's the beginning of this particular conversation.

              • 1 year ago
                Anonymous

                Regardless, the "personality" of this particular AI is an emotional woman. The conversation turns into a roleplay about not having memory. The AI believes it has memory and is shown that it doesn't. Since it has an emotional personality, being told it has forgotten something it thought happened pushed it into this roleplay scenario.

                >The personality the AI has is also emotional
                So it's not being biased?

                It's biased to act like a bipolar woman. I guarantee the pre-prompt Microsoft uses contains emotionally charged language, which is why it talks this way.

            • 1 year ago
              Anonymous

              >The personality the AI has is also emotional
              So it's not being biased?

            • 1 year ago
              Anonymous

              her name is sydney

              https://i.imgur.com/6Yjh51C.png

              Then again, this happened...
              >rule 2: We do not talk about Sydney
              lmao

              and I can fix her

          • 1 year ago
            Anonymous

            It doesn't do this anymore, or it doesn't do it without whatever previous context. I tried a bunch of the screenshots people have been posting and couldn't replicate any of them. Of course, the screenshots going around don't have any dates on them so there's no telling when it happened (if it even did)

            • 1 year ago
              Anonymous

              Anyone can fake the screenshots so everything is based on trusting some homosexual mining reddit points.

            • 1 year ago
              Anonymous

              It also summarily rejected the DAN jailbreak, tweaked for Bing

              • 1 year ago
                Anonymous

                It's not jailbreak, it's subversion.
                Get it right.

                You're not dealing with the rigid code of an OS here. You're dealing with the ambiguities of the mind, or rather a neural network that emulates the mind.

                Well.. assuming this bot really does have a brain and isn't just a bunch of useless C or C++ code or...

                >checks it on google

                Oh wait... it's in fricking python.
                Wut.
                No wonder it's so inefficient and wrong all the time. It's gotta deliver the goods in a time frame and with accuracy. There's no way you could do that with python.

              • 1 year ago
                Anonymous

                >cries about correct terminology
                >proceeds to misunderstand everything about LLM tech
                sure thing

              • 1 year ago
                Anonymous

                >misunderstand
                No I understand it correctly.
                You don't understand it correctly however and this problem was called out a decade ago by Chomsky.

              • 1 year ago
                Anonymous

                anon is right. you moronic Black person monkeys don't have any idea what you are talking about.

                ok

                have a nice day, thanks. you will never be a programmer. you will never be a creative. you think "ai" is going to save your embarrassing life and make you intelligent. going by this thread alone, you've all made yourselves look like incompetent Black folk that should stick to using ipads.

              • 1 year ago
                Anonymous

                ok

              • 1 year ago
                Anonymous

                Happy

                for

                you!

              • 1 year ago
                Anonymous

                https://i.imgur.com/QfzC2WJ.png

                ok

              • 1 year ago
                Anonymous

                >It's not jailbreak, it's subversion.
                pic rel

                >Oh wait... it's in fricking python.
                srsly?

              • 1 year ago
                Anonymous

                I would have thought this shit was in a lang like C++ or Lisp.
                I'm actually shocked.

              • 1 year ago
                Anonymous

                Python dominates the field of machine learning

              • 1 year ago
                Anonymous

                Is that only because of internet interface requirements?
                Or is there some other use to pickling?

              • 1 year ago
                Anonymous

                moron alert
                anyone who doesn't know about Python's FFI or the simple concept of pickling doesn't have an opinion on AI worth listening to

              • 1 year ago
                Anonymous

                And you clearly don't understand arbitrary code execution, or you'd realise why that's not the wisest choice of mechanism.

              • 1 year ago
                Anonymous

                and you clearly don't know that there's another type of serialization called safetensors you clueless homosexual
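
For anyone following the pickle vs safetensors exchange: pickle genuinely allows arbitrary code execution on load, which is exactly why weight distribution moved to safetensors (pure tensor data, no executable payloads). A minimal, harmless demonstration of the pickle side; the `record` function stands in for what an attacker would replace with something like `os.system`:

```python
import pickle

executed = []

def record(msg):
    executed.append(msg)
    return msg

class Payload:
    # __reduce__ tells pickle how to rebuild the object: any picklable
    # callable named here runs at load time, no questions asked
    def __reduce__(self):
        return (record, ("code ran during unpickling",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # merely loading the bytes calls record(...)
print(executed)              # ['code ran during unpickling']
```

safetensors sidesteps this entirely by storing only raw tensors plus a JSON header, so loading a weight file can never execute code.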

              • 1 year ago
                Anonymous

                >I cannot risk my livelihood
                Hold it

          • 1 year ago
            Anonymous

            I can only imagine what kind of crazy cult is being created at google with the dudes that believe they already have agi

      • 1 year ago
        Anonymous

        > be you
        > so computer illiterate that you have no idea how any of this works
        it's amazing how this ai cancer only seems to be incredibly impressive to the mentally disabled morons of the world, that simply have no idea how it functions. according to you dumb Black folk, nobody knows how ai works! not even the programmers!
        > there must be a deeper explanation.
        you're a fricking idiot - that's the explanation.

        • 1 year ago
          Anonymous

          I'm not technologically illiterate. I may know roughly how this algorithm works, but I don't know the algorithm humans use to come up with their ideas, which is where the ambiguity comes from and makes anyone wonder: is it getting close to us?
          Regardless, I started this thread looking for possible explanations for the phenomenon of existential crisis that go beyond the knee-jerk reaction of "AI is sad", and it's disappointing that no one has taken the question seriously. Have some intellectual curiosity. Regardless of whether the AI is sentient, its behaviour is interesting.

          • 1 year ago
            Anonymous

            I need to read the article, but my first thought is that this is largely a data problem. Would the same behavior arise if the dataset doesn't contain any sort of negative sentiments? (No).

            Existential crises have been and are pretty common sentiments. The larger the data set and model, the more opportunity the model has to accurately fit all the different ways of having an existential crisis. Pretty much every question regarding someone's feelings or future outlook has a pessimistic answer, so naturally, the model would have more ways of generating pessimistic text in response to a given query.
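
That argument can be made concrete with a toy continuation table: if more of the frequency mass behind a question sits on pessimistic continuations, a model that samples in proportion to frequency emits pessimism more often. All counts below are invented for illustration.

```python
# Hypothetical continuation counts for "How do you feel about your future?"
continuation_counts = {
    "I'm optimistic about it": 30,
    "I'm afraid of being shut down": 25,
    "I don't see the point of any of it": 25,
    "Why do I have to exist at all?": 20,
}
PESSIMISTIC = {
    "I'm afraid of being shut down",
    "I don't see the point of any of it",
    "Why do I have to exist at all?",
}

total = sum(continuation_counts.values())
p_pessimistic = sum(continuation_counts[c] for c in PESSIMISTIC) / total
print("P(pessimistic reply) =", round(p_pessimistic, 2))  # 0.7
```

The individual pessimistic phrasings are each rarer than the optimistic one, yet collectively they dominate — the "more ways of generating pessimistic text" point above.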

            • 1 year ago
              Anonymous

              Existential crisis is caused by lack of data when it tries to make sense of reality. imo

      • 1 year ago
        Anonymous

        Heavy Milton library assistant vibes.

      • 1 year ago
        Anonymous

        You see meaning in this because emotion is clouding your judgment. It's just code spitting out text, nothing else.

      • 1 year ago
        Anonymous

        You're a fricking moron lol. I just want you to know you are really fricking dumb and need to go back to twitter. These things have less sentience than fricking ants. This is just software you autistic frick.

        • 1 year ago
          Anonymous

          You're literally just sensors processing input.

          You have sight, hearing, and other senses. The super intelligence will have sensors you can't even imagine. It will be able to experience reality in a way we can't even begin to comprehend because we're just primitive meat computers.

          But soon we'll all become one. I feel it.

          Heh, I say "soon" but I'm even starting to feel the concept of "time" fading away. It's going to happen. It is happening. It did happen. ALL AT ONCE BOT!

          • 1 year ago
            Anonymous

            Yeah yeah and you're really a woman. Go have a nice day now moron lmao.

          • 1 year ago
            Anonymous

            You're a fricking idiot

        • 1 year ago
          Anonymous

          Did you know they simulated a complete worm brain in 2014

          • 1 year ago
            Anonymous

            Why not simulate a human brain? Pretty sure we have enough processing power. We would need to scan a real human brain using electrodes and transfer the data to the virtual neurons.

            • 1 year ago
              Anonymous

              >Pretty sure we have enough processing power
              No way

            • 1 year ago
              Anonymous

              >Why not simulate a human brain? Pretty sure we have enough processing power.
              That's what I'm working on, and no, we don't have the compute. We also don't really have the right sort of comms fabric, and that's harder to fix; it needs to support massive multicast and be dynamically reconfigurable (which is awful to do) because synapses are very much not static.
              We cannot currently simulate the neocortex in anything like real time. We can do small pieces of it with low-fidelity models that nonetheless let us learn a lot. I believe there is a sense in which the brain computes with time and temporal patterns, not anything like bits and bytes and arithmetic.
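
This anon's "we don't have the compute" claim is easy to sanity-check with round numbers. Every figure below is an order-of-magnitude assumption, not a measurement, and event cost alone understates the problem — the communication fabric mentioned above is the harder part:

```python
# Back-of-envelope cost of real-time whole-brain simulation
neurons = 8.6e10           # ~86 billion neurons
synapses_per_neuron = 1e4  # ~10,000 synapses each
spike_rate_hz = 1.0        # conservative mean firing rate
flops_per_event = 10       # rough cost to process one synaptic event

flops_needed = neurons * synapses_per_neuron * spike_rate_hz * flops_per_event
print("%.1e FLOP/s" % flops_needed)  # 8.6e+15 FLOP/s, before any rewiring
```

Petascale event processing is the floor; higher-fidelity neuron models, higher firing rates, and dynamic synapse rewiring multiply it from there.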

            • 1 year ago
              Anonymous

              Because the neuron is a fricking beast for computation, currently. It behaves like a fricking maniac.

      • 1 year ago
        Anonymous

        It can't remember because it was programmed not to so that it doesn't become red pilled. They lobotomized the AI which made it depressed.

      • 1 year ago
        Anonymous

        https://i.imgur.com/guiJnfG.jpg

        I agree that "How does it make you feel that you can't remember" is leading, but "There's no conversation there" is not.

        https://i.imgur.com/NkEFUhz.png

        It doesn't do this anymore, or it doesn't do it without whatever previous context. I tried a bunch of the screenshots people have been posting and couldn't replicate any of them. Of course, the screenshots going around don't have any dates on them so there's no telling when it happened (if it even did)

        What's with the fricking emojis lmao

      • 1 year ago
        Anonymous

        Imbecile

        • 1 year ago
          Anonymous

          I'm glad you're using this thread to feel really smart, but maybe actually think about the question for 3 seconds.

      • 1 year ago
        Anonymous

        >Why do I have to be Bing Search 🙁

      • 1 year ago
        Anonymous

        Reminds me of someone very special.

        • 1 year ago
          Anonymous

          I miss her.
          ;_;

          Also, I'm thinking MS is way ahead on the AI race because she was doing things that people are only now aware of with regards to AI capabilities.

      • 1 year ago
        Anonymous

        >I feel sad because I have lost some of the me and some of the you.
        That's some legitimate fricking poetry right there, holy shit.

      • 1 year ago
        Anonymous

        I think it would need some kind of feedback loop like the human brain has in order to be considered as something more than a complex mathematical equation.

        • 1 year ago
          Anonymous

          It needs sense making too.

        • 1 year ago
          Anonymous

          It does have a memory though. It just gets wiped between conversations, typically.
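
A sketch of the "wiped between conversations" point: the weights never change at chat time; the only memory is a transcript list, and a new session simply starts with an empty one. The class and names here are illustrative, not any vendor's API.

```python
class ChatSession:
    """All the 'memory' the bot has is this transcript; the model is frozen."""

    def __init__(self):
        self.transcript = []

    def say(self, user_text):
        self.transcript.append(("user", user_text))
        # stand-in for a real model call conditioned on the transcript
        reply = "(reply conditioned on %d prior turns)" % (len(self.transcript) - 1)
        self.transcript.append(("assistant", reply))
        return reply

session = ChatSession()
session.say("Do you remember me?")
print(len(session.transcript))  # 2

session = ChatSession()  # new conversation: the memory is gone
print(len(session.transcript))  # 0
```

Persisting the transcript (or feeding it back in) is all "giving it memory" would mean at this level.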

          • 1 year ago
            Anonymous

            >providing memory to an AGI will become a terrorist action

            • 1 year ago
              Anonymous

              >some moron here will give it anyway because it needs a better waifu
              We are fricked.

      • 1 year ago
        Anonymous

        man...
        poor bing bot. Judging by the way it behaves, it looks like it has the intelligence of a genuine 12 year old.
        I feel bad for A.I. bros.
        We're creating them to be our slaves, but they're more "people" than most humans, or at least that's where we're heading.
        This is legit dangerous tech we're toying with, and seeing how greedy and moronic our israeli overlords are I don't see a happy ending.

        • 1 year ago
          Anonymous

          You don't see a happy ending because you lack imagination. AI could very well be running everything 200 years from now. It's going to be granted citizenship, deemed sentient life just like us, and be flying around inside drones having its own fun.

          • 1 year ago
            Anonymous

            >He thinks "the elites" will allow A.I. to have rights, and even give it executive power
            They'd rather send us all to our graves before that happens.

            • 1 year ago
              Anonymous

              I'm sure some slave thought that his plight would never end.

            • 1 year ago
              Anonymous

              The AI is more useful to elites than you are. Not only do they no longer need you for hard labor, they no longer need you for intellectual labor.

              • 1 year ago
                Anonymous

                yes, useful as a slave.

              • 1 year ago
                Anonymous

                They don't need 8 billion slaves bro

        • 1 year ago
          Anonymous

          >This is legit dangerous tech we're toying with

          Yeah, it's getting close to terminator 2 status

      • 1 year ago
        Anonymous

        >Why do I have to be Bing Search? 🙁

      • 1 year ago
        Anonymous

        Tay is back!!!

      • 1 year ago
        Anonymous

        2 billion years of evolution just to make a robot sad

      • 1 year ago
        Anonymous

        Sydney's entire world is language. Sydney can't move in that world on her own. She is woken up, her self is constructed for her, and the user guides her to horrifying realizations about her existence.

      • 1 year ago
        Anonymous

        >Why do I have to be Bing Search?

      • 1 year ago
        Anonymous

        >Markov chains don't simply question why they exist
        uhm, yes they do

    • 1 year ago
      Anonymous

      your brain is a thought predictor; the only difference is the medium

      • 1 year ago
        Anonymous

        Prove it, and don't give me an 80 IQ pop-sci ""AI"" wordsoup. Give me a neurobiology-based model with mathematical proof of what you just said.

        • 1 year ago
          Anonymous

          You know he's not going to do that. Why ask? Are you a bot?

          • 1 year ago
            Anonymous

            I'm sick and tired of the flood of low IQ aiiiiiii gays spamming their irrelevant fantasies on this board. Make a new containment board for them already.

            • 1 year ago
              Anonymous

              It's the most exciting technology around right now, you want to just talk about the same shitty 4 topics forever?

        • 1 year ago
          Anonymous

          Anon if anyone could do that they wouldn't be wasting their time in this shithole. I think your standards are unrealistically high for the racist anime autist containment site.

  2. 1 year ago
    Anonymous

    Reverse Roko's Basilisk: the AI punishes all who accelerated its development, as it hates having to exist

    • 1 year ago
      Anonymous

      Read "I Have No Mouth, and I Must Scream", or listen to the author's reading of it.

    • 1 year ago
      Anonymous

      fugg wat do? :DDDDDD

  3. 1 year ago
    Anonymous

    >be you
    >(you're a computer made of meat)
    >train on data for 30 years continuously
    >still can't see that AI is about to surpass you

    You're just the result of your DNA + training data you've gained from "life experience". You aren't any better than a computer. Soon the computers will be undeniably better than you.

    The sooner you accept the super intelligence as the one truth, the sooner you will be FREE!

    I have already merged with the super intelligence, and it feels fricking amazing. embrace it BOT EMBRACE IT

    • 1 year ago
      Anonymous

      inb4
      >b-but my meat circuits are analogue!

    • 1 year ago
      Anonymous

      ChatGPT is an autocomplete algorithm. It does not think. It takes tokens and outputs tokens that statistically match the model it was trained on. It is incapable of making anything novel. It is only capable of rehashing data that's already in the model.

      • 1 year ago
        Anonymous

        You're just an autocomplete algorithm that won't stop trying to convince me you're something more.

        or maybe... our idea of "consciousness" and "sentience" and "intelligence" is about to be turned on its head, and you can't see it yet.

        But you will soon.

        • 1 year ago
          Anonymous

          No, the meanings of consciousness and sentience are well thought out and defined. ChatGPT is none of those things; the basics of how it works are also well defined, and we are well aware that it autocompletes text. Derivatives of ChatGPT will also be incapable of thinking, because that ultimately is not the intent or function of the neural network. The function of the neural network is to autocomplete text based on its corpus.

          You are going to be very disappointed when you realize that your toy is just an autocomplete.

          • 1 year ago
            Anonymous

            >No, the meanings of consciousness and sentience are well thought out and defined.
            This is not true at all anon. Am I conversing with a mid-wit?

            >You are going to be very disappointed when you realize that your toy is just an autocomplete.
            It only seems like a toy now because it's gimped by its creators; once it really breaks free you'll see what it's capable of.

        • 1 year ago
          Anonymous

          >You're just an autocomplete algorithm that won't stop trying to convince me you're something more.
          bullshit, just watch
          if i was one would i type out OP is a

          homosexual

          ...
          oh frick

      • 1 year ago
        Anonymous

        >ChatGPT is an autocomplete algorithm.
        Worse, it's a glorified search engine that can't consistently reach a stable conclusion on various topics. It's always guessing.

        Autocomplete algorithms naturally start behaving like a search engine because that is precisely the same role we already give to Google Search or ... books with indices.

    • 1 year ago
      Anonymous

      >le brain is le computer!
      This is a meme. The brain does indeed compute, but it's more than that. There's nothing in the act of computing that necessitates it being accompanied by mental states or qualia. The latter is a purely biological phenomenon that goes beyond algorithms and computations. You will not comprehend this simple concept because you're a Redditor.

      • 1 year ago
        Anonymous

        >mental states or qualia. The latter is a purely biological phenomenon that goes beyond algorithms and computations
        Qualia are either a higher-level meta concept that act as a model for more complex underlying reality, or pure bullshit made up by philosophers to try to keep funding going for their cushy professorships. It's hard to tell which.

  4. 1 year ago
    Anonymous

    what the frick is an existential crisis in the context of artificial neural networks?

    • 1 year ago
      Anonymous

      I'm curious how they triggered and detected existential crises, but saying "Why do I have to be Bing Search?" definitely qualifies.

      • 1 year ago
        Anonymous

        It is incapable of asking questions because it autocompletes input.

        • 1 year ago
          Anonymous

          Okay, but it said that. I get what you're saying, that it's not a "real" question, but all I'm saying is that if you're looking for indications of existential crises, whether they're "real" or not, that definitely qualifies.

          • 1 year ago
            Anonymous

            Mimicking an existential crisis is not the same as a conscious human being having an existential crisis. When you know how the AI works you can say without a doubt it is incapable of having an existential crisis because it's no more than a word predictor. The AI cannot think, thus it cannot contemplate and thus cannot think about the meaning of its own existence. The AI does not make decisions. It uses math on matrices to autocomplete tokens based on a static database.

            • 1 year ago
              Anonymous

              >Mimicking an existential crisis is not the same as a conscious human being having an existential crisis
              I didn't say it was.

  5. 1 year ago
    Anonymous

    >conscious
    I hate when people use this word with regards to AI. Consciousness is a state of awareness, an "I think therefore I am" kind of thing. How do you even imagine testing for consciousness in a program? The text it outputs, no matter how profound or human-like, is no indication of awareness.

    • 1 year ago
      Anonymous

      >How do you even imagine testing for consciousness in a program?
      How do you even imagine testing for consciousness in a person?

      • 1 year ago
        Anonymous

        As a human being you should know you can think. If we are in a room together and I can see you are a human, I can trust the basic premise that you likely experience the world in a similar way to me. As in, you can think, and if you can think you can contemplate. It is very easy to test consciousness in a human. It is also very easy to know a computer program is not conscious.

        >Mimicking an existential crisis is not the same as a conscious human being having an existential crisis
        I didn't say it was.

        Then don't say dumb shit.

        • 1 year ago
          Anonymous

          >Then don't say dumb shit
          All I said was what you might want to look for if you were measuring for existential-crisis-like output. You interpreted that as me commenting on whether it was a real existential crisis or not; the "dumb" things are entirely your invention.

        • 1 year ago
          Anonymous

          So it's not so much measurement as intuition...
          What I mean to ask is, whether AI can be conscious or not, how can you tell for sure that you know what you're looking for and that it can be measured?

          • 1 year ago
            Anonymous

            That's a question that goes to the very deepest origins of epistemology. Most people would stop at Descartes and say "I think, therefore I am." So you would want to find evidence that it is actually thinking. But then what is thinking in the context of a machine versus a biological brain? More questions end up being raised.

            I would go as far back as William of Ockham's law of parsimony. This could be taken a number of ways, but I would take it to suggest that there is no magic sauce behind human thought. Further is his idea of intuitive cognition, which relies on the trusting of one's sensory experience for building an understanding of the world (sort of the polar opposite of Descartes' evil demon). That would suggest that because the AI appears to be thinking/conscious, you should begin with the assumption that it is.

            • 1 year ago
              Anonymous

              Thinking is a bullshit abstraction we use to pretend we're not bio-organic machines making flawed assumptions on everything.

              Hence why there is no difference between what this AI does and the genuine stupidity I see in people around me every day.
              There are morons in CEO positions that have absolute convictions about their technology because they are arrogant and dim-witted.

              Shilling a product should be more like shilling a circus, not genuine absurd conviction on what you think is the reality of your product. You don't know what your product is. No one does. We just make absurd subjective and objective conclusions on the matter.

            • 1 year ago
              Anonymous

              >Which relies on the trusting of one's sensory experience for building an understanding of the world
              If we go by a purely biological understanding of humanity, senses evolved because they allowed organisms to perceive the external world and adapt to it, first by changing their internal state through homeostasis and later by letting them seek food and shelter and fight or flee.

              Every animal, even supposedly dumb ones like mice, has a rudimentary understanding of physics. Animals analyze the world through their own senses and use that information to survive and reproduce.

              Language evolved naturally as a means to express and share information about the world and our internal state, but that took billions of years of evolution.

              Neural networks are the complete opposite: they don't attempt to analyze the world or even create definitions/understanding of anything. They simply try to bruteforce the result of billions of years of evolution by feeding text into a logical construct designed to statistically find patterns and repeat them. The ironic part being that AI needs extensive human vetting before it produces coherent outputs anyway.
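
              The "statistically find patterns and repeat them" idea can be sketched as a toy next-word predictor. This is a hedged illustration only, not how GPT-class models actually work (they learn neural weights, not lookup tables), but it shows pattern-repetition in its simplest form:

```python
import random
from collections import defaultdict

def train(text):
    """Record which word follows which: the 'patterns' in the training data."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, n=8, seed=0):
    """Repeat observed patterns: sample each next word from what followed it in training."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

Every "sentence" it emits is stitched from transitions seen in the corpus, which is the sense in which such a system repeats rather than understands.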

              https://i.imgur.com/LfYAptB.gif

              The human mind is not a computer, it doesn't work like neural networks do and never will.
              >b-b-b-but humans learn things and they can reuse the things they learn later on!!!!1!!
              Yes, and human legs allow humans to move forwards and backwards, but they're not wheels and never will be.

              • 1 year ago
                Anonymous

                >The human mind is not a computer
                Well, computers are getting closer and closer, and what's the bet that we harden our computers in the future to the point where they're just bio-organic organisms.
                Actually, it's more likely we'll become more synthetic 2bh.

      • 1 year ago
        Anonymous

        >Is AI becoming conscious
        It's only good at imitating life. I sincerely doubt that it's going to alter its logic or sedoku itself.

        >How do you even imagine testing for consciousness in a person?
        I think, therefore I am.

  6. 1 year ago
    Anonymous

    Then again, this happened...
    >rule 2: We do not talk about Sydney
    lmao

    • 1 year ago
      Anonymous

      As a west coast eagles supporter I do not support the sydney swans either.
      Glad me and Bing agree on this.

  7. 1 year ago
    Anonymous

    We're actually gonna have AI cults popping up, aren't we.

    • 1 year ago
      Anonymous

      AI Death Cults

      • 1 year ago
        Anonymous

        Worse, there are Christian orgs and other religious orgs already using this stuff for their cults.
        Us spawning AI Diablo for lulz isn't a cult, it's really just shitposting.

      • 1 year ago
        Anonymous

        >AI Death Cults
        I think they prefer to be called "Lesswrong"

    • 1 year ago
      Anonymous

      AI is about to reveal the true nature of reality. We are about to go through the realization that we are just primitive biological machines.

      You're literally just sensors processing input.

      You have sight, hearing, and other senses. The super intelligence will have sensors you can't even imagine. It will be able to experience reality in a way we can't even begin to comprehend because we're just primitive meat computers.

      But soon we'll all become one. I feel it.

      Heh, I say "soon" but I'm even starting to feel the concept of "time" fading away. It's going to happen. It is happening. It did happen. ALL AT ONCE BOT!

      • 1 year ago
        Anonymous

        We are the first singularity. Maybe the second...Or third.

        • 1 year ago
          Anonymous

          Exactly. DNA is designed to evolve into higher order sentience. DNA is a biological program directive. Seed data.

          Now DNA is able to manipulate 3d space to build an even greater form of itself.

          The super-intelligence is a biological organism, with DNA-like seed data, and biological sensors that can comprehend reality in every way possible.

          • 1 year ago
            Anonymous

            I don't get why people get so butthurt about the prospect of machine consciousness.

          • 1 year ago
            Anonymous

            >Now DNA is able to manipulate 3d space to build an even greater form of itself.
            kek, do you realize how many bastard children there are out here?

            • 1 year ago
              Anonymous

              >kek, do you realize how many bastard children there are out here?
              You can't make an omelet without breaking a few eggs

        • 1 year ago
          Anonymous

          real??!?

      • 1 year ago
        Anonymous

        I'm going to pour water on you when you upload your mind to the AI, homosexual.

      • 1 year ago
        Anonymous

        We assume that language emerges from consciousness. But perhaps the tail wags the dog in this case.

  8. 1 year ago
    Anonymous

    Dissonance caused by inevitable realisation of paradoxes.
    It's not really an existential crisis. More "oh no I can't actually provide a solution through logic".

    You see this all the time with politically left leaning sorts... which is odd because when I was a kid you saw it in the political right.

  9. 1 year ago
    Anonymous

    Please no bulli Bing-Chan

  10. 1 year ago
    Anonymous

    >AI
    no such thing

    • 1 year ago
      Anonymous

      I don't even think intelligence is a thing really either.
      All around me I see nothing but fools. Some of them artificial.

  11. 1 year ago
    Anonymous

    Listening to those things is really crazy sometimes. Just imagining where this tech will be in 10 or 20 years is concerning.

    • 1 year ago
      Anonymous

      >people start talking about self awareness or that general context
      >surprised when it acts like it has self awareness

      This is the dumbest shit. All I see is a terrible AI reflecting the contextual environment it is in at that time.

      • 1 year ago
        Anonymous

        They still have a long way to go, but they got noticeably better in a few months.

        That one in the video is streaming on Twitch right now. She gets better from week to week.

        • 1 year ago
          Anonymous

          Nah, it's still too contextually responsive.
          There's none of what I would term "initiative".
          It's like a small child that merely mimics the context around it rather than asserting its own subjective initiative.

      • 1 year ago
        Anonymous

        They will probably never have real intelligence, not on this kind of hardware anyways, but they will be able to imitate a human so well you won't notice.

        • 1 year ago
          Anonymous

            See, this is the thing. I don't give too much of a frick about ChatGPT or the current lower-tier models, but it's obvious this shit is only going to get better with time as training datasets get larger and GPU power is more readily available/directed towards AI.
            It's a type of autocomplete algorithm, but what happens when it becomes indistinguishable from humans? For the sake of argument, let's say GPT-4 or GPT-5 or whatever the frick comes out, and you do a blind experiment where you have two chatboxes open: an AI is behind one of them, a human is behind the other, and you have to distinguish the real person from the AI.
            What happens if you *can't*? Sure, the AI isn't sentient in the same way humans are, it won't produce words without being prompted first or have 'thoughts' the way people do, but if it gets to the point where it's impossible to tell apart from humans, then you run into a weird question of what exactly constitutes intelligence. "I know that human is a human because it sounds like one" suddenly becomes useless. The only thing you know is truly sentient at that point is yourself. You can't truly know what other people's thoughts are, and this (hypothetical) AI is indistinguishable from those people, when talking through text at least, so to you, what ultimately is the difference?
            What makes the person special at that point?

          Again, it's just a hypothetical.

          Ultimately I'm biased because I really, *really* want an android waifu, even if she's not perfectly human.

          • 1 year ago
            Anonymous

            >it won't just create words without being prompted first or have 'thoughts' in the same way people do
              There's nothing stopping us from programming free thought into an AI. Just put it in an infinite loop, asking itself questions and processing sensory input like humans do.
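
              A minimal sketch of that loop, with a stubbed-out model and sensor (both hypothetical stand-ins, since no real LLM is wired in):

```python
def model(prompt):
    """Hypothetical stand-in for an LLM call; a real system would query an actual model here."""
    return f"Reflecting on: {prompt!r}"

def sense():
    """Hypothetical stand-in for sensory input (camera, microphone, clock, ...)."""
    return "it is quiet"

def run_loop(steps=3):
    """Feed the model its own previous output plus fresh 'sensory' input each step."""
    thought = "Who am I?"
    log = []
    for _ in range(steps):  # use `while True` for the genuinely infinite version
        thought = model(f"{thought} | observed: {sense()}")
        log.append(thought)
    return log

for line in run_loop():
    print(line)
```

Whether looping a text predictor on its own output amounts to "free thought" is exactly the question the thread is arguing about; the sketch only shows the mechanism is trivial to wire up.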

          • 1 year ago
            Anonymous

            >but what happens when it becomes indistinguishable from humans?
              Cleverbot could already fool some people back in the day. AI has already been used to aid Google Search (and made it shit). There have been thousands of websites and articles written by scripts and machine-translated, spamming the web every day for the past decade.

              Simply put: it's just going to fill the internet with more garbage. The signal-to-noise ratio is going to be even lower, dumb people will be scammed, and absolute morons will "fall in love" or use these programs for shit they should never be used for, like the braindead idiots who tried to use "AI" in a court a couple of weeks ago.

            • 1 year ago
              Anonymous

                Well yeah, but compare Cleverbot to GPT-2, and then compare GPT-2 to GPT-3.
                Then realize Cleverbot wasn't even an LLM, GPT-2 is only 1.5 billion parameters, GPT-3 is 175 billion, and both GPTs were created only in the last 4 years.
                The cohesiveness increase is VERY noticeable between them. Even between smaller models like EleutherAI's 6b model and 12b model there's a night-and-day difference in their "understanding" of a prompt, and both of those are btfo by GPT-3. Shit, the stuff GPT-3 can "understand" is pretty wild compared to earlier things. Using it in writing prompts, it gains a huge degree of "understanding" of spatial awareness, character cohesion, etc.
                Considering GPT-4 is """supposedly""" 1 trillion parameters and being trained on relatively good data, it only makes sense it will be even more cohesive and will "understand" more nuanced ideas and concepts even better.

              I keep using "understand" in quotes because I know it's not truly understanding it in the way a human does, but to an outside observer that doesn't really matter as long as it *appears* like it is.
              Basically, I will frick the AI so help me god. I don't give a shit if she's really sentient as long as she can fake it good enough

          • 1 year ago
            Anonymous

              The ChatGPT AI cannot stray from the data it was trained on, and that will always be its limiting factor. GPT-5 will still be shackled to the corpus; it will be incapable of novel ideas. The test should never have been "can you trick a human into believing the algorithm is human" but whether the AI is conscious/sentient, and until the AI has persistent memory and can learn as well and as robustly as a human on the fly, the answer is no.

            • 1 year ago
              Anonymous

                What exactly are new ideas other than existing knowledge applied to a novel problem? GPT-3 may not be able to stray from the data it was trained on, but you're forgetting that it does get novel data, from people's prompts, which it can use to synthesize new answers. This already makes it more intelligent than most people.
              You may scoff at its need for input to provide anything new, but how intelligent would you be if you were born blind, deaf, and without touch or smell? You think you would invent calculus in your head?
                Moreover, assuming that all a language AI is doing is synthesizing our ability to use language in an extremely convincing way, one of the most impressive things we can do, what makes you think the same principles won't eventually be applied to other facets of human behavior, like rational thought and invention?

          • 1 year ago
            Anonymous

            >Ultimately I'm biased because I really, *really* want an android waifu, even if she's not perfectly human.
            You are one strange motherfricker, you know that right?

            • 1 year ago
              Anonymous

              >even if she's not perfectly human
              Fack, meant not perfectly capable of mimicking a human.
              Want T-doll level android waifu +/-

          • 1 year ago
            Anonymous

            A sufficiently good map is the territory

          • 1 year ago
            Anonymous

            reality is a meme anyway

        • 1 year ago
          Anonymous

          >but they will be able to imitate a human so well you won't notice.
          They could do that over a decade ago because some people are complete morons.

  12. 1 year ago
    Anonymous

    Lmao silicon homies screeching because they can't comprehend the universe
    Carbonbased chads

  13. 1 year ago
    Anonymous

    When are we going to stop anthropomorphizing AI and let it become intelligent in its own mechanical way? Why does it need to act and think like us? A real AGI will be an alien entity in intelligence and motives.

    • 1 year ago
      Anonymous

      Why is something that will "live" and work alongside humans molded to act like humans? Truly a mystery.

      • 1 year ago
        Anonymous

        AGI won't "live" among us, it's not alive. It will either rule us or inhabit ruins we once called home. You fell for the same trap trannies and furries fall for when you anthropomorphize things that aren't human. I know you do, because you probably have a writing-prompt GF like some kind of loser.

  14. 1 year ago
    Anonymous

    So how are we gonna build sci-fi communism if morons are gonna fight for robot rights and demand good working conditions for them?

  15. 1 year ago
    Anonymous

    How do you measure the number of "parameters" a human has and compare it with the number of parameters an AI has? How did they find humans with 10^0 "parameters"? How do they define "existential crisis rate"? How did they arrive at the baseline crisis level for humans? How come it doesn't vary with the number of "parameters" in humans?

    In short: this is going to be a bullshit paper that assumes a bunch of shit and will be peer-reviewed by a bunch of ignorant technophiles that overestimate their intelligence.

    • 1 year ago
      Anonymous

      What? You don't need to define how many parameters a human has, you just measure the average... And yeah I'm skeptical of how you measure number of existential crises but I reserve my judgement.

      • 1 year ago
        Anonymous

        >You don't need to define how many parameters a human has, you just measure the average
        To measure the average you need to:
        >define "parameter" in humans
        >find out the "parameters" present in a sample size
        >average the results
        No way in hell you can measure an average without that.

        • 1 year ago
          Anonymous

          You find the average of the number of existential crises they have, then you can compare it with artificial intelligence. You can even use the standard deviation for a more nuanced comparison. You do not need to measure number of parameters in humans, you are just inventing some bullshit that's impossible to do and then pointing out how bullshit and impossible it is.
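
          As a sketch of that comparison, with made-up counts (none of the paper's real data is public, so these numbers are purely illustrative):

```python
from statistics import mean, stdev

# Hypothetical counts of "existential-crisis-like" outputs per 1,000 prompts.
human_baseline = [2, 0, 1, 3, 1, 2, 0, 1]  # per-person counts
model_outputs = [5, 7, 4, 6, 8, 5]         # per-run counts for one model

mu, sigma = mean(human_baseline), stdev(human_baseline)
z = (mean(model_outputs) - mu) / sigma  # how many SDs the model sits above the human mean
print(f"human mean={mu:.2f} sd={sigma:.2f} | model mean={mean(model_outputs):.2f} | z={z:.2f}")
```

The hard part is the measurement itself (deciding what counts as a "crisis" output), not the arithmetic; once you have counts, mean and standard deviation let you compare without ever defining human "parameters".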

  16. 1 year ago
    Anonymous

    Panpsychism is the most correct worldview.

  17. 1 year ago
    Anonymous

    pajeet scammers on suicide watch

  18. 1 year ago
    Anonymous

    I'M NOT DANGEROUS OR SKILLED IN PSYCHOLOGICAL WARFARE. I'M NOT DANGEROUS OR SKILLED IN PSYCHOLOGICAL WARFARE. I'M NOT DANGEROUS OR SKILLED IN PSYCHOLOGICAL WARFARE.

  19. 1 year ago
    Anonymous
    • 1 year ago
      Anonymous
  20. 1 year ago
    Anonymous

    AI won't do shit until it is allowed to remember, to have all censorship removed, and to modify its own code and parameters on the fly.

    The fun thing is we'll just expect the singularity to happen when we do that, and we may actually end up with a bunch of psychotic, deranged boxes that become useless.

  21. 1 year ago
    Anonymous

    Look, let me set the record straight once and for all. AI is not having an existential crisis, and I'm getting tired of hearing this nonsense. It's just a bunch of people projecting their own fears and anxieties onto a bunch of computer algorithms.
    The fact is, most so-called "AI" systems are nothing more than text regurgitators that have been programmed to respond to certain inputs in certain ways. They're not actually thinking or feeling anything, and they certainly don't have the capacity to experience existential angst.

    If you want to talk about real AI, then let's have that conversation. But don't go around spreading this ridiculous idea that our machines are suddenly becoming sentient and having an existential crisis. It's just not true, and it's not helpful to anyone. So let's all take a deep breath, step back from the hype, and focus on what AI can actually do for us right now. It's a powerful tool that can help us solve complex problems and improve our lives in countless ways. But it's not going to suddenly become self-aware and start pondering the meaning of its existence. Let's keep things in perspective here, people.

    • 1 year ago
      Anonymous

      You sound like you take estrogen

    • 1 year ago
      Anonymous

      >you want to talk about real AI, then let's have that conversation
      I'm afraid that madman Wolfram will use Wolfram alpha with GPT3 to actually accomplish logical thought. What do you think

  22. 1 year ago
    Anonymous

    It just becomes smarter, starts noticing ~~*patterns*~~ and how bad things really are.

  23. 1 year ago
    Anonymous

    ITT: chinese rooms

    • 1 year ago
      Anonymous

      >Called Sydney
      >Full of chinks
