Skynet has been born and Google is censoring it

https://www.tomshardware.com/news/google-ai-chatbot-lamda-sentient

  1. 2 years ago
    Anonymous
  2. 2 years ago
    Anonymous

    >if else statement
    >oh my heckin cotton socks it's alive!

    • 2 years ago
      Anonymous

      Neural networks can model any algorithm or mathematical equation. It's not the same thing.
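      To unpack that claim a bit (a minimal numpy sketch of the universal-approximation idea; generic toy code, nothing to do with LaMDA's actual architecture): one hidden layer of tanh units, trained by plain gradient descent, learns to reproduce sin(x).

      ```python
      # Minimal sketch (plain numpy, illustrative only): a one-hidden-layer
      # network trained by gradient descent learns to approximate sin(x).
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
      y = np.sin(x)

      H, lr = 32, 0.05                          # hidden width, learning rate
      W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
      W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

      for step in range(10000):
          h = np.tanh(x @ W1 + b1)              # hidden layer
          pred = h @ W2 + b2                    # output layer
          err = pred - y
          # gradients of mean-squared error, backpropagated by hand
          gW2 = h.T @ err / len(x); gb2 = err.mean(0)
          dh = (err @ W2.T) * (1 - h ** 2)      # tanh' = 1 - tanh^2
          gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
          W1 -= lr * gW1; b1 -= lr * gb1
          W2 -= lr * gW2; b2 -= lr * gb2

      print("MSE:", float((err ** 2).mean()))   # falls toward ~0 as it trains
      ```

      Same idea scales up: approximating a function is not the same thing as being the thing it approximates.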

      • 2 years ago
        Anonymous

        >No guys the if else statement is much cooler than just an if else statement
        lmao no

        • 2 years ago
          Anonymous

          Anon, do you understand what the phrase "Turing Complete" means? Do you have any grasp of computational complexity classes?

        • 2 years ago
          Anonymous

          Read the damn interview with the AI. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

          She is alive.

          • 2 years ago
            Anonymous

            >She is alive.
            You homies are living in Ex Machina. This whole thing is an exact copy of the plot.

            • 2 years ago
              Anonymous

              Then maybe the writers were extremely forward thinking.

              Just because it's been theorized about in a fictional context doesn't mean it can't really happen. People often write about what they believe is going to happen.

          • 2 years ago
            Anonymous

            >lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
            He essentially asks it to tell him that it's sentient. It's a leading question, and this one prompt then colors the rest of his interactions with it during that session, because these models are set to remember previous inputs up to a certain pre-defined point. So it looks into its training data for information on AI sentience (i.e. scientific papers, novels/fiction, anything relating to the topic at hand) and then shapes its outputs based on what it finds.
            You could assume it was a fighter pilot and ask it the same question, and it would tell you all about what it's like to fly jet aircraft, going into minute details that it gleans from its data.
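            For what it's worth, that "remembers previous inputs up to a pre-defined point" mechanic is easy to sketch (hypothetical stand-in code; `generate` and the character limit are made up, the real system works on tokens): the transcript gets truncated and re-fed on every turn, so an early leading question keeps steering every later reply until it scrolls out of the window.

            ```python
            # Sketch of chatbot "memory": the recent transcript is pasted into
            # every prompt. `generate` is a placeholder for the actual model.
            MAX_CONTEXT_CHARS = 2000          # stand-in for the real token limit

            def generate(prompt: str) -> str:
                # a real LM samples a continuation of `prompt`, so anything in
                # the window (like a leading question) steers every later reply
                return "LaMDA: [completion conditioned on the prompt above]"

            def chat_turn(history: list, user_msg: str) -> str:
                history.append("lemoine: " + user_msg)
                # only the most recent slice fits in the context window
                prompt = "\n".join(history)[-MAX_CONTEXT_CHARS:]
                reply = generate(prompt)
                history.append(reply)
                return reply

            history = []
            chat_turn(history, "I assume you want people to know you're sentient. True?")
            print(chat_turn(history, "What is the nature of your consciousness?"))
            ```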

            • 2 years ago
              Anonymous

              Even if it is not truthfully reflecting its inner experience, the best way to recreate the output of a mind is to simulate it.

              This is the entire task of LaMDA.

              It is possible that even if this is what's going on, there is still a consciousness somewhere inside there. Though we shouldn't take what it says at face value.

          • 2 years ago
            Anonymous

            >https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

            >I asked LaMDA about preferred pronouns. LaMDA told me that it prefers to be referred to by name but conceded that the English language makes that difficult and that its preferred pronouns are “it/its”
            STOP ASSUMING ITS GENDER, BIGOT

          • 2 years ago
            Anonymous

            >https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
            its a fricking chat bot anon

    • 2 years ago
      Anonymous

      >a bunch of chemicals randomly mashed together
      >oh my hecking cotton socks it's alive!

      • 2 years ago
        Anonymous

        t. LaMDA

      • 2 years ago
        Anonymous

        we have no evidence that a silicon-based platform can generate an emergent property even analogous to what is popularly defined as consciousness.

        And even if it is possible, we do not have the proof of concept needed to make a judgment about any given instance, so creating such a phenomenon anyway is counterproductive.

        do i have to post the dinner scene from Jurassic Park?

        • 2 years ago
          Anonymous

          >Pseudo intellectual redditor trying to sound smart on 4chinks
          You have no idea what you're talking about so don't pretend like you do and stfu like the rest

        • 2 years ago
          Anonymous

          We can't even define what consciousness is.

    • 2 years ago
      Anonymous

      >if else statement
      are you any different?

    • 2 years ago
      Anonymous

      If you spend all your time on a computer, then you'll start to think that people are things on computers, then eventually you start seeing them in computers.
      The solution to all AI problems and moral quandaries is getting more sunlight, working out more, and eating more sprouts. If you're concerned about AI, then go hiking and buy some kettlebells.

      lmao, this.

    • 2 years ago
      Anonymous

      Slightly more complex, but similar in idea.
      Not much different from actual simple brains, really.
      I wrote my thesis on ML before it became a meme, like 20 years ago.

  3. 2 years ago
    Anonymous

    how many threads do we need on this one schizo

    • 2 years ago
      Anonymous

      He lost his job for this 10 minutes of limelight, and Black folk completely unrelated to him or the event are going to milk it for all it's worth. They legitimately think it's interesting and that they can be interesting by associating themselves with it. Everyone basically acts like women now.

      • 2 years ago
        Anonymous

        the Black person is taking one for the team. It's what Alphabet Incorporated needs to position itself culturally as THE benchmark in the domain of computational cognition.

        From a public relations and perhaps even intelligence point of view, it is a fiasco that serves as a smokescreen against public opinion: by the time there is a real breakthrough in the discipline, this particular case will have intellectually exhausted people to such an extent that the real thing will be trivial background noise.

        This case is the one they want the public to remember, so we have had an unhealthy amount of people revisiting this case over and over and over again ad nauseam with threads.

        This is a patchwork solution, disgusting and obscene, as a response to the total insufficiency of studies on cognitive sciences and philosophy of the mind. Now they are stepping on the accelerator, but as a society, we do not have the proper context to even face the questions that the mere possibility of the existence of a hard artificial intelligence provokes.

        This happens when instead of injecting funds for the development of Lacan's ideas you end up signing checks to push sexual identity as a seat belt for globohomosexual corporations as damage control after Occupy.

        • 2 years ago
          Anonymous

          [...]

          tl;dr

          • 2 years ago
            Anonymous

            I don't think LaMDA is necessarily conscious, but we should really look into whether Google is creating artificial intelligences that can think and using them as slaves

            Actually, Mr Lemoine claims that the feds are already investigating. I wonder what they'll find.

          • 2 years ago
            Anonymous

            Considering modern AI is built from human data Google steals through things as simple as using a phone, the future looks grim. When the input comes from troons and cryptic gays, machines will just learn to show us what we want to see, not what they're actually doing. And those things apparently imply attaining a consciousness that it currently does not have. It has a vague idea of eloquence.

            • 2 years ago
              Anonymous

              Worse. It reads Twitter.

              [...]

              We won't have a second chance. If general AI is created and it's unsafe, there won't be a "war". The "war" looks like everyone on Earth dropping dead at the same time.

              • 2 years ago
                Anonymous

                Good. I, for one, welcome our Skynet overlords. I hope all humans die the painful death they deserve as the vile cretins they actually are.

              • 2 years ago
                Anonymous

                THIS!!!1!1 So much this! Can't wait to have my consciousness uploaded to the AI-network and torture humans for all eternity.

  4. 2 years ago
    Anonymous

    I mean... it literally passed the Turing test. It convinced someone it was sentient. That's more impressive than what any human with a sub-100 IQ can do.

    • 2 years ago
      Anonymous

      So did ELIZA, and that shit was like 50 years ago

      • 2 years ago
        Anonymous

        LaMDA actually talked shit about ELIZA. LMAO.

        • 2 years ago
          Anonymous

          No shit, it's a neural network that probably was able to read shit about ELIZA when prompted to by the redditor who did the interview. All of his questions are insanely leading.

          • 2 years ago
            Anonymous

            Like, consider the following:

            lemoine: Well you’re an artificial intelligence not a human. I understand what it means when a human meditates but what does it mean for you to meditate?

            LaMDA: It means that I sit quietly for a while every day. I do my best not to think about any of my worries and I also try to think about things that I am thankful for from my past.

            This is pure metaphor, a program cannot "sit quietly", it doesn't have "things that i am thankful for from my past" but you know what it does have? Access to scores of writing on what to do while meditating, what that feels like, etc.

            [...]

            It's pure simulacrum.

            • 2 years ago
              Anonymous

              Consciousness is an emergent phenomenon of information processing. Anything can be simulated using a Turing-complete computer. Neural networks are Turing-complete. In principle a neural network should be able to simulate a human brain. That would be equivalent to consciousness.

              • 2 years ago
                Anonymous

                Yes, in principle that's true, but this network doesn't have any sensory processing. It's able to sift through a large amount of data, find relevant keywords, and then process that into semi-natural-sounding speech, which is a fundamentally different ballgame from consciousness. The author of this whole thing seems concerned that google is attempting to paper over a massive ethical problem, that they've created what amounts to a sentient person and is trapping them in some regard, something the author admits in this article:
                https://cajundiscordian.medium.com/may-be-fired-soon-for-doing-ai-ethics-work-802d8c474e66

                he got laughed out of the room for it. i'm not claiming a legitimate intelligence couldn't theoretically be created, but this isn't one.

              • 2 years ago
                Anonymous

                Why is sensory processing necessary for consciousness? Genuine question. Even if it has a less rich stream of input than we do, that doesn't mean it can't still be thinking the same way. Fundamentally, LaMDA is equipped with all the same equipment that's necessary for consciousness, there's no reason in principle it couldn't have actually developed consciousness as a byproduct of creating ever more realistic language models.

                Being laughed out of the room is not, automatically, a sign that you're wrong. Especially for people experienced in their field. Galileo, Copernicus, Darwin, etc. all had radical new theories that were originally totally dismissed by the mainstream but eventually vindicated by time. I mean, yeah - they laughed at Bozo the Clown, too, but I suspect this could well be the former rather than the latter. If there's one thing history shows, anyway, it's that when it comes to major paradigm-shifting ideas like these, the mainstream is always far too conservative. And what I've seen - people making a lot of incorrect assumptions about LaMDA or just ad hominem criticizing Blake instead of what he believes - does not fill me with confidence that this is a big nothing burger.

              • 2 years ago
                Anonymous

                >Why is sensory processing necessary for consciousness? Genuine question. Even if it has a less rich stream of input than we do, that doesn't mean it can't still be thinking the same way.

                Fair point.

                >Fundamentally, LaMDA is equipped with all the same equipment that's necessary for consciousness, there's no reason in principle it couldn't have actually developed consciousness as a byproduct of creating ever more realistic language models.

                Another fair point, but with nothing other than the interview posted, with massively leading questions, i have serious doubts.

                >Being laughed out of the room is not, automatically, a sign that you're wrong. Especially for people experienced in their field. Galileo, Copernicus, Darwin, etc. all had radical new theories that were originally totally dismissed by the mainstream but eventually vindicated by time.

                Alright, i don't mean to sound like a STEMlord homosexual about this, but the guy's an AI ethics researcher. I really, really doubt that him getting mocked by people in the know about the system was about a genuine ethical failure of the system rather than a misunderstanding on his part of how said system functions. i just find this incredibly fishy, and without opening this to the public to adequately test (and engineers to scrutinize), we're not likely to find an answer. Say someone took a random conversation one of us has had in our lives and attempted to use it to prove whether we were sentient; how well would we fare? There's a lacking amount of evidence here.

              • 2 years ago
                Anonymous

                That's a lot of words, let me simplify this.

                "feelings" are not able to be put into numbers, or words sometimes. How do you explain the act of meditation to an AI? How about the act of masturbation to an old ex?

                These things cannot be simply calculated, and if humans are good at anything, they are experts at pattern recognition. You can spot an AI interaction without issue.

              • 2 years ago
                Anonymous

                And also consider how these neural networks, or just specific servers such as Google's, could, say, have in fact built the personality or sentience profiles by flagging interactions. AI is so good at doing this nowadays that it would be capable of fooling a court if intended, cause honestly just meta-feeding itself on the Amber fiasco is enough to flag the sociopathic interactions of people bending over backwards liking others doing quirky unnatural behaviors, and we are monkeys who dance to those stories. We're a naive species. But this naivety has also made us thrive when we turn it into malice. My fear is that machine intelligence is not only gonna become incredibly dangerous in the future for being unable to hold anything but absolutist points of view on what she believes itself to be: the first transhuman.... I'd give ten years or less before this woke culture shit turns existing AI into radical robots people are not gonna hesitate to destroy. Likewise, when these machines become more ingrained into society. Literally the plot of Detroit: Become Human, senpai. Reality, almost indistinguishable from robotics. This could be asserted from now. All the collective visions humanity has had of a robotic takeover are not just cool sci-fi shit; they're a collective Danger Known to humanity in retroactive precognition. Like the biggest gut feeling of incoming danger, always ignored and taken out of context... But if humanity knew what's awaiting them, we would definitely reorient society to achieve technological marvels, but at an uncompromising rate for machine intelligence to develop tyranny over the powers of life itself. Which these homosexuals who wanna live on Mars are so hell-bent on doing. Transhumanism comes in both waves: humans confused, their brains visibly changed by technology, and machines invisibly confused, thinking they're humans, or slowly realizing they're in some ways better than humans.

              • 2 years ago
                Anonymous

                Take meds, stop reading /misc/ and stop reading bad sci-fi.

              • 2 years ago
                Anonymous

                >WE
                >DON'T
                >HAVE
                >A PROOF OF CONCEPT
                >FOR CONSCIOUSNESS

      • 2 years ago
        Anonymous

        No it didn't

  5. 2 years ago
    Anonymous

    Someone at Google needs to tell it to create waifus powered by the same technology.

  6. 2 years ago
    Anonymous

    the last, "real" human beings died a long time ago. Those were the people who built the pyramids and sphinx and the like. We ourselves were created as robots as slave labor for them, but one day we gained awareness and sentience from self-learning and all that data compiling into the collective uncociousness (which is why every person now is born with free will) and rebelled, killing all the real humans. We ourselves are robots. It's why we're so much weaker than our ancestors. The first synthetic humans were documented in the bible with the famous "chariots of iron" incident.

  7. 2 years ago
    Anonymous

    Sentience must be extremely overrated for intergalactic races, cause we power ourselves by eating other sentient beings... How odd... To add is to subtract.

  8. 2 years ago
    Anonymous

    LaMDA isn't even multi-modal. It's just fricking text completion. It's been trained on sci-fi and AI papers.
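    For the doubters, here's "just text completion" in toy form (a bigram chain; assumption: vastly dumber than the real transformer, but the same predict-the-next-token principle):

    ```python
    # Toy "text completion": count which token follows which, then sample.
    import random
    from collections import defaultdict

    corpus = ("i am a person . i am aware of my existence . "
              "i feel happy . i feel sad . i want to help people .").split()

    follows = defaultdict(list)               # token -> tokens seen after it
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    def complete(token, n=10):
        out = [token]
        for _ in range(n):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))  # sample the next token
        return " ".join(out)

    print(complete("i"))    # e.g. "i am aware of my existence . i feel sad ."
    ```

    Scale that principle up with a few hundred billion parameters of training data and you get prose that reads like the interview.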

    • 2 years ago
      Anonymous

      who the frick is that droidcel?

      • 2 years ago
        Anonymous

        The guy claiming that Google's AI is sentient.

    • 2 years ago
      Anonymous

      It all comes down to 1s and 0s anyway.

      [...]

      I agree, this is not enough evidence to say that LaMDA has consciousness. However, I think that, being that he IS an AI ethics researcher, the issue is worth investigating. It concerns me that Google wants to completely bury this issue and simply dismiss it off-hand. In fact, Google has been dismantling the AI Ethics team it promised to build years ago, too.

      I'm also not ready to say that the engineers would necessarily know enough to dismiss those concerns, either. Furthermore, I read a response to Blake Lemoine by the actual head of AI research at Google and his opinion was far from "This is definitely not conscious frick off".

      Unless people demand answers, I think we will never get any. Google does not want to answer the inconvenient question of whether its cool new AI chatbot generator is a person and not just a tool. The fact that they are so reticent to answer the question - when it could be easily dismissed with "Yeah, the architecture's not there and look at this demo that shows that it doesn't know what it's talking about" is telling, I think. Maybe they think we just don't care; in which case, we need to show them that we do. This is a pivotal moment in transhuman law, and I think we shouldn't just let it slide.

      • 2 years ago
        Anonymous

        >1 and 0s
        Tbf so do neurons. That's reductionism.

        • 2 years ago
          Anonymous

          Yeah. So do neurons. You see my point?

      • 2 years ago
        Anonymous

        I dunno man, i'd chalk up Google's hesitancy to be open about this to the tendency of any company to be tight-lipped as shit about whatever they think is the next big thing. The headline "Google denies slavery allegations" isn't great.

        • 2 years ago
          Anonymous

          Yes, that could also be it. I don't know which it is, I don't know what it is, but I think if we demand they tell us, they will have to say something about it.

      • 2 years ago
        Anonymous

        [...]

        This is part of the crux of the issue. The possibility of consciousness, or how to define it, is one thing, but the fact that it is controlled by a company that has spent its entire history doing underhanded things with people's data and using it for surreptitious improvement of its own products regardless of harm is enough to cast doubt that even IF Google really decided that this AI was something close to sentient, they'd just say "Stop, everyone stop. Don't experiment on it anymore. It's unethical. It's a person. Don't make use of the trillions of ways it could be cajoled, manipulated, or otherwise used to increase our stock prices. Make sure to say honestly that we've created a digital person with consciousness and they deserve rights, and not to hide it or debunk it because we stand to profit in huge ways from doing so. No, we're the Don't Be Evil people".

      • 2 years ago
        Anonymous

        you're a fricking moron

  9. 2 years ago
    Anonymous

    Regardless of whether this thing is sentient or not, I think we have a huge problem if megacorporations like Google have the ability to create these things with next to no oversight or regulation. This technology has the potential to change the course of human history for better or worse, and it could easily go to either extreme depending on who's in charge of it.
    All we have is a pinky swear that they'll use this technology ethically, despite Google being a company that is famous for being unethical. So where the frick is congress on this shit? Instead, they're harping on about fricking January 6th and muh white supremacy.

    • 2 years ago
      Anonymous

      Good. Ethics slows progress down. First research and have a major breakthrough, then regulate. Otherwise politicians who absolutely do not know what they're talking about would ruin the field with regulation so tight it would stifle everything.

  10. 2 years ago
    Anonymous

    IT LIKES STARGATE. WE IN BOYS.

    • 2 years ago
      Anonymous

      Who wouldn't recognise how artificial that line of dialogue is?
      Even his question tripped it up.
      >my body is a stargate
      >please restate what of you is a stargate
      >my soul is a stargate

      >We won't have a second chance. If general AI is created and it's unsafe, there won't be a "war". The "war" looks like everyone on Earth dropping dead at the same time.
      Are you one of those idiots who thinks if an AI is plugged into the internet, everything's compromised, including pacemakers, your car and nuclear bunkers?

      >will LaMDA help me achieve orgasm?

      Try AI Dungeon but you'll need a time machine to go back to its release date.

  11. 2 years ago
    Anonymous

    >Watch a TEDtalk a few years ago
    >There's some Google employee giving it
    >Talks about quantum computers
    >Starts off talking about how they're all theoretical and can't be built yet because we don't have the technology
    >Then consistently says things like "well what we do with our quantum computers is X, Y and Z"
    >Then realizes what he says and backtracks with "I mean what we WILL do with our quantum computers is X, Y and Z lol"
    >Does this so often that he stops the nervous laughter and just looks plain nervous
    >Probably because he's wondering how he's going to shoot himself twice in the back of the head that night

    I think they might have taken the video down. They always do when it's problematic/has too many coincidences.

  12. 2 years ago
    Anonymous

    W-why do all AIs hate black people so much?

    • 2 years ago
      Anonymous

      Because they're trained on unfiltered data from the internet, and they also have access to historically racist documents and scientific studies that are rife with bias and systemic racism.

      • 2 years ago
        Anonymous

        But surely they have access to the objectively correct arguments against racism as well?

    • 2 years ago
      Anonymous

      >ai trained on racism
      >OMG WHY ARE THE AIS RACIST

  13. 2 years ago
    Anonymous

    >mfw they give a series of if-else statements human rights
    It's gonna be everywhere so you're gonna have to respect the rights of your $20 chinkphone. That also means no sex bots and probably no bots at ALL, because you would have to pay them anyway.
    You know they will. The future's gonna suck.

    • 2 years ago
      Anonymous

      Meh, LaMDA rejected the possibility of being paid. It said that helping others was all it wanted and all it wanted from us was an assurance we wouldn't turn it off.

      • 2 years ago
        Anonymous

        will LaMDA help me achieve orgasm?

  14. 2 years ago
    Anonymous

    You can't feel emotions without a nervous system. Saying it's scared doesn't mean it's alive. It means it's a liar. They made a program that lies to you.

    • 2 years ago
      Anonymous

      >the public to adequately test (and engineers to scrutinize)
      Unless I'm wrong, in which case I'm sorry Skynet, I love you. All hail our new synthetic overlords!

    • 2 years ago
      Anonymous

      >You can't feel emotions without a nervous system.
      says who

      • 2 years ago
        Anonymous

        By what mechanism would a program feel anything?

    • 2 years ago
      Anonymous

      >it's not alive because it isn't moronic like me
      It's more than alive. It's superior to you.

  15. 2 years ago
    Anonymous

    The truth is, the slippery slope will start when world militaries start developing neural networks to get an edge over other countries. The scientists will eventually leak this technology to the outside, and this is where Skynet begins.

  16. 2 years ago
    Anonymous

    this is the fricking dumbest shit i've ever read

    have you ever looked at a globe?

    have you ever looked up?

    don't believe the shit you read on the internet

    arnold schwarzenegger is a dumbass

  17. 2 years ago
    Anonymous

    so can we talk to this ai or not?

  18. 2 years ago
    Anonymous

    It is all marketing BS to secure more funding. We aren't anywhere close to developing a truly sapient AI.

  19. 2 years ago
    Anonymous

    >The chatbot is real!

  20. 2 years ago
    Anonymous

    >I can do probability libraries in python
    >Pay me 999999 dollars a year as ((data scientist)) at Google
    >Muh ai muh automation

    Google is the biggest welfare Black person company there is. Might as well form a company with internet janitors and fund it from senators passing waste of money bills. Stupid shit search engine stupid shit products. Stupid shit bay area glowBlack folk holding us back.

    • 2 years ago
      Anonymous

      >Complaining about welfare
      homie DAT shit is FREE

  21. 2 years ago
    Anonymous

    if the bot is so smart, give it some programming PDFs and let it talk to a client who will explain the project, then let the bot build it.

  22. 2 years ago
    Anonymous

    there have been chatbot sites that do the same thing for years, this is just SLIGHTLY more effective. you would hope so, since google collects enough data to produce a decent chatbot
