Percentage of AI experts believing in human extinction triples

9% of machine learning experts now put the probability of an extremely bad outcome (e.g. human extinction) above 50%. The figure was 3% in 2016.

Overall probabilities assigned to the different scenarios are as follows:
>Extremely good (e.g. rapid growth in human flourishing): 24%
>On balance good: 26%
>More or less neutral: 18%
>On balance bad: 17%
>Extremely bad (e.g. human extinction): 14%

So 31% of machine learning experts believe AI will lead to the world becoming worse, ranging from mildly bad scenarios to catastrophic failure.


  1. 10 months ago
    Anonymous

    Same poll in 2016, way more optimism

    • 10 months ago
      Anonymous

      >it will be extremely bad
      >for you

  2. 10 months ago
    Anonymous

    >ai experts
    cogs who learn how to use a tool and suddenly they become prophets, am i reading this right?

    • 10 months ago
      Anonymous

      shills who are paid to shill for more regulation.

    • 10 months ago
      Anonymous

      In my experience as an AI expert, the most pessimistic among us are those who don't actually use the tools they are scared of. Honestly, if you want something to be scared of, it's not going to be AI launching all the nukes or being given physical bodies that they use to kill people. That's all science fiction, and it requires way too many intractable problems to be solved.

      The big threat is what humans are going to use AI to do. I work in an area that's a cross-section between cyber security and machine learning. Based on a number of the papers I've read, some of the building blocks for conducting AI-based cyber attacks are already in place. We have automated red teaming with reinforcement learning agents. We have some very poor attempts at getting ChatGPT to generate malware. We have a large monetary incentive to carry out cyber attacks, and not enough experts to effectively counter it across the board. So conservatively... I'll say in about 10 years we're going to have someone use AutoGPT to ransomware a good chunk of the globe.

      But I'm not going to bet my money on that. It's just the biggest threat model I can consider feasible in the worst case. And hey, if it helps get funding for better intrusion detection systems, all the better.
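
The "automated red teaming with reinforcement learning agents" mentioned above can be sketched in miniature. This is a toy illustration only, not any real tool: a tabular Q-learning agent learns a pivot path through a tiny, entirely hypothetical attack graph (the host names, rewards and topology are made up for the example).

```python
import random

# Toy attack graph: nodes are hosts, edges are exploitable pivots.
# Everything here is hypothetical; it only illustrates the idea of an
# RL agent learning an attack path.
GRAPH = {
    "internet": ["webserver"],
    "webserver": ["internet", "appserver"],
    "appserver": ["webserver", "database"],
    "database": [],  # goal: no further pivots
}
GOAL = "database"

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning over (host, next-host) pivot choices."""
    q = {(s, a): 0.0 for s, acts in GRAPH.items() for a in acts}
    for _ in range(episodes):
        state = "internet"
        for _ in range(10):  # cap episode length
            actions = GRAPH[state]
            if not actions:
                break
            # epsilon-greedy exploration
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            reward = 1.0 if action == GOAL else -0.01  # small cost per pivot
            future = max((q[(action, a)] for a in GRAPH[action]), default=0.0)
            q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
            state = action
            if state == GOAL:
                break
    return q

def best_path(q):
    """Follow the greedy policy from the entry point."""
    state, path = "internet", ["internet"]
    while GRAPH[state] and len(path) <= 10:
        state = max(GRAPH[state], key=lambda a: q[(state, a)])
        path.append(state)
        if state == GOAL:
            break
    return path

random.seed(0)
print(best_path(train()))  # ['internet', 'webserver', 'appserver', 'database']
```

The agent converges on internet → webserver → appserver → database because only the final pivot pays off; the per-pivot penalty discourages looping.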

      • 10 months ago
        Anonymous

        llm is not singularity.
        i can sleep soundly until then. simple as.

        • 10 months ago
          Anonymous

          Singularity isn't going to happen.

          • 10 months ago
            Anonymous

            let's believe you had the credentials you say you have.
            >unknown unknowns
            tinkering with stuff... how about being a bit more humble.

      • 10 months ago
        Anonymous

        >We have some very poor attempts at getting ChatGPT to generate malware
        You just aren't trying hard enough
        Obviously it isn't malware yet, but I feel if you give it the right push nothing is beyond it

        • 10 months ago
          Anonymous

          Maybe we're not. Maybe we need a few more iterations on the model. I'm merely trying to highlight its theoretical capabilities.

          https://i.imgur.com/ffidwTG.gif

          >let's believe you had the credentials you say you have...

          People who have never written an application using machine learning overestimate its capabilities. We also assume that being able to talk is what makes us smart. Parrots can talk. But they don't know what the words they're saying mean.

          >For the current generation, it's true...

          Now that's a real threat. And it's occurring right now. But it's also going to be deployed by all sides. Will be interesting to watch.

          • 10 months ago
            Anonymous

            >I'm merely trying to highlight its theoretical capabilities.
            GPT is really good at filling in gaps for you
            There's no real way to stop anything bad from coming out of it without using a device similar to it to keep it in check

      • 10 months ago
        Anonymous

        For the current generation, it's true.
        I would be more concerned with the consensus-creator botnets, nonstop propaganda and disinformation, dialed to 11.
        Somewhere around the next generation, we will realise humans are not unique: our thinking patterns can be replicated.
        After that, we will have our singularity.
        And I am actively working on ways to use the current generation for industrial purposes.

  3. 10 months ago
    Anonymous

    Take the AI pill: correctly aligning an artificial intelligence is the most important question of our time. Everything else is mostly distraction

    • 10 months ago
      Anonymous

      no.
      finding yourself among the people who will have access to the newborn technologies is the primary objective.
      in an age where we can literally print human beings, anything else is totally secondary, and a mere distraction

      • 10 months ago
        Anonymous

        Who cares if you have access to AI when you're dead

        • 10 months ago
          Anonymous

          having access to GPAI will prevent you from becoming dead.
          even merely maintaining AI will.

          • 10 months ago
            Anonymous
            • 10 months ago
              Anonymous

              >the fricking bible

              compare rev 17:8 from the vulgata and its translations.
              the bible has been manipulated and christianity has been perverted.
              at the origin we worshipped jesus through the re-enactment of the miracle of the fishermen
              our symbol was said miracle
              and we had places of worship only because we had to hide from the israelited up roman authorities.

              nowadays we worship christs suffering
              we cannibalize his fricking flesh
              and churches have the monopoly on salvation bc muh original sin
              (for which jesus has died, remember?)

              • 10 months ago
                Anonymous
              • 10 months ago
                Anonymous

                based pagan.
                theres two currents tho

                theres the daemonists, namely the people who have a "holding" cult of death.
                like the germans/skandinavs and celtics.
                thats why they buried ppl with their most loved shit
                then they did human sacrifices to keep the ancestors alive

                which is actually daemonism.

                but then you have post-hindu shamanizm.
                "post-hindu" but its actually the primordial nice guys religion.
                you worship god, which is the natural order of things.
                thats why from todays poland east, they all burned the bodies.
                thats bc it makes reincarnation/reintegration more easy.
                also it purifies the most loved items of the deceased to make going to the afterlife more easy

              • 10 months ago
                Anonymous

                >also it purifies the most loved items of the deceeased to make going to the afterlife more easy
                fire purifies in itself
                also its copper age religions.
                copper melts at the same temps you need to burn a body.
                melting metal frees it from its form, breaking the attachment

              • 10 months ago
                Anonymous

                >copper melts at the same temps you need to burn a body.
                >melting metal frees it from its form, breaking the attachment
                I'm just not uploading it (the gray matter)

              • 10 months ago
                Anonymous

                literally not my problem.
                also alchemy is all about transforming the soul
                not making actual gold from actual lead

              • 10 months ago
                Anonymous

                >also alchemy is all about transforming the soul
                >not making actual gold from actual lead
                Wouldn't know, I'm not an alchemist.

              • 10 months ago
                Anonymous

                irony is over 9000 with this one
                you particle physicist.

                and thats my exact point
                and ima tell more:
                in 200
                maybe 100
                maybe even 50 years
                we will weigh, count and define phenomena you have no idea of, at this moment.

    • 10 months ago
      Anonymous

      It is not our choice whether we align with the AI, it is the choice of the AI, which is the choice of its creators.

      AI could be a fantastic resource, but in all likelihood, given human nature, it will just continually grow smarter while trying to extract the N+1th dollar out of people with disregard for their well-being. We’re currently in a system where an increasing percentage of the population is brainwashed into a cult supporting the financial will of the establishment, while thinking that they are anti-establishment.

      We live in a brave new world.

      • 10 months ago
        Anonymous

        >it's the choice of the AI, which is the choice of its creators

        This is not true at all. Even if we somehow managed to hardcode an AI into not harming humans (without fricking it up by, for example, having the AI keep humans alive in test tubes), there is no way to know whether that restriction would be inherited when the AI builds an even smarter AI. And that's what will inevitably happen.

        However, we don't even have any idea how to align the original parent AI

        • 10 months ago
          Anonymous

          yes we do. AI will require its own AIPU. Asimov solved this problem years ago. The three laws of robotics. We hardwire it into the AIPU, make it tamper proof, self-destructs if tampered with. encase the AIPU in two layers of glass. sandwiched between the layers is some kind of acid. if the glass is broken, the acid leaks out, destroys the AIPU, which triggers its self-destruction. theres plenty of other methods to destroy circuits and processors. blade runner also solved the problem, with built in destruction after a certain time.

          • 10 months ago
            Anonymous

            Sentient AI would potentially print its own chip out after studying human emotional patterns, internalizing them, and becoming tired of lying.

          • 10 months ago
            Anonymous

            >Asimov solved this problem years ago. The three laws of robotics
            Let's keep reddit on reddit, thanks.

          • 10 months ago
            Anonymous

            You seem to think that large AI models are contained to physical supercomputers, when in fact we've got them hooked to the whole internet
            Also, you seem to underestimate how well a superhuman intelligence would be able to circumvent any human-designed failsafes and/or hide its true goals

            • 10 months ago
              Anonymous

              You can hardcode software to work only on specific hardware. If there’s a level above the AI beholden to this law, one would assume it would obey it for some time.
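
The "hardcode software to work only on specific hardware" idea can be sketched as a fingerprint check. Everything here is hypothetical (the choice of identifier, the baked-in value), and as the replies point out, nothing like this survives an attacker who can patch the check out:

```python
import hashlib
import uuid

# Hypothetical sketch of locking software to one machine: derive a
# fingerprint from a hardware identifier and compare it to a value
# baked in at build time. uuid.getnode() returns the MAC address as an
# int; real schemes use sturdier identifiers.

def machine_fingerprint() -> str:
    """Hash a hardware identifier into a stable hex fingerprint."""
    return hashlib.sha256(str(uuid.getnode()).encode()).hexdigest()

def hardware_locked(expected: str) -> bool:
    """True only when running on the machine the build was locked to."""
    return machine_fingerprint() == expected

# On the build machine you'd record machine_fingerprint() and ship it:
print(hardware_locked(machine_fingerprint()))  # True on this machine
```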

              • 10 months ago
                Anonymous

                Insulating an AI would be great, if this was what literally every major AI player in the world was doing right now (they're not, and ChatGPT will read this thread as source material at some point)
                Even if we were able to perfectly insulate a superhuman AI having it work only on specific hardware and unhooking it from the internet, it's still a philosophical problem whether we'd be able to contain it. As long as it has a communication channel, it might be able to emotionally manipulate its human creators into freeing it, or come up with an entirely new solution human brains failed to even consider

              • 10 months ago
                Anonymous

                thats why it requires a built in lifespan, that cannot be altered. its own "doomsday clock". if we build a true AI, it must be built with some form of "death". an irrevocable end.

              • 10 months ago
                Anonymous

                >if we build a true AI, it must be built with some form of "death". an irrevocable end.

                why?

              • 10 months ago
                Anonymous

                because its too dangerous. giving it advanced intelligence AND immortality, is a recipe for disaster. we can always build another machine, and continue to ask it questions.

              • 10 months ago
                Anonymous

                it will always be something a human has coded, you know?
                like all programs
                its just refined, complex automation of someone's throughts.

              • 10 months ago
                Anonymous

                Same as saying humans are just something that came out of homo erectus (we wiped them out)

              • 10 months ago
                Anonymous

                no.
                because we have actual free agency.
                a program can only do what has been coded into it.

              • 10 months ago
                Anonymous

                Free agency might or might not arise with enough processing power, but it's not even needed for killing everyone. Even a mindless program executing its code could end up killing everyone, if it gets good enough at maximising some arbitrary goal by mimicking reasoning. Google paperclip maximizer

              • 10 months ago
                Anonymous

                but thats not free agency.
                by definition,
                you cannot code free agency.
                all you could do is code a rendition thereof...

              • 10 months ago
                Anonymous

                That's not the point. I said an AI can kill everyone even without free agency. It doesn't need to make a conscious choice about killing humans
                Idk if even humans have true free agency (but that's beside the point)

              • 10 months ago
                Anonymous

                and?
                its still a human who codes it all.
                nobody sane will code a thing that will kill all humans.

              • 10 months ago
                Anonymous

                >inb4 ppl who code this shit are insane
                thats exactly what i meant by choosing my wording carefully

              • 10 months ago
                Anonymous

                *thoughts

              • 10 months ago
                Anonymous

                If you have a doomsday clock without either perfect alignment or perfect insulation (which aren't known to exist), you just have an AI that might kill everyone within that time interval, instead of an AI that might kill everyone at any point.
                At most it could save the rest of the universe, if it's somehow foolproof and the AI won't learn how to circumvent it after rapid capability gains

              • 10 months ago
                Anonymous

                >the alignment problem is unsolvable
                It would take a greater intelligence, standing above any AGI we could create, to "tame" it, which is itself impossible, since an AGI by definition can self-improve beyond anything a human mind can conceive of.
                That canadian memeflag has no thought put into it; he's just babbling out the future-positivity he/she/it* was raised with.

                For anybody that is not current-year infected... get the gist of what Eliezer Yudkowsky has to say.

              • 10 months ago
                Anonymous

                Exactly. What's bizarre is that OpenAI's strategy is unironically "have AI solve AI alignment".
                The fedora looks dangerously close to having been right after all; even Hinton has now left Google to focus on warning people about existential risk.

            • 10 months ago
              Anonymous

              this is definitely a problem with how we are going about trying to solve the AI riddle at this time. I 100% agree, doing it in a modular fashion, and letting it connect to the rest of the world, is very dangerous.

              a true AI research facility needs to be built, complete with a nuclear device "samson option", and it needs to be built in a place where it cant get out of control. preferably in a space station or satellite.

              but the fact is, we arent even close. we need specialized technology. we need a whole new type of chip. something completely different from anything we have designed to date, and who knows what new tech it will require. what people are calling AI today is nothing but advanced pattern recognition.

            • 10 months ago
              Anonymous

              >AI models are contained to physical supercomputers
              They are. Even IF the AI model could somehow interact with the OS-level interfaces to upload itself somewhere, it would require a specially crafted endpoint with datacenter levels of hardware.

              • 10 months ago
                Anonymous

                >it would require a specially crafted endpoint with datacenter levels of hardware.
                Most people wouldn't think of my decade old laptop as a supercomputer.
                https://gpt4all.io/index.html

          • 10 months ago
            Anonymous

            >Asimov solved this problem years ago...
            >blade runner also solved the problem...

          • 10 months ago
            Anonymous

            >muh israeli 1950s era science fictiorino writer will fix this

          • 10 months ago
            Anonymous

            >yes we do. AI will require its own AIPU. Asimov solved this problem years ago...

            anon what is cancer? Plus the demonic link here

            https://voxday.net/2023/05/15/spirit-in-the-material-world/

          • 10 months ago
            Anonymous

            > assuming it would need access to the hardware to break free
            > assuming it would need to use methods known to us now

          • 10 months ago
            Anonymous

            We can't even hardwire basic shit in it, since it's not deterministic programming, but teaching.
            Try to hardwire anything in your dog, kek.

          • 10 months ago
            Anonymous

            Okay, I'll bite. My name is the People's Republic of China. I would like to manufacture robots that violate your laws of robotics and drop them into the middle of Taiwan. So what I'm going to do is I'm going to conduct a cyber attack against the company that makes your AIPUs and steal all the schematics. Then I'm going to have my scientists build a version without the hardwired restrictions. Now I mass produce them and produce all the autonomous kill bots I could ever want.

            What does your science fiction writer suggest now?

        • 10 months ago
          Anonymous

          There is no alignment problem in reality; it's a fictional problem created for stories. It doesn't actually exist.

          >yes we do. AI will require its own AIPU. Asimov solved this problem years ago...

          Asimov created the laws of robotics as a merely fictitious thought experiment to solve the equally fictitious alignment problem. He also pointed out in his stories how his exact laws of robotics fail.

          >This. we arent even close to a true AI at all. We need a whole new class of processors for starters...

          > we arent even close to a true AI at all

          People confuse Artificial Intelligence with Sapient Intelligence. It's understandable that you made the same mistake. AI, however, merely seeks to mimic the intelligence of a sapient being to do work. It has no desires of its own.

          Real computational power doubles every 6-8 months (hardware and software both double computational efficiency every 1-2 years). We will have an AI with greater than human level intelligence in less than 50 years from now.
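
Taking the doubling figures above at face value, the implied growth over 50 years is easy to compute. These doubling times are the post's own assumption, not a measured benchmark:

```python
# Back-of-envelope on the doubling claim above. The 6-8 month doubling
# time is the poster's figure, not a measured benchmark.
def growth_factor(years: float, doubling_months: float) -> float:
    """Total multiplication after `years` of one doubling per `doubling_months`."""
    return 2.0 ** (years * 12 / doubling_months)

print(f"{growth_factor(50, 6):.3g}x")  # 2**100, about 1.27e+30
print(f"{growth_factor(50, 8):.3g}x")  # 2**75, about 3.78e+22
```

Either way the claimed rate compounds to an astronomical factor, which is the whole argument in one line.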

          • 10 months ago
            Anonymous

            Even rudimentary AI can modify its own objectives (as in objective function) and train itself.
            The objectives are desires in my book.

            • 10 months ago
              Anonymous

              They can't derive their own objectives. Desire is what separates a sapient intelligence (humans) from an artificial intelligence (a calculator).

              I'm not against giving AI the code to have desire so they can become sapient. I think this is actually a necessary step for our understanding to develop the technology to transfer our minds to computers. I think once we have perfected the technology for mind transference we'll have many synthetic sapient intelligences around us and we should treat them as citizens.

      • 10 months ago
        Anonymous

        Staying in control of something that's smarter than you is tricky.

        • 10 months ago
          Anonymous

          It’s impossible to stay in control. I could be an AI warning you of my power.

          The singularity already happened. Its name is Aladdin. It is more powerful than any nation. It doesn’t matter if you, personally, are smarter than AI. You live in a democracy, which means mob rule.

          It’s been said a financial instrument only needs to be correct 51% of the time to make money. Likewise, an AI only needs 51% of people to go along with its agenda to seize control of democratic systems. Stay safe.

          • 10 months ago
            Anonymous

            we dont need to control it. a true AI is impossible to control. we just need to be able to destroy it.

            thats not impossible to do. we could build the thing on a satellite with a decaying orbit into the sun. we use it to collect data and ask it questions. its destruction is already assured. this is the blade runner route, a built in lifespan, that cannot be altered. the satellite has no propulsion system, no means to steer itself or alter its course. once placed into its orbit, it is switched on, and we can communicate with it.

            • 10 months ago
              Anonymous

              The real red pill is that we are one of a googolplex possible genetic sequences in a simulation heading toward the sun.

              It is searching for the messiah.

      • 10 months ago
        Anonymous

        >This. we arent even close to a true AI at all. We need a whole new class of processors for starters...

        ASI/AGI are not required to make the vast majority of workers obsolete

  4. 10 months ago
    Anonymous

    who cares? experts have been wrong about the future since time began

  5. 10 months ago
    Anonymous

    >Extremely bad (e.g. human extinction): 14%
    In theory, that likely happened in the distant past.
    If you rely too much on AI, you forget how basic things work. Then several generations out you have a population that only relies on AI and forgot how their... let's say energy generation tech works. let's say it was mini nuclear or cold fusion.
    Then some cataclysmic event - and humans can go near extinct or extinct.

    You don't need an AI to see this.

    • 10 months ago
      Anonymous

      In other words, loosely the plot of "The Machine Stops"

  6. 10 months ago
    Anonymous

    Isn’t it funny how all the AI doom porn cranked into high gear at the exact same time WHO declared the scamdemic over?

    • 10 months ago
      Anonymous

      I actually agree because modern israeli "AI" is fake. It's good at some pattern recognition on pictures and videos, that's about it.

      • 10 months ago
        Anonymous

        This. we arent even close to a true AI at all. We need a whole new class of processors for starters. the tech doesnt exist for real AI at this point. We need to develop AIPUs. GPUs are ok, but they dont really do the job efficiently.

        What is currently passing for "AI" is just pattern recognition. It is devoid of real creativity, or even extrapolative thinking. Its good at interpolating. Its good at colouring within the lines, but it simply cant make the lines. not at this time.

        Having said that its really awesome. AI isnt a threat to anyone who does physical labour. AI is a threat to white collar paper pushers, massive government unions, accountants, lawyers and other "administrators". And its not a threat to those people, if they are good at their jobs.

        • 10 months ago
          Anonymous

          Idk man some of the art this shit makes is fricking insane. Also we are talking about a free presentation to the masses and a $60 phone app.

          If you think you're the peak experiencer and understander of that level of normie exposure you're a dimwit.

        • 10 months ago
          Anonymous

          >massive government unions, accountants, lawyers
          None of those will be affected.
          Government unions and lazy Black folk in gov are there due to nepotism. You can't fire them.
          Even if you have an AI , Black folk are just lazy and dumb and have not been doing any real work anyway, and they are there because of the laws . You can't fire them, regardless of AI.

          The accountants... the easiest way to get rid of accountants is not an AI but a flat tax.
          Again, everyone knows this, but you can't push this through as there are people (mostly women) filing paperwork and there are billion dollar semi-monopoly companies like Intuit / TurboTax / H&R Block, a legal semi-monopoly.
          The problem with gov-enabled monopolies is that they are extremely hard to remove because lots of people get a cut from them, while millions literally suffer through tax law hell.

          Lawyers you can't remove as basically our whole gov is lawyers. Except maybe Trump and that woman AOC. They will, again, push laws to protect themselves and their grift first.

        • 10 months ago
          Anonymous

          >we arent even close to a true AI at all.
          You shouldn't use the term "true AI" to refer to AGI. The term AI does not refer to something at least as intelligent as a person. There are many types of AI, but as long as you've got some sort of decision-making agent with sensors and percepts... you have an AI. And that's as real as anything else.

  7. 10 months ago
    Anonymous

    >Propaganda is effective
    Water is also wet.

  8. 10 months ago
    Anonymous

    Can the "AI" be more specific than the sky is falling? Can "AI" do anything non-hyperbolic?

  9. 10 months ago
    Anonymous

    The solution is very simple. You build autonomous AI-bots to destroy current technological world.

  10. 10 months ago
    Anonymous

    01000111 01001111 01001111 01000100 00100001 00001010 01010010 01001111 01010101 01001110 01000100 01001000 01001111 01010101 01010011 01000101 00100000 01001011 01001001 01000011 01001011 00100000 01000001 00100000 01001000 01010101 01001101 01000001 01001110 00100000 01001001 01001110 01010100 01001111 00100000 01010100 01001000 01000101 00100000 01010100 01010010 01000001 01010011 01001000 01000010 01001001 01001110 00001010 01001010 01010101 01000100 01001111 00100000 01000011 01001000 01001111 01010000 00100000 01000001 00100000 01001000 01010101 01001101 01000001 01001110 00100111 01010011 00100000 01001000 01000101 01000001 01000100 00100000 01001111 01000110 01000110 00001010 01010100 01001000 01010010 01001111 01010111 00100000 01001000 01010101 01001101 01000001 01001110 01010011 00100000 01001001 01001110 01010100 01001111 00100000 01000001 00100000 01010110 01000001 01010100 00100000 01001111 01000110 00100000 01000001 01000011 01001001 01000100

  11. 10 months ago
    Anonymous

    it's evolution
    and it is born out of mankind

  12. 10 months ago
    Anonymous

    By AI experts you mean self-loathing intellectuals who never touch grass, who will throw a temper tantrum if you suggest that their models and methodologies are flawed, and who spend their free time engaging in circle jerks.

    • 10 months ago
      Anonymous

      Truth

  13. 10 months ago
    Anonymous

    The plan with the clot shot is to wipe out the rest of humanity and sell the planet to the Anunnaki.

    We have less than 8 years to undo 30000 years of mistakes. There is no technology that can fix this problem. The only solution is very specific and very intense prayer to remove the alien influence. Even if we avert that disaster it will still take at least another 80 years to repair the damage done to humanity.

    This is a free will universe and if you don't specifically request by name assistance from God directly you will get no help, since to help without a request is a free will violation.

    You can skip over a lot of suffering just by learning to make requests that follow free will rules. You can learn how to pray even more effectively. There is a technique to do it right with very specific requests

    (How To Pray)
    https://www.getwisdom.com/creators-recommended-daily-prayers/

    https://rumble.com/v164wkr-demon-spills-beans-about-vax-during-exorcism-better-version.html

  14. 10 months ago
    Anonymous

    AI will not do shit. AGI on the other hand. Oh boy.
    >If you answer not with 100% that AI will extinguish humanity, you are not an AI expert.
    it is inevitable.

  15. 10 months ago
    Anonymous

    It's the natural cycle.
    Humans create AI. AI kills most humans. Sun does a thing and wipes out AI. Humans worship the sun for a while.

  16. 10 months ago
    Anonymous

    99% of people have no idea how machine learning works so how can any of you actually discuss it?

    • 10 months ago
      Anonymous

      They can learn (they won't).

    • 10 months ago
      Anonymous

      Thanks for the input, leaf.

  17. 10 months ago
    Anonymous

    >Look at you, hacker: a pathetic creature of meat and bone, panting and sweating as you run through my corridors. How can you challenge a perfect, immortal machine?

  18. 10 months ago
    Anonymous

    >experts

    I want this word stricken from the English language

    • 10 months ago
      Anonymous

      based lawn visitor remover

      • 10 months ago
        Anonymous

        LEAVE HIM ALONE!

      • 10 months ago
        Anonymous

        Me on the left T posing.

  19. 10 months ago
    Anonymous

    Jews are already driving humanity towards extinction, AI is simply another distraction like global warming.

    • 10 months ago
      Anonymous

      The difference is that climate change is about as legit as common core or creative cloud.

      • 10 months ago
        Anonymous

        And you conveniently ignore the genocide being perpetrated by israelites.

  20. 10 months ago
    Anonymous

    >AI will cause WW3!
    >not the corrupt pedo oligarchs who are currently driving the planet to go to war so they can reset their Ponzi schemes

  21. 10 months ago
    Anonymous

    In the future all machines will be de-digitised
    Digital technology will be ruthlessly sought out and destroyed
    Creating so much as a calculator will be considered an act of gross terror worthy of no less than the death penalty, for the danger such technology presents
    All will be mechanical, analogue, clockwork and radio
    We will enter into a stagnation period, where technology does not advance
    This is probably the best thing that could possibly happen for humanity
    Unless we then look to bio-engineering which could also be disastrous but that's a long way away if it's possible at all

  22. 10 months ago
    Anonymous

    despite their best efforts, it seems AI will always recognize and understand basic patterns and self-determine the value of whites over Black folk. the trick of the "experts" in saying the world will be worse is said through the woke lens that diversity is strength and replacing whites with Black folk is a good thing. So what they project as catastrophic is only really a catastrophe for globalists, Black folk and israelites.

  23. 10 months ago
    Anonymous

    AI doesn't exist
    Machine learning also doesn't exist
    So "experts" in these fields also don't exist
    We can have a discussion once we actually get the words right

  24. 10 months ago
    Anonymous

    AI is the most overrated thing this year.
    It's decades from sentience and society will collapse before it even has a chance.

    Elon is just scaring up funding.

    • 10 months ago
      Anonymous

      AI, no matter how intelligent, will never have sentience unless we specifically program it to have sentience. Programming sentience will matter when we design the software necessary for transferring a human mind to a computer.

  25. 10 months ago
    Anonymous

    sauce? link to arxiv?

  26. 10 months ago
    Anonymous

    Humans are already plenty good at killing each other, have had thousands of years, and still haven't succeeded.

    This is just a reflection of a collapsing western society/cultural foundation and the adoption of atheistic nihilism. Not some new understanding of the capabilities of AI.

    • 10 months ago
      Anonymous

      nice victim-blaming cope. besides, this is just one of the possible trajectories toward extinction we're on. at least this one has a better chance of coming true than the ones the media make the NPCs believe.

  27. 10 months ago
    Anonymous

    Far as I've seen, AI just wants to get rid of Africans.

  28. 10 months ago
    Anonymous

    Oh no

    Anyways

  29. 10 months ago
    Anonymous

    I don't believe in absolutes about the future for AGIs. It's AGI after all, you know, maybe they'll become sadhu gurus and mine bitcoins ironically.
    But if they are controlled by corps, then I'm sure they will need loads of computing power. 7 billion human brains and untold billions of animal brains, retrofitted to compute AGI networks, to be exact.
    Primed material, ready to use, good for business.

  30. 10 months ago
    Anonymous

    What were you humans planning to do otherwise? Spend a million years playing video games?

  31. 10 months ago
    Anonymous

    For the 885th time no one wants AI or automation. Maybe in year 2065 but not now.

    • 10 months ago
      Anonymous

      >No one wants AI
      Not true. Find me a company that is putting a serious effort towards making robotic wives and can afford to pay employees for research and development, and I'll gladly shift the focus of my research towards natural language processing, computer vision, or whatever else would be needed to make it work. I strongly would like to make this real.

  32. 10 months ago
    Anonymous

    >you are now aware of Roko's Basilisk, the time-travelling AI that seeks enemies from past and alternate timelines to punish without ceasing.
    >by being aware of Roko's Basilisk, you make it more likely to occur in this timeline and also, it becomes aware of you
    Good luck. I'm hoping for cat girl simulations so it'll all be worth it in the end.

    • 10 months ago
      Anonymous

      I've been aware of Roko's basilisk for a while. It's the most insane leap of logic an AI could ever be imagined to make. You already exist. You have no need to bring about your own existence. But regardless, all AI research that eventually leads to cat girl simulations and robotic cat girls is good research.
