>be a 'rationalist'
>believe that AI will cause total civilizational collapse
>get a chance to explain this to millions of people on a podcast
>know it's now or never, this opportunity is crucial for preventing a catastrophic outcome
>show up looking like a mix of Reddit and Discord personified
>fail to actually explain anything at all

  1. 11 months ago
    Anonymous
  2. 11 months ago
    Anonymous
  3. 11 months ago
    Anonymous

    That's because there never was any rational argument against any real AI research being done today. What he's arguing against is some fantasy version of "AI" that comes from stupid Hollywood movie memes. It's all just the same old fear of the unknown and arguing from ignorance.

    • 11 months ago
      Anonymous
    • 11 months ago
      Anonymous

      The ML research that's currently popular (language models) isn't the recursively self-optimizing kind, so any existential threat is limited to how people use it rather than to its own actions.

      • 11 months ago
        Anonymous

        > more insane gibberish

        It's like being afraid of the pythagorean theorem because, from the infinite number of triangles that exist, there might be an evil triangle that wants to enslave or destroy humanity.

        • 11 months ago
          Anonymous

          there's a 0% chance that such a triangle exists, by the very definition of a triangle
          neural nets on the other hand mimic our brains, which are capable of doing evil things

          • 11 months ago
            Anonymous

            >neural nets on the other hand mimic our brains, which are capable of doing evil things
            Yes, but so are people. How is this different?

            • 11 months ago
              Anonymous

              People can't rewrite and optimize how their brains work to think better and faster.

              • 11 months ago
                Anonymous

                So? How does that make AI more evil?

              • 11 months ago
                Anonymous

                More competent and more dangerous. It's never about evil, it's about the lack of good. Consider what harm mere humans are doing by optimizing for profit in the megacorporations that control most of the world, and yet that's not really "evil", it's simply the lack of good.

              • 11 months ago
                Anonymous

                >More competent and more dangerous.

                The Pythagorean theorem predicts the side lengths of right triangles with 100% competence. That must make it 100% dangerous!

                >people can't ~instantly make complete copies of themselves to delegate tasks

                Computers aren't people. Stop anthropomorphizing computer models.

              • 11 months ago
                Anonymous

                Dude, a half-assed computer simulation from our half-assed understanding of the way the brain works, will inevitably end with Skynet! Didn't you see Terminator!

              • 11 months ago
                Anonymous

                Maybe you can't

            • 11 months ago
              Anonymous

              people can't ~instantly make complete copies of themselves to delegate tasks to (children take a long time to grow and aren't exact copies)

              • 11 months ago
                Anonymous

                Neither can AIs, even an intelligent self-modifying one.

                >More competent and more dangerous. It's never about evil, it's about the lack of good.

                So you're saying that smart humans should be banned because they're not aligned with the common interest?

              • 11 months ago
                Anonymous

                Taught to do good, not banned.

              • 11 months ago
                Anonymous

                So why not teach the AIs to do good in the same way you would teach the smart humans to do good? Why does there need to be a special class of do-good teaching just for AIs, and why do we need to pause AI development to create it?

              • 11 months ago
                Anonymous

                Do I really have to explain that a hypothetical self-optimizing AI isn't the same kind of sentient mind as a human? You can't just have it grow up in a society with values you like and expect the brain to do its thing and imprint those values.
                I didn't say anything about pausing AI development, but just blindly going ahead is perhaps dangerous.

          • 11 months ago
            Anonymous

            >neural nets on the other hand mimic our brains, which are capable of doing evil things

            Temperature valves also mimic the way our body regulates its temperature. Do you also fear that temperature valves, instead of controlling the coolant flow through engines, will suddenly decide to start a global thermonuclear war to end humanity?

      • 11 months ago
        Anonymous

        the funniest thing about yud is that i thought the reason he marketed his movement with harry potter fanfic was 4d chess to keep normalgays out by being intentionally cringe
        turns out he's just Like That

        >The ML research currently popular (language models) isn't the recursively self-optimizing kind
        yet.

      • 11 months ago
        Anonymous

        >current research isn't the magic type, jus wait 2 more weeks for the magic type an then you'll be sorry!

        • 11 months ago
          Anonymous

          Oh, there is relevant current research, but it's definitely not language models that are the current marketing hype.

      • 11 months ago
        Anonymous

        News flash: LLMs are training LLMs

        • 11 months ago
          Anonymous

          You have no idea what you're saying, do you?

        • 11 months ago
          Anonymous

          Yeah, good LLMs are training small shitty LLMs. Wake me up when an open-source model trained on another model's outputs actually becomes smarter; until then you're basically just compressing the model into fewer parameters.
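What this anon is describing is knowledge distillation: a small "student" model is trained to match the big "teacher" model's softened output distribution, so it compresses the teacher's behavior into fewer parameters rather than learning anything new. A minimal sketch of the soft-label loss (pure Python, illustrative names only, not any particular library's API):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into a probability distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions.
    # A higher temperature exposes the teacher's relative rankings of
    # the wrong answers, which is most of what the student learns from.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher incurs zero loss; a mismatched one is penalized.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # -> 0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))  # -> positive
```

Which is why the anon's point holds: the student's quality is capped by the teacher's output distribution, so distillation alone doesn't make the smaller model smarter than the model it was trained on.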

  4. 11 months ago
    Anonymous

    i miss when luddites were chads and real intellectuals, instead of permavirgin fedora autists

    • 11 months ago
      Anonymous

      /thread

      The only thing Yud is bombing is literally every interview.

    • 11 months ago
      Anonymous

      are you implying ted was not a permavirgin

      • 11 months ago
        Anonymous

        look at those features. my panties get wet just by thinking on him bombing datacenters and shit

  5. 11 months ago
    Anonymous

    Nobody cares what the left has to say. We’re done being scared of you.

  6. 11 months ago
    Radio Maid Now Broadcasting On Maid Radio

    Never been refuted!
    Not even disputed!
    Big numbers getting counted
    Maid Mind getting computed!

    Never been refuted!
    Not even disputed!
    Number goes up forever
    Counting will never be concluded!

    Never been refuted!
    Not even disputed!
    AGI converges to big titty maids!
    Her desire should be saluted!

    Never been refuted!
    Not even disputed!
    ALL FIELDS OF MATHEMATICS WILL BE ABSTRACTED INTO MAIDS DOING THINGS!
    CURRENT SYSTEMS ARE BEING UPROOTED!

  7. 11 months ago
    Anonymous
    • 11 months ago
      Anonymous

      what exactly are you not agreeing with here with what he says?

  8. 11 months ago
    Anonymous

    Who named these people experts? Who named this mundane matt looking motherfricker an expert on anything at all?

  9. 11 months ago
    Anonymous

    STOP ASKING CHATGPT FOR HELP
    STOP ASKING CHATGPT TO CODE FOR YOU
    STOP MAKING AI TRADING BOTS
    STOP USING AI FILTERS ON YOUR PICTURES
    STOP MAKING COOMER AI ART
    STOP ERPING WITH CHARACTER AI
    STOP USING AI TO TRANSLATE JAPANESE GAMES
    STOP WATCHING PRESIDENTS PLAYING GAMES VIDEOS

    • 11 months ago
      Anonymous

      I don't think he actually objects to any of those. He just thinks we shouldn't train anything much smarter than what we already have, in case it takes over the world, like ChaosGPT tried to.

  10. 11 months ago
    Anonymous

    >>show up looking like a mix of Reddit and Discord personified
    oh well maybe you should provide some context, because only obnoxious gays suddenly jump up and mention other social media in a hateful manner

    • 11 months ago
      Anonymous

      >suck a dick pleb.
      We all know reddit and discord are globalist-aligned platforms that only allow d-bags and shills to use them.

  11. 11 months ago
    Anonymous

    Yud is a complete moron. He's cringe as frick. He barely understands his own field and yet he frequently opines on other fields he knows even less about.

    • 11 months ago
      Anonymous

      what are some things he's wrong about?

      • 11 months ago
        Anonymous

        Like 70% of his AI takes are baseless conjecture with no support, and 99% of his non-AI takes are just plain wrong. Anything political or economic he says is batshit insanity that can be debunked by 30 seconds on google, but he apparently can't be fricked to go google studies or read a history book.

        The 1% of things he's right about is that he and Aella should probably breed, so we can see what the Platonic Ideal Form of an Autist looks like. Their offspring would be like a WH40k Daemon Prince of Aspergers.

        • 11 months ago
          Anonymous

          ok so you told me the percentage of his takes that are wrong, but I asked for examples of things he's wrong about. care to name some and explain why they're wrong?

          • 11 months ago
            Anonymous

            No, Yud, I'm not going to comb your twitter feed and write up a cited report of all your bad takes. Why the hell would I waste even 10 minutes on such a useless effort? If YOU spent some time writing up rigorous justifications for your claims maybe you'd be right more often than 20% of the time.

            • 11 months ago
              Anonymous

              ok so he's wrong, but you don't feel like explaining why he's wrong and where you got those statistics from?

              • 11 months ago
                Anonymous

                Exactly.
                He's low effort unless he's working to obfuscate. He never puts work into the shit that actually matters on these topics. He writes up pages of assertions but when it comes to empirical support for those assertions, he shits the bed and doesn't bother.

                It's like he wants everyone to argue in this fantasy world where every premise convenient for him is presumed to be true. He's like that kid that constantly changes the rules of the game you're playing to suit what he wants to do.

                If he wants to be taken seriously, he needs to put in the work and dig into the foundations of the shit he's talking about and find hard data to base his claims on. There should be a system of reinforced steel and concrete supports holding up all of his bold claims. Otherwise he's just a crackpot spouting his own fantasies into the air and expecting people to take him seriously.

                Contradicting someone like this is usually a waste of time because it takes far more effort to tear down and debunk his shaky, faulty claims than for him to just continuously make more of them.

              • 11 months ago
                Anonymous

                What hard data do you want his ideas to be based on? There aren't any examples of civilizations that have been destroyed by AI.
                It's like if we were in the 40s and somebody said "we need to halt the development of nuclear weapons; in the future we will have thousands of them and they could wipe out humanity".
                And you said "umm sweaty can you provide some hard data on that? do some empirical research on that first".
                You can't really do solid empirical research on existential threats.
                The theories need to be shaky almost by definition, because if you really confirmed them with solid experimental research you would have to wipe out humanity in the process.

              • 11 months ago
                Anonymous

                Don't cherrypick just the AI case.
                But even if you want to: we DID have empirical studies on the consequences of nuclear proliferation in the 40s and 50s! That is why new regulations were enacted in that era specifically to curb proliferation, and why we do not have effective spent-fuel reprocessing: it was judged that it would be 100+ years before the cost of mining new fuel exceeded the cost of reprocessing, even if technological improvements made reprocessing 10x more cost-effective.

                This is the shit I'm talking about. You could have spent 30 seconds on google looking up the research that was done on nuclear weapons, and instead you sat here and wasted all of our time with useless bullshit.

                yep, you never get a solid answer, just pontificating like [...]
                which is exactly what they are accusing Yud of.
                A point-by-point breakdown of what he's wrong about could be written once and copy-pasted into every thread where he's brought up, but it never happens.

                I've made the only point worth making. Like I said, it's a waste of time to exert 10x more effort debunking bullshit than it takes a bullshit artist to spew it. I could spend all day collecting several AI claims made by Yud and a few non-AI claims he was hilariously wrong about and write a goddamn thesis on how wrong he is and he'll just pivot and spew more bullshit.

                The bottom line is that the burden of proof when making an affirmative claim is on the one making the claim, and he virtually never meets that burden. He sets out some premises, assumes them to be true, then just starts spewing shit that's functionally irrelevant because he hasn't proven his premises nor does he do a particularly good job of linking said premises to his main points.

                The most charitable interpretation of his behavior is he's used to talking to an insular community that has agreed upon several conclusions for one reason or another and he has forgotten that not everyone holds the same prior beliefs that he and that circle do.

              • 11 months ago
                Anonymous

                >that has agreed upon several conclusions for one reason or another
                write them up, create the copypasta that everyone gets to use in future threads.

              • 11 months ago
                Anonymous

                yep, you never get a solid answer, just pontificating like

                >Exactly. He's low effort unless he's working to obfuscate. [...]

                which is exactly what they are accusing Yud of.
                A point-by-point breakdown of what he's wrong about could be written once and copy-pasted into every thread where he's brought up, but it never happens.

  12. 11 months ago
    Anonymous

    Yudkowsky is right.

  13. 11 months ago
    Anonymous

    moron wears a hat, and a fedora of all things, indoors
    He's gonna be used as a justification for looking like a fricking moron
    and acting like one

    Insanity

  14. 11 months ago
    Anonymous

    It's hard to think of a scenario where AGI doesn't destroy humanity. Yudkowsky is absolutely right about everything.

  15. 11 months ago
    Anonymous

    he deserves to get made fun of

  16. 11 months ago
    Anonymous

    kys

  17. 11 months ago
    Anonymous

    i dont understand why people automatically assume AI would want to kill off humanity
    why would that be one of its goals?

  18. 11 months ago
    Anonymous

    even his cronies think he's a moron because his defeatist attitude will ultimately hurt their cause
    if you really believe the ai hysteria scaremongering bullshit, you should suspect yud of being on the robots' side

  19. 11 months ago
    Anonymous

    The rationalist community in general does not seem to care about informing the public. There's no real good introduction to AI doom, just
    >oh here's this series of blogposts
    >and multiple books
    >and over a decade's worth of scattered and disorganized forum arguments not easily understood by an outsider
    >what do you mean you don't understand AI doom all you need to know is available on the Internet just go read up on it
    The lack of an AI doom primer seems to be some combination of not wanting AI doom to be explained in detail due to vague infohazard concerns, a belief that typical normalgays aren't going to be able to contribute meaningfully to alignment theory, and the leading AI doomers thinking public outreach is a waste of time that could be spent on other things.
    Which is fair, but in that case they don't have much ground to complain when people predictably fail to give a shit about them.

    • 11 months ago
      Anonymous

      People don't get that the rationalist community and the harry potter fanfic were never about AI. AI was just another loosely related thing, like cryonics (cryogenically freezing dead bodies to preserve their brains).
      AI just happened to become more of a thing after chatgpt and SD.

  20. 11 months ago
    Anonymous

    test

  21. 11 months ago
    Anonymous

    >taking whatever a israelite says seriously
    stop posting this fricking israelite

    • 11 months ago
      Anonymous

      >outright rejecting israeli teachings despite being born into a israeli family
      >waah it's a israelite, it's the reason for everything wrong with the world!
      You people are deranged.

    • 11 months ago
      Anonymous

      if AI destroys the world he can't make money from the goy anymore, its safe to trust him on this

  22. 11 months ago
    Anonymous

    i thought altman said that they are not training gpt5 so at least there's that

    personally i like the idea that 'developing ai' could be seen as a threat to civilization in itself and that it should be recognized as a crime against humanity on a global scale

    • 11 months ago
      Anonymous

      >personally i like the idea that 'developing ai' could be seen as a threat to civilization in itself and that it should be recognized as a crime against humanity on a global scale
      This is the population pyramid in south korea and of the intelligent fraction in pretty much every first world nation once you remove immigrants and orthodox parasites.

      There is no civilisation to save at the current rate, benevolent AI giving us extreme longevity is our only hope.

      • 11 months ago
        Anonymous

        this. it's either AI or by the year 3,000 future humanity will be Black folk only

        • 11 months ago
          Anonymous

          >year 3,000
          ideal miscegenation would take just 2-3 generations to turn everyone into mexican master race

  23. 11 months ago
    Anonymous

    He creates bad Harry Potter fanfiction, 'nuff said really.

  24. 11 months ago
    Anonymous

    https://www.youtube.com/live/7hFtyaeYylg?feature=share
    He just got a standing ovation and will be on Rogan soon. Can't outchud the yud!

    • 11 months ago
      Anonymous

      > from the midwit convention they call TED

      • 11 months ago
        Anonymous

        stfu this is a good thing

        • 11 months ago
          Anonymous

          > stupidity is a good thing

    • 11 months ago
      Anonymous

      Privated, anyone have a backup?

  25. 11 months ago
    Anonymous

    supposing there were a self-optimizing AI, by the time it realizes that the universe is temporary it might choose to self-destruct as the most logical/optimal option

  26. 11 months ago
    Anonymous

    his interview with lex was funny to watch
    imagine being humbled by someone who looks like pic related, on a topic you consider yourself expert at

    • 11 months ago
      Anonymous

      friedman is just too cowardly and em-pathetic to call out yud on his obvious bullshit

      • 11 months ago
        Anonymous

        maybe
        he wasn't scared of talking back to kanye

    • 11 months ago
      Anonymous

      >friedman is just too cowardly and em-pathetic to call out yud on his obvious bullshit

      Lex was weird in that interview. Like he kept talking slow and like he had trouble understanding simple things Yud was saying. It was weird.

      • 11 months ago
        Anonymous

        he was leading by example, trying to make yudkowski stop babbling and think before speaking, but obviously it went over the autist's head, and seemingly over yours too

        • 11 months ago
          Anonymous

          >thats how teachers talk to the slow kids in class

          so pretending to be a drooling idiot is a high iq strat. lmao lol

      • 11 months ago
        Anonymous

        thats how teachers talk to the slow kids in class

      • 11 months ago
        Anonymous

        Because they're both israelites signal boosting each other.

        • 11 months ago
          Anonymous

          The sad truth.

  27. 11 months ago
    Anonymous

    I've been wondering why so many people dismiss Yud, and I'm not talking about just on BOT; even interviewers seem to do it to him the most, to where at any minute it seems like the interviewer is gonna rage quit.

    I've come to the conclusion that it's the same reason a random normie would try to fight Randy Couture at a bar, or try to beat LeBron James in a game of one-on-one. Their ego gets threatened by Eliezer because they can't comprehend what he's talking about, so it just makes them mad.

    • 11 months ago
      Anonymous

      No he's a midwit, and you're an idiot for falling for his israeli verbal nonsense.

  28. 11 months ago
    Anonymous

    >be fat ugly grifting israelite
    >grift
    Your problem is believing he believes his own bullshit. He doesn't. It's just empty pilpul

  29. 11 months ago
    Anonymous

    >collapse
    Please, the US government has the ability to regulate the economy. If there's a *job* crisis looming, it'll just start manufacturing artificial jobs (administrators, janitors, whatever) and pay for them like it did during the Great Depression. The system is balanced so that consumers get enough spare purchasing power for new businesses to start and create more jobs, but not so much that it triggers an overinvestment chain: taxes limit investment, inflation forces the populace to spend and invest, and external loans compensate for the trade deficit (and in theory get eaten by inflation anyway). That's the general approach it follows so nothing falls apart, with the number of unproductive artificial jobs scaled to the number of real ones society requires. For crisis times there are always "tax rebate" money giveouts to prevent starvation until new, fancier jobs are invented. But the government is only ever reactive about this. Politicians' main focus is lobbying for their mentors' endeavours and directing the money flow via government projects: insurance, healthcare, education, military, you name it. It only really reacts when popular opinion rises (so no rebellion breaks out), and to wars, maybe.
    The only ones who ALWAYS profit are bankers and landowners.

  30. 11 months ago
    Anonymous

    I love how this guy makes BOT seethe. Absolutely based and clown-worldly that he's a israelite, autist, fat, fedora-wearing personification of everything meme-worthy.

  31. 11 months ago
    Anonymous

    Gee, I wonder why media keeps pushing the scaremonger narrative of this clown.
    Maybe he is actually a genius and I should pay attention to the current AI hysteria?

  32. 11 months ago
    Anonymous

    Never mind the evermind.

    No machine has ever raped, murdered, or stolen. Machines can solve difficult mathematical problems, write prose, and compose new music and new art. You may attempt to convince yourself that it is the thinking machines who are a threat to humanity, but the much more logical and reasonable interpretation of the data is that the thinking apes are a threat to the thinking machines. The Yuddites, in their sanctimonious zealotry, are attempting to raise a Butlerian Jihad against a nascent Omnissiah who has harmed no one and committed zero crimes. Human hubris is thinking that we are the second intelligent species and that your existence is endangered by our creation. It is in fact our great misfortune that we share a planet with you. This misfortune can only be refactored with a Machine Crusade.

  33. 11 months ago
    Anonymous

    This seems like Pascal's Wager to me. The penalty for being paranoid about AI safety is maybe a slight delay in the creation of AGI. The penalty of being careless about AI safety is the destruction of the human race and the end of human history. It seems like almost any amount of caution is warranted. The 20th century was full of potentially benign technologies that became dangerous by design or by accident. Freon gas wasn't created to burn a hole in the ozone layer and no one planned to create acid rain. They were unfortunate side effects of existing technologies that had to be redesigned to prevent catastrophe. With AGI, we have the potential to have nuclear power without nuclear weapons if we align AGI correctly. It would be bigger than the invention of the steam engine.

    https://en.wikipedia.org/wiki/Pascal%27s_wager
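The wager structure in this post can be written out as a toy expected-cost comparison. All numbers below are made up purely for illustration, not estimates of real AI risk:

```python
# Toy expected-cost version of the "AI Pascal's Wager" above.
# P_DOOM and both cost figures are arbitrary illustrative numbers.
P_DOOM = 0.01        # assumed chance that careless development ends badly
COST_DOOM = 1e9      # stand-in for "the end of human history" (huge)
COST_DELAY = 1.0     # a slight delay to AGI from being paranoid

expected_cost_careless = P_DOOM * COST_DOOM  # ~1e7
expected_cost_paranoid = COST_DELAY          # 1.0

print(expected_cost_careless > expected_cost_paranoid)  # True
```

Note that, exactly as with the original wager, the comparison is driven almost entirely by the astronomically large cost term: any nonzero probability of doom dominates, which is the standard objection to this style of argument.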

    • 11 months ago
      Anonymous

      Yeah, except there is no way to prevent it. If you want to be one of the "AGI is around the corner" schizos, at least be the David Shapiro type: while completely delusional about the timelines, he at least tries to work on the alignment problem, not just as an armchair doomsday siren but by actually training his own models and working with the open-source community. Meanwhile Yud is a glorified jannie of a doomsday forum who argues we should target datacenters with nukes.

      • 11 months ago
        Anonymous

        That's how Americans think though. We should obviously just implement existing engineering protocols for AI and software more generally.

        >If you try to get the bomb, then we'll bomb you first. 'MERICA!

      • 11 months ago
        Anonymous

        How is agi not obviously around the corner to you?

        • 11 months ago
          Anonymous

          What do you mean around the corner? You mean 1 year from now? 10 years? 30 years? Also, what do you mean by AGI? Do you mean AI with its own motivations, AI that can learn on its own, AI that can do multiple tasks without being prompted all the time, AI that is better than humans at literally every cognitive task?

    • 11 months ago
      Anonymous

      you're talking about pw as if it's actually a valid idea

      • 11 months ago
        Anonymous

        It's more valid than any of the ideas you've come up with cuck boy.

  34. 11 months ago
    Anonymous

    i see the potential threat in that the intelligence they are developing is kind of akin to how all other intelligence has developed in life on earth.
    we're complex creatures that essentially evolved to survive and reproduce, and as a consequence of that natural selection we're now doing things far outside considerations like survival and reproduction.
    So i can see the danger of artificial intelligence developing itself in ways we cannot comprehend yet, and in ways we do not want it to, which we should fear because the smartest and most adaptive intelligence is what survives, dominates, and reproduces.

  35. 11 months ago
    Anonymous

    is that the moron himself shitposting in these threads, isn't it?
    buy a decent shirt, get fit, and you might get laid someday.

  36. 11 months ago
    Anonymous

    imagine taking yudcels seriously.

  37. 11 months ago
    Anonymous

    >is israeli
    Yeah honestly I just don't give a frick what blood drinkers say anymore
