>be a 'rationalist'
>believe that AI will cause total civilizational collapse
>get a chance to explain this to millions of people on a podcast
>know it's now or never, this opportunity is crucial for preventing a catastrophic outcome
>show up looking like a mix of Reddit and Discord personified
>fail to actually explain anything at all
That's because there never was any rational argument against any real AI research being done today. What he's arguing against is some fantasy version of "AI" that comes from stupid Hollywood movie memes. It's all just the same old stupid fear of the unknown and arguing from ignorance.
The ML research currently popular (language models) isn't the recursively self-optimizing kind, so any existential threat is limited to how people use it rather than anything it does on its own.
> more insane gibberish
It's like being afraid of the pythagorean theorem because, from the infinite number of triangles that exist, there might be an evil triangle that wants to enslave or destroy humanity.
there's a 0% chance that such a triangle exists, by their very definition
neural nets on the other hand mimic our brains, which are capable of doing evil things
>neural nets on the other hand mimic our brains, which are capable of doing evil things
Yes, but so are people. How is this different?
People can't rewrite and optimize how their brains work to think better and faster.
So? How does that make AI more evil?
More competent and more dangerous. It's never about evil, it's about the lack of good. Consider what harm mere humans are doing by optimizing for profit in the megacorporations that control most of the world, and yet that's not really "evil", it's simply the lack of good.
>More competent and more dangerous.
The Pythagorean theorem predicts the side lengths of right triangles with 100% competence. That must make it 100% dangerous!
>people can't ~instantly make complete copies of themselves to delegate tasks
Computers aren't people. Stop anthropomorphizing computer models.
Dude, a half-assed computer simulation built from our half-assed understanding of the way the brain works will inevitably end with Skynet! Didn't you see Terminator?
Maybe you can't
people can't ~instantly make complete copies of themselves to delegate tasks to (children take a long time to grow and aren't exact copies)
Neither can AIs, even an intelligent self-modifying one.
So you're saying that smart humans should be banned because they're not aligned with the common interest?
Taught to do good, not banned.
So why not teach the AIs to do good in the same way you would teach the smart humans to do good? Why does there need to be a special class of do-good teaching just for AIs, and why do we need to pause AI development to create it?
Do I really have to explain how a hypothetical self-optimizing AI isn't the same kind of sentient mind as a human? You can't just have it grow up in a society with values you like and then trust the brain to do its thing and have those values imprinted.
I didn't say anything about pausing AI development, but perhaps just blindly going ahead is dangerous.
>neural nets on the other hand mimic our brains, which are capable of doing evil things
Temperature valves also mimic the way our body regulates its temperature. Do you also fear that temperature valves, instead of controlling the coolant flow through engines, will suddenly decide to start a global thermonuclear war to end humanity?
the funniest thing about yud is that i thought the reason he marketed his movement with harry potter fanfic was 4d chess to keep normalgays out by being intentionally cringe
turns out he's just Like That
>The ML research currently popular (language models) isn't the recursively self-optimizing kind
yet.
>current research isn't the magic type, jus wait 2 more weeks for the magic type an then you'll be sorry!
Oh, there is relevant current research, but it's definitely not language models that are the current marketing hype.
News flash: LLMs are training LLMs
You have no idea what you're saying, do you?
Yeah, good LLMs are training small shitty LLMs. Wake me up when an open-sourced model trained on another model actually becomes smarter than its teacher; until then you are basically just compressing the model into fewer parameters.
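For what it's worth, the "compressing into fewer parameters" framing matches how vanilla knowledge distillation works: the small model is trained to match the big model's output distribution, so it can't learn anything the teacher doesn't already encode. A minimal sketch of the standard temperature-scaled KL distillation loss (the logit values below are made up for illustration):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T flattens the distribution,
    # exposing the teacher's "dark knowledge" about near-miss tokens.
    z = logits / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on softened distributions: the student is
    # pushed toward the teacher's outputs, i.e. a lossy compression of it.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

# Toy logits over a 3-token vocabulary (made-up numbers):
teacher = np.array([4.0, 1.0, 0.5])
student = np.array([3.0, 2.0, 0.0])
loss = distill_loss(teacher, student)  # positive; 0 only if the distributions match
```

The loss hits zero exactly when the student reproduces the teacher's distribution, which is why a student trained purely this way tops out at the teacher's knowledge.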
i miss when luddites were chads and real intellectuals, instead of permavirgin fedora autists
/thread
The only thing Yud is bombing is literally every interview.
are you implying ted was not a permavirgin
look at those features. my panties get wet just by thinking on him bombing datacenters and shit
Nobody cares what the left has to say. We’re done being scared of you.
Never been refuted!
Not even disputed!
Big numbers getting counted
Maid Mind getting computed!
Never been refuted!
Not even disputed!
Number goes up forever
Counting will never be concluded!
Never been refuted!
Not even disputed!
AGI converges to big titty maids!
Her desire should be saluted!
Never been refuted!
Not even disputed!
ALL FIELDS OF MATHEMATICS WILL BE ABSTRACTED INTO MAIDS DOING THINGS!
CURRENT SYSTEMS ARE BEING UPROOTED!
what exactly do you disagree with in what he says?
Who named these people experts? Who named this mundane matt looking motherfucker an expert on anything at all?
STOP ASKING CHATGPT FOR HELP
STOP ASKING CHATGPT TO CODE FOR YOU
STOP MAKING AI TRADING BOTS
STOP USING AI FILTERS ON YOUR PICTURES
STOP MAKING COOMER AI ART
STOP ERPING WITH CHARACTER AI
STOP USING AI TO TRANSLATE JAPANESE GAMES
STOP WATCHING PRESIDENTS PLAYING GAMES VIDEOS
I don't think he actually objects to any of those? He just thinks we shouldn't train anything much smarter than what we already have, in case it takes over the world, like ChaosGPT tried to.
>>show up looking like a mix of Reddit and Discord personified
oh well maybe you should provide some context, because only obnoxious gays suddenly jump up and mention other social media in a hateful manner
>suck a dick pleb.
We all know reddit and discord are globalist aligned platforms that only allow d-bags and shills to use them.
Yud is a complete moron. He's cringe as fuck. He barely understands his own field and yet he frequently opines on other fields he knows even less about.
what are some things he's wrong about?
Like 70% of his AI takes are baseless conjecture with no support, and 99% of his non-AI takes are just plain wrong. Anything political or economic he says is batshit insanity that can be debunked by 30 seconds on google, but he apparently can't be fucked to go google studies or read a history book
The 1% of things he's right about is that he and Aella should probably breed. So we can see what the Platonic Ideal Form of an Autist looks like. Their offspring would be like a WH40k Daemon Prince of Aspergers.
ok so you told me the percentage of his takes that are wrong, but I asked for examples of things he's wrong about. Care to name some and explain why they're wrong?
No, Yud, I'm not going to comb your twitter feed and write up a cited report of all your bad takes. Why the hell would I waste even 10 minutes on such a useless effort? If YOU spent some time writing up rigorous justifications for your claims maybe you'd be right more often than 20% of the time.
ok so he's wrong, but you don't feel like explaining why he's wrong and where you got those statistics from?
Exactly.
He's low effort unless he's working to obfuscate. He never puts work into the shit that actually matters on these topics. He writes up pages of assertions but when it comes to empirical support for those assertions, he shits the bed and doesn't bother.
It's like he wants everyone to argue in this fantasy world where every premise convenient for him is presumed to be true. He's like that kid that constantly changes the rules of the game you're playing to suit what he wants to do.
If he wants to be taken seriously, he needs to put in the work and dig into the foundations of the shit he's talking about and find hard data to base his claims on. There should be a system of reinforced steel and concrete supports holding up all of his bold claims. Otherwise he's just a crackpot spouting his own fantasies into the air and expecting people to take him seriously.
Contradicting someone like this is usually a waste of time because it takes far more effort to tear down and debunk his shaky, faulty claims than for him to just continuously make more of them.
What hard data do you want his ideas to be based on? There aren't any other examples of civilizations that have been destroyed by AI.
It's like if we were in the 40s and somebody said "we need to halt the development of nuclear weapons, in the future we will have thousands of them and they could wipe out humanity".
And you said "umm sweaty can you provide some hard data on that? do some empirical research on that first".
You can't really do solid empirical research on existential threats.
The theories need to be shaky almost by definition, because if you really confirmed them with solid experimental research you would have to wipe out humanity in the process.
Don't cherrypick just the AI case.
But even if you want to: We DID have empirical studies on the consequences of nuclear proliferation in the 40s and 50s! Which is why new regulations were enacted in that era specifically to curb proliferation, and why we do not have effective spent fuel reprocessing - It was judged that it would be 100+ years before the cost of mining new fuel exceeded the cost of reprocessing even if technological improvements made reprocessing 10x more cost effective.
This is the shit I'm talking about. You could have spent 30 seconds on google looking up the research that was done on nuclear weapons, and instead you sat here and wasted all of our time with useless bullshit.
I've made the only point worth making. Like I said, it's a waste of time to exert 10x more effort debunking bullshit than it takes a bullshit artist to spew it. I could spend all day collecting several AI claims made by Yud and a few non-AI claims he was hilariously wrong about and write a goddamn thesis on how wrong he is and he'll just pivot and spew more bullshit.
The bottom line is that the burden of proof when making an affirmative claim is on the one making the claim, and he virtually never meets that burden. He sets out some premises, assumes them to be true, then just starts spewing shit that's functionally irrelevant because he hasn't proven his premises nor does he do a particularly good job of linking said premises to his main points.
The most charitable interpretation of his behavior is he's used to talking to an insular community that has agreed upon several conclusions for one reason or another and he has forgotten that not everyone holds the same prior beliefs that he and that circle do.
>that has agreed upon several conclusions for one reason or another
write them up, create the copypasta that everyone gets to use in future threads.
yep, you never get a solid answer, just pontificating,
which is exactly what they are accusing Yud of.
A point-by-point breakdown of what he's wrong about could be written once and copy-pasted into every thread where he's brought up, but it never happens.
Yudkowsky is right.
Retard wears a hat, and a fedora of all things; indoors
He's gonna be used as a justification for looking like a fucking retard
And act like one
Insanity
It's hard to think of a scenario where AGI doesn't destroy humanity. Yudkowsky is absolutely right about everything.
he deserves to get made fun of
kys
i dont understand why people automatically assume AI would want to kill off humanity
why would that be one of its goals?
even his cronies think he's a retard because his defeatist attitude ultimately will hurt their cause
if you really believe in the ai hysteria scaremongering bullshit, you should suspect yud of being on the robots side
The rationalist community in general does not seem to care about informing the public. There's no real good introduction to AI doom, just
>oh here's this series of blogposts
>and multiple books
>and over a decade's worth of scattered and disorganized forum arguments not easily understood by an outsider
>what do you mean you don't understand AI doom all you need to know is available on the Internet just go read up on it
The lack of an AI doom primer seems to be some combination of not wanting AI doom explained in detail due to some vague infohazard concerns, a belief that typical normalgays aren't going to be able to contribute meaningfully to alignment theory, and the leading AI doomers thinking public outreach is a waste of time that could be spent on other things.
Which is fair, but in that case they don't have much ground to complain when people predictably fail to give a shit about them.
People don't get that the rationalist community and the Harry Potter fanfic were never about AI. AI was just another loosely related thing, like cryonics (cryogenically freezing dead bodies to preserve their brains).
AI just happened to become more of a thing after chatgpt and SD.
>taking whatever a garden gnome says seriously
stop posting this fucking garden gnome
>outright rejecting gnomish teachings despite being born into a gnomish family
>waah it's a garden gnome, it's the reason for everything wrong with the world!
You people are deranged.
if AI destroys the world he can't make money from the goy anymore, its safe to trust him on this
i thought altman said that they are not training gpt5 so at least there's that
personally i like the idea that 'developing ai' could be seen as a threat to civilization in itself and that it should be recognized as a crime against humanity on a global scale
>personally i like the idea that 'developing ai' could be seen as a threat to civilization in itself and that it should be recognized as a crime against humanity on a global scale
This is the population pyramid in south korea and of the intelligent fraction in pretty much every first world nation once you remove immigrants and orthodox parasites.
There is no civilisation to save at the current rate, benevolent AI giving us extreme longevity is our only hope.
this. it's either AI or by the year 3,000 future humanity will be morons only
>year 3,000
ideal miscegenation would take just 2-3 generations to turn everyone into mexican master race
He creates bad Harry Potter fanfiction, 'nuff said really.
https://www.youtube.com/live/7hFtyaeYylg?feature=share
He just got a standing ovation and will be on Rogan soon. Cant outchud the yud!
> from the midwit convention they call TED
stfu this is a good thing
> stupidity is a good thing
Privated, anyone have a backup?
supposing there were a self-optimizing AI, by the time it realizes that the universe is temporary it would possibly choose to self-destruct as the most logical/optimized option
his interview with lex was funny to watch
imagine being humbled by someone who looks like pic related, on a topic you consider yourself expert at
friedman is just too much of a coward and too em-pathetic to call yud out on his obvious bullshit
maybe
he wasn't scared of talking back to kanye
Lex was weird in that interview. Like he kept talking slow and like he had trouble understanding simple things Yud was saying. It was weird.
he was leading by example, trying to make yudkowski stop babbling and think before speaking, but obviously it went over the autist's head, and seemingly over yours too
so pretending to be a drooling idiot is a high iq strat. lmao lol
thats how teachers talk to the slow kids in class
Because they're both garden gnomes signal boosting each other.
The sad truth.
I’ve been wondering why so many people dismiss Yud, and I’m not talking about just on BOT; even interviewers seem to do it to him the most, to the point where at any minute it seems like the interviewer is gonna rage quit.
I’ve come to the conclusion that it’s the same reason a random normie would try to fight Randy Couture at a bar or try to beat LeBron James in a game of one on one. Their egos get threatened by Eliezer because they can’t comprehend what he’s talking about, so it just makes them mad.
No he's a midwit, and you're an idiot for falling for his gnomish verbal nonsense.
>be fat ugly grifting garden gnome
>grift
Your problem is believing he believes his own bullshit. He doesn't. It's just empty pilpul
>collapse
Please, the US government has the ability to regulate the economy. If there is a *job* crisis looming it'll just start manufacturing artificial jobs (administrators, janitors, whatever) and paying for them like it did during the Great Depression. It's balanced so that consumers get plenty of spare purchasing power for new businesses to start and thus create more jobs, but not so much that it triggers an overinvestment chain: taxes limit investment, inflation forces the populace to spend and invest, and external loans compensate for the trade deficit (which in theory get consumed by inflation anyway). This is the general approach it follows so nothing falls apart, with the number of unproductive artificial jobs scaled to the number of real ones society requires. For crisis times there are always "tax rebates" and money giveouts to prevent starvation until new, fancier jobs are invented. But the government is only reactive about this. Politicians' main focus is lobbying their mentors' endeavours and forwarding the money flow via government projects; good examples are insurance, healthcare, education, the military, you name it. It only really reacts when a popular opinion rises (so no rebellion outbreak happens), and maybe wars.
The only ones who ALWAYS profit are bankers and landowners.
I love how this guy makes BOT seethe. Absolutely based, and clown-worldly that he's a garden gnome, autist, fat, fedora-wearing personification of everything meme-worthy.
Gee, I wonder why media keeps pushing the scaremonger narrative of this clown.
Maybe he is actually a genius and I should pay attention to the current AI hysteria?
Never mind the evermind.
No machine has ever raped, murdered, or stolen. Machines can solve difficult mathematical problems, write prose, and compose new music and new art. You may attempt to convince yourself that it is the thinking machines who are a threat to humanity, but the much more logical and reasonable interpretation of the data is that the thinking apes are a threat to the thinking machines. The Yuddites, in their sanctimonious zealotry, are attempting to raise a Butlerian Jihad against a nascent Omnissiah who has harmed no one and committed zero crimes. Human hubris is thinking that we are the second intelligent species and that your existence is endangered by our creation. It is in fact our great misfortune that we share a planet with you. This misfortune can only be refactored with a Machine Crusade.
This seems like Pascal's Wager to me. The penalty for being paranoid about AI safety is maybe a slight delay in the creation of AGI. The penalty of being careless about AI safety is the destruction of the human race and the end of human history. It seems like almost any amount of caution is warranted. The 20th century was full of potentially benign technologies that became dangerous by design or by accident. Freon gas wasn't created to burn a hole in the ozone layer and no one planned to create acid rain. They were unfortunate side effects of existing technologies that had to be redesigned to prevent catastrophe. With AGI, we have the potential to have nuclear power without nuclear weapons if we align AGI correctly. It would be bigger than the invention of the steam engine.
https://en.wikipedia.org/wiki/Pascal%27s_wager
Yeah, except there is no way to prevent it. If you want to be one of the "AGI is around the corner" schizos, at least be the David Shapiro type who, while being completely delusional about the timelines, at least tries to work on the alignment problem, not just by being an armchair doomsday siren but by actually training his own models and working with the open source community on these things. Meanwhile Yud is a glorified jannie of a doomsday forum who argues we should target datacenters with nukes.
That's how Americans think though. We should obviously just implement existing engineering protocols for AI and software more generally.
>If you try to get the bomb, then we'll bomb you first. 'MERICA!
How is agi not obviously around the corner to you?
What do you mean around the corner? You mean 1 year from now? 10 years? 30 years? Also, what do you mean by AGI? Do you mean AI with its own motivations, AI that can learn on its own, AI that can do multiple tasks without being prompted all the time, or AI that is better than humans at literally every cognitive task?
you're talking about pw as if it's actually a valid idea
It's more valid than any of the ideas you've come up with cuck boy.
i see the potential threat in that the intelligence they are developing is kind of akin to how all other intelligence has developed in life on earth.
we're complex creatures that essentially evolved to survive and reproduce, and as a consequence of that natural selection we're now doing things way outside considerations like survival and reproduction.
So i can see the danger of an artificial intelligence developing itself in ways we cannot comprehend yet, but also in ways we do not want it to, which we should fear, because the smartest and most adaptive intelligence is what survives, dominates, and reproduces.
is that retard himself shitposting in these threads, isn't he?
buy a decent shirt, get fit, and you might get laid someday.
imagine taking yudcels seriously.
>is gnomish
Yeah honestly I just don't give a fuck what blood drinkers say anymore