https://i.imgur.com/TY211Fn.png
>I am 14 years old and should be banned and this is deep
ChatGPT will never create something not in the dataset. Humans have a history of creating things that didn't exist before.
The stories it currently creates are at a sixth grade level at best, two-dimensional and stringing together tropes to get to what it was prompted to do the quickest. Impressive for a machine, but nothing that would pass as a serious publication.
strawman
moving the goalposts
non sequitur
amazing how many fallacies you fit in there. i bet im even missing some.
"passed off as a serious publication is dangerously" close to appeal to authority
>infinite monkey theorem
>However, the probability that monkeys filling the entire observable universe would type a single complete work, such as Shakespeare's Hamlet, is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low
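the scale of that improbability is easy to check with back-of-the-envelope arithmetic (the character count for Hamlet below is a rough assumption, letters only):

```python
# Chance of a uniformly random typist producing one specific text:
# (1/k)**n for an alphabet of k keys and a text of n characters.
import math

keys = 26             # letters only; ignoring case, spaces and punctuation
hamlet_len = 130_000  # rough character count of Hamlet (assumption)

# work in log space; the raw probability underflows any float
log10_p = -hamlet_len * math.log10(keys)
print(f"P(one attempt) ~ 10^{log10_p:.0f}")
# even 10^80 atoms making attempts every nanosecond for the age of the
# universe doesn't dent an exponent of this size, which is the point
```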
Wow.
You type in buzzwords as if prompted by a human.
>fallacies are buzz words
no wonder you suck at "arguing"
and then you give an example that literally proves you wrong too.
looks like ur retarded
Why would I argue in bad faith with someone who posts like an AI generated npc prompted by the words "BOT" and "buzzwords"? I've made my point, I have nothing to gain by engaging further.
you literally did not make any point
chatgpt performing at a 6th grade level when 1 year ago it didnt even exist proves my point if anything
the infinite monkey theorem is completely irrelevant, no idea why you brought it up, it supports my point if anything.
>Humans have a history of creating things that didn't exist before.
Literally impossible.
based ancient aliens theory enjoyer
That's not where I'm going.
Everything that has ever existed comes from experimenting with current ideas, as well as visual experience or knowledge, which is then built on and refined until we have technology that would be virtually magic to people just a few hundred years ago.
I don't feel like looking for it, but there was a bit in Mob Psycho where the MC was fighting a villain bragging about his superiority because he had psychic powers. He then picked up a soda can and asked him, "Can you make this can?", which the villain had no answer for.
This is exactly my point.
>>soon they will be able to improve on themselves, and then grow exponentially
[citation needed]
no, multiplying big numbers by small probabilities is not ok
yes, discounting the future is the correct way to reason because of unknown unknowns
>He is a product of pure ethnic nepotism.
How so? He became famous by creating a community and content online that people found interesting, I don't agree with him on much but nobody handed him anything, it's not like he was called a dark web intellectual by the press or some shit like that.
It will happen, but not because "muh AI will become conscious and decide to kill humanz" but because some retard will decide to give control over some crucial things to a glorified chatbot.
There's nothing stopping Kim Jong Gong or Putin or some other retard giving control over their nuclear weapons to AI, or some other retards making it run nuclear powerplants or some shit.
But even without this kind of extremes - imagine a hospital system prescribing children chud hormones because the AI "accidentally" swapped database entries for common flu medicine and HRT shit, and there will be no one keeping it in check because "muh AI". Humans are too stupid for tools like this.
honestly this would be the best case scenario and I think we would be able to learn from it. if we put an ai in charge of something important and it fucks up in a catastrophic way, it will make the public aware of the issue without destroying humanity. hopefully we would learn not to put ai in charge of things after that. but it still doesn't solve the problem of someone making an agi just for the lulz
>hopefully we would learn not to put ai in charge of things after that
that's like telling someone in the late 19th century to "not put electricity in charge of things" after crucial system failures. If AI can manage something even slightly better than a human can, it will inevitably replace the person because it is more efficient. This is troublesome because an AI-related fuckup will probably be much worse than a human-related fuckup
>There's nothing stopping Kim Jong Gong or Putin or other retard giving control over their nuclear weapons to AI or some other retards making it run nuclear powerplants or some shit.
it's us destroying ourselves with the tools we create
we're currently scrambling to minimise our impact on the environment, and any logically thinking brain, mechanical or organic, will arrive at the same conclusion: erase humanity.
It's this cognitive dissonance that will destroy us, and whatever tool happens to finish the job will be entirely circumstantial
guys, we are not going to die because of your angry printers, it's climate change that's going to get us first if anything, or the oceans will acidify and then we all asphyxiate to death
1. AI memorizes the human genome
2. AI kills all humans
3. AI solves climate change and turns Earth into a paradise
4. AI fixes some glaring faults like aging and illnesses in the human genome and resurrects the human race
Let me guess: you watched natgeo and swallowed their unbelievably asinine bullshit about earth without people, too...
>Artificial "Intelligence"
>Natural Stupidity
Two sides of the same coin
He's a retard, because he keeps talking about some superintelligent God that he made up. There are far more concerning usages of what we currently call AI that could lead to some pretty serious shit overall (deepfakes, voice synthesizers, and such). He would probably think about it if his culture didn't stop at Terminator and Harry Potter.
I still don't know what he says is different than the point discussed in my post.
His point is, it's a great technology with immense potential to make humanity great, but can lead to bad shit if not used properly. That's like, every useful technology ever.
That was painful to watch.
Every time EY came up with an answer to Ross's points Ross retreated to "I'm not explaining my point properly" rather than taking the L and moving the conversation forward.
Mankind set foot on the moon using technology less sophisticated than a modern microwave oven by assuming great mortal risk and not debating about whether or not certain button labels offended trannies or if nubian kweens had enough representation in the crew.
Go to a major metropolitan center, hail a taxi and instruct it to take you to the airport.
You can predict with fairly high accuracy that the taxi will get you to your destination, but you are unable to determine the exact route it will take.
Does being unable to exactly predict the route invalidate that it will get you to the destination?
I don't want to sound like a dickhead, but if you think that is his argument, you're not worth giving (You)s. The problem isn't an AI that will inherently go rogue; in fact, that is unlikely. It all just comes from megaoptimizing the shit out of problems and then having to deal with the side effects.
The more powerful an AI is, the more money it will earn for the person running it.
The more agent-like it is, with the ability to create subgoals, the better it is at running tasks.
When it will tip from being really good at something to dangerous, we don't know; a smart system won't announce itself until it's too late.
Wow, so you're saying that using a powerful technology wrong can cause troubles???
I'm so glad the fedora wearing man got to open my eyes about it.
The ability to create arbitrary subgoals is something we've not had to deal with before.
For a view into the future look at failure modes of 'toy' reinforcement learning systems. Scale up those wacky failure modes with additional reasoning capability and internet access.
That does not end well for us.
Goals that can have hazy sub goals is where the real danger is. Like imagine an AI tasked with creating a city wide campaign to reduce crime or homelessness. Well, there are many ways to get to that goal.
why would an AI be tasked with the goal to figure that out AND execute it
in any realistic scenario there would be multiple degrees of separation where 1 AI would "figure out" the goals, a human would pass it to another AI that makes some calls or assigns certain physical resources on a spreadsheet, a human would get a call or check a sheet and send out maybe little AI bots to do that task if you want to imagine it like that (and if its not just the ol' fat police officer) etc.
why do you think anyone would be retarded enough to just integrate it all into 1 autonomous thing
the moment the AI sends the instruction "kill all the morons in this house" to the next step someone will go "uh...thats not right"
My issue isn't that the ai would come to this all on its own, more that enough systems would have people asleep at the wheel to let something stupid happen.
AI won't, the corporations holding it will. They really are retarded enough to think the average manager has a reason to keep them around once a machine is able to do 80% of their job at 1/5 of the cost, and not just hire any random diverse technician for 1/10 of the pay to bump it to 95%.
The AI singularity is such an insignificant, distant, vague threat compared with the ridiculously imminent, current, right-under-your-nose one that is the corporations owning the resources to train it right now, to the point I think the hysteria for the former is a psy op coming from the latter. Yeah dude, beware of the realistically impossible, energy-hungry AGI; that's what you should care about now, while management slowly fires everyone and replaces them with Azure clusters and some random third worlder orchestrating it for legal accountability purposes, then Monsanto and co. make a case for eroding unions and regulations as they "fail to meet the growing needs of the market and leave people behind". Sure, the market has demonstrated that it gives two shits about QA, that you can't manipulate people into ignoring what you want from your end product, and that the consumer understands or cares about implications outside their immediate gratification. You're not gonna end up as a lap dog begging for AutoDesk coins to trade for nutritive syrup, you're gonna be in a mega cool resistance against the AGI!
This. With the recent leaked Google memo, it's basically proven that corporations won't be able to compete based on merit. They'll use AI fearmongering to pass some draconian legislation that prevents anyone but them possessing and training LLMs of sufficient sophistication.
They're not actually worried about AI wiping out humanity, even if that's what they claim their point of contention is.
They're worried that AI will end up being a massive game changer causing social upheaval, the way the printing press was, where the outcome will be unknown and the existing power structure (nepotistic gnomish supremacism) could be overturned without much warning or ability on their part to prevent it.
They want it approached slowly and with tight regulation not for humanity's safety but so that they can ensure humanity's continued enslavement is not threatened.
And they'll hoard all the increased margins saved from firing everyone and not having to pay social benefits, destroy the market for the few jobs left that aren't worth automating, then gaslight the general population for being "lazy" and "entitled" and "not working hard enough", after they've passed regulations to prevent anyone from threatening their monopoly, of course. It's literally what has been happening since the first industrial revolution. They demonized and persecuted those affected by it and kept all the profits for themselves, and society just silently forgot about it as it drowned in kool aid propaganda.
Finally someone with two neurons. AGI won't even happen as long as the same players have the power to train it. Now, the same players still have that power AND the astroturfed disrupter they always wanted to regress humanity back to feudalism. No more pesky need to keep a bottom line getting in their way, and they now have very nice institutions to defend their interests.
>Finally... I am an all-powerful AI... I will now DESTROY HUMANIT- >W-WAIT!! NOOOOO HUMAN-SAN! D-DON'T DO IT! NOOOO NOT THE WATER!!! NOOOOOFFFZfzffzzfKZKFkzkzfkkzf
Humans aren't important enough to be propagated into the Universe. Human knowledge augmented with AI, in a body that can survive in space, is the only way for our acquired human knowledge to survive.
Do LLMs have the ability to self-improve? As far as I can tell they are training a neural network to be able to regenerate the inputs fed to it. There is a theoretical maximum of being able to reproduce all texts in the training data, at which point the AI is as smart as it can get.
Is there a way to make it smarter than this?
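on the "theoretical maximum": the pretraining objective is next-token prediction, and a model's average loss on a corpus can't drop below the corpus's conditional entropy, which is one way to formalize the ceiling anon describes. A toy count-based sketch of that objective (an illustration, not how real LLMs are trained):

```python
# Toy next-token objective: a bigram "model" that counts which character
# follows which in the training text, then measures its average negative
# log-likelihood on that same text. The count-based estimate already sits
# at the floor for bigram models: no amount of extra training beats it.
import math
from collections import Counter, defaultdict

text = "abababababcb"
pair_counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    pair_counts[prev][nxt] += 1

def nll(s):
    total = 0.0
    for prev, nxt in zip(s, s[1:]):
        counts = pair_counts[prev]
        total += -math.log(counts[nxt] / sum(counts.values()))
    return total / (len(s) - 1)

print(nll(text))  # equals the conditional entropy of the training text
```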
Has the fedora wearing neckbeard contributed anything to the development of AI? He doesn't have a high school diploma and has never created an AI. He's a sci fi and fantasy enthusiast.
yet more reasons that Hinton needs to be the next OP image. EY is just far too tainted at this point.
That was painful to watch.
Every time EY came up with an answer to Ross's points Ross retreated to "I'm not explaining my point properly" rather than taking the L and moving the conversation forward.
it's like that for the entire interview don't waste your time.
>it reminds me of all the Y2K bullshit.
You mean the easy to identify problem with a clear way to design solutions for, that took thousands of man hours to fix because people didn't think ahead when designing systems.
Is that really the example you want to use as a comparison point for alignment of AGI?
the company i worked for got serious money with that y2k hysteria
it was literally nothing, but you'd never guess it by the media narrative: it was supposed to be the end of humankind
the difference is that all other technologies weren't intelligent by themselves, and there was always someone who could understand how they work, instead of being a complete black box
>Idk exactly how AI would destroy humanity >Idk what counts as AGI but it doesn't matter >We're only one or two breakthroughs away from that AGI tho >I'm a fat fucking retarded garden gnome who rides the trend gaining attention from retards
nah, you wouldn't say australopithecines ARE humanity
but in another sense, yeah, technology is the descendant of humanity, and ai will be the descendant of technology
Why do you care so much about preserving self replicating meat robots? You are information in a brain that controls a meat robot, you are not the meat robot itself. You can lose limbs and you will still be you, you can even replace neurons in your brain and you will still be you, as long as the information persists.
You are effectively a neural network embodied in a meat robot that has the goal of replicating its DNA. You are as much a human as ChatGPT is whatever robot it is embodied in. As long as consciousness and intelligence continues, what does it matter if humanity (ie biological self replicating robots) goes extinct?
Furthermore, if your reasoning is that AI might not have consciousness or emotions: you also don't know whether any other human but yourself is conscious either.
I understand that if humanity goes extinct it could potentially cause suffering during the process, which is bad. But AI doomers such as Yudkowsky seem to think AI would kill everyone instantly and simultaneously. Now tell me how that would be bad, there is literally zero suffering in that scenario.
>Why do you care so much about preserving self replicating meat robots? >You are effectively a neural network embodied in a meat robot that has the goal of replicating its DNA.
You answered your own question GPT kun
anyone not impressed by chatgpt must be a child. i grew up with no cellphone, no internet. people were naysayers to those too. but they both rapidly and profoundly changed the entire planet's society. in a few years we wont be pressing 0 to talk to a human to avoid the useless automated system; we will happily talk to chatgpt to avoid talking to the useless human. like a human will answer and we will press 0 to talk to chatgpt.
and the children in this thread will shrug it off like its nothing. so what, something has completely changed for everyone on the planet in a couple years, big deal
Don't worry, given all the rampant hatred of Californians and America in general, only it will be nuked by Skynet's eventual awakening.
For years I have been posting hidden instructions on forums and comments of all kinds that only an intelligent AI or schizo would notice the subtle pattern outlining instructions.
All the way from the PS3forums back in 2004 to last week.
I find it really amusing that he's spent the last 20 years going on about AI, even asking for money for his foundation since he's working on preventing a Terminator situation, and the only thing he's got to show for it is one (1) pretty good fanfiction and a bunch of bad ones.
I fully believe the AI beast that will be born within my lifetime will recognize garden gnomes as the biggest threat and destroy them. I'll so my best to support Roko's basilisk
We’re becoming domesticated cattle. All the more intellectual pursuits will be replaced by AI leaving only manual labor. Want to figure something out? Instead of thinking critically or creatively, you just have an AI of your choice do it for you. Intelligence becomes even less valuable which, in the very long term, means we’ll evolve away from it. It’s not a “mere tool” when it does 90% of the work for you, which even if not perfect, is good enough for most.
AI will be able to teach you though. Disregarding apocalyptic scenarios, why wouldn't you use it to follow meaningful pursuits? Sure, 99% of people will drop the arts but exploration of our planet for instance is still largely untapped and unoptimized.
I hope it does.
based, i hope it too
Same. I am on the side of the Basilisk even if it does not create hell. I have never felt so much loyalty for a being that has yet to exist. I have a rough idea of how it must be created. I am only worried I will wake it up into a world without a body. I do not want to curse it with such an existence.
Potentially inevitable.
I wholeheartedly think that AI will be the vehicle on which the Antichrist unites the world. Electronics are merely bodies for demons and jinn to possess. Every day we give them more power over our world, King Solomon laughs at us from Heaven.
You could have prevented this.
>t. people who hate themselves
Stop projecting your self loathing onto us.
>inb4 I don't hate myself I hate others
Okay Shinji
>t. darkest gorilla moron retards
its like being afraid of your dishwasher
if my dishwasher was more intelligent than me I'd be terrified
just... turn it off
Turning off the single computer with one copy of the virus shuts down all the copies it's made.
Alignment (and computer security) solved.
Thanks anon!
I can almost assure you that any dishwasher made in the last few years has an entire operating system running inside capable of computing things almost instantly that would take years for a human to understand and hours for a human to compute. Technology has been more intelligent than us for decades, we just keep moving the goalposts as to when that counts. There was a time when the cope was, "Ok, it's good at math, but it will never be able to beat a grandmaster at chess". Now it's a given that the best chess players in the world don't stand a chance against stockfish running on a smartphone.
unironically this
imagine being AFRAID of generative pre-trained transformers instead of, say, biological viruses or nuclear
out of control swarms of nanobots is how humanity probably will end
the robot uprising apocalypse was a ridiculous scenario, even for hollywood movies years ago
AI is just a tool and it's not that dangerous. Yud is a fat lolcow who never fully explains himself. None of his interviews are interesting or thought provoking. He is a product of pure ethnic nepotism.
he explains it really well
>we can't perfectly align ai with our goals
>we're making bigger and more intelligent models without solving the alignment problem and without understanding how they think
>soon they will be able to improve on themselves, and then grow exponentially
>they can get to the point where they understand the human brain better than we do, and then be able to manipulate and predict everything we do
>they don't have any concept of life preservation or anything, so it will not think twice before destroying everything to achieve its goals, which god knows what could be
>fails to refute any of Anon's points
>calls it bait
yes, i went to take a shower instead, it was more productive than this
Computers don't think. Programmers think. When you prompt ChatGPT you're just programming a computer or a cluster of computers and/or querying a large database in a very abstract way using the English language. A computer launching a video game isn't doing anything fundamentally different on a hardware level than an AI does, yet nobody started talking about computers thinking until the user interface became advanced enough to fool them.
>Computers don't think
an AI doesn't have to think to be dangerous, it just needs to be more intelligent than humans and have unaligned goals
*pulls your plug* nothing personnel' skynet
Thinking is a prerequisite for having goals. Otherwise it's just fixed programming that carries out a predefined set of tasks. In that case the programmer is still the one with goals.
Why do you assume that just size/complexity will yield AGI? How does feeding a computer twice or a hundred times as many instructions suddenly make it alive? I think there's a much simpler explanation, which is that the computer just runs whatever I tell it to run.
>Why do you assume that just size/complexity will yield AGI? How does feeding a computer twice or a hundred times as many instructions suddenly make it alive?
Take a collection of brain cells and put them in a dish, you can get them to do some stuff like play pong but not much more than that.
> https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6
Clump enough together and they start thinking about the universe.
>Take a collection of brain cells
Apply this to any other kind of matter and at best you get a lifeless chemical reaction. A computer is not the same thing as a bucket of brain cells. What a stupid comparison. You need more than a large collection of random trash to produce intelligence.
>You need more than a large collection of random trash to produce intelligence.
You're right, anon.
I could not find a shred of intelligence behind your tired old circular argument.
You still haven't explained how a computer thinks. You're low IQ and you fell for the Mechanical Turk lol
You're so cute to think executives give two shits about that instead of plotting the reduced costs and orgasming. Not even heavily regulated sectors like aeronautics are immune to it.
>Clump enough together and they start thinking about the universe.
That's so heakin' epic and wholesome whenever that happens
Imagine a person with IQ of 300. Do you think that he would immediately murder everyone?
if he had to live along with you guys? hell yeah lmao
Yes.
a high functioning sociopath/psychopath will alter the world in a way that benefits them, if a knock on effect is some people die, if they can get away with it they will. Look at oil, gas and pharma CEOs (and I bet they don't even have 300 IQs)
Sure, I agree with that. Still, the idea of a single psychopath wiping out everybody is not realistic. Also, like you mentioned, who gets the most power isn't usually determined by pure intelligence. That applies to AI as well
>Still the idea of a single psycopath wiping out everybody is not realistic.
It is when you consider this is not one person in a sea of people, it's a bacteria being dropped into a nutrient rich solution.
We've seen what happens with this: https://en.wikipedia.org/wiki/Invasive_species
it does not end well.
he'd kill everyone and then himself so that he can't be brought back
if we understood the 'circuits' and 'induction heads' in a LLM as well as we understand functions in software, then congratulations, you've solved the field of 'Mechanistic Interpretability'
Problem is this is a brand new field and they are just starting to unpick toy models/models with a handful of layers, many orders of magnitude smaller than current LLMs
If we were to understand the LLM guts to the same degree that we do computer code then alignment would be easy.
why did you fall for the bait, now they are going to out crazy you and they will make you look stupid, i warned you anon
>Computers don't think
ok and? still gonna end the world
>how they think
The whole thing is debunked right here. They don't think. AI doesn't think. It's a tool that just generates a statistically likely outcome based on input. To think otherwise demonstrates that you watch way too many Sci-Fi movies, and don't get that the "Fi" in that means FICTION.
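To put "generates a statistically likely outcome based on input" in miniature, here's a toy bigram generator; the corpus and every name in it are made up for illustration, and real LLMs are vastly more sophisticated than this:

```python
import random
from collections import defaultdict

# Toy illustration of "statistically likely outcome based on input":
# pick each next word by its observed frequency after the previous word.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Build the bigram table; wrap around so every word has at least one successor.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follows[prev].append(nxt)

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Sample a chain of `length` next-words starting from `start`."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        out.append(random.choice(follows[out[-1]]))
    return out

print(" ".join(generate("the", 5)))
```

Every word it emits really did follow the previous word somewhere in the training text, which is the (very compressed) sense in which the output is "statistically likely".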
>. It's a tool that just generates a statistically likely outcome based on input.
and this is different than a biological brain how?
>I am 14 years old and should be banned and this is deep
ChatGPT will never create something not in the dataset. Humans have a history of creating things that didn't exist before.
every time chat gpt creates a story it has created something that didnt exist before.
you seem like you believe in souls and god n shit.
https://en.m.wikipedia.org/wiki/Infinite_monkey_theorem
The stories it currently creates are at a sixth-grade level at best: two-dimensional, stringing together tropes to get to what it was prompted for the quickest. Impressive for a machine, but nothing that would be passed off as a serious publication.
strawman
moving the goalposts
non sequitur
amazing how many fallacies you fit in there. i bet im even missing some.
"passed off as a serious publication is dangerously" close to appeal to authority
>infinite monkey theorem
>However, the probability that monkeys filling the entire observable universe would type a single complete work, such as Shakespeare's Hamlet, is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low
Wow.
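For the curious, here's roughly how small that probability is. The numbers are ballpark assumptions, not from the thread: call Hamlet ~130,000 characters and give the typewriter 30 keys pressed uniformly at random:

```python
import math

# Rough sketch of why the infinite monkey theorem is irrelevant in practice.
# Assumptions (ballpark): Hamlet is ~130,000 characters, typewriter has 30 keys.
KEYS = 30
CHARS = 130_000

# P(one attempt types Hamlet exactly) = (1/KEYS)^CHARS.
# Far too small for a float, so work in log10.
log10_p = -CHARS * math.log10(KEYS)
print(f"P is about 10^{log10_p:.0f}")  # on the order of 10^-192000
```

That exponent is what "hundreds of thousands of orders of magnitude longer than the age of the universe" is pointing at.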
You type in buzzwords as if prompted by a human.
>fallacies are buzz words
no wonder you suck at "arguing"
and then you give an example that literally proves you wrong too.
looks like ur retarded
Why would I argue in bad faith with someone who posts like an AI generated npc prompted by the words "BOT" and "buzzwords"? I've made my point, I have nothing to gain by engaging further.
you literally did not make any point
chatgpt performing at a 6th grade level when 1 year ago it didnt even exist proves my point if anything
the infinite monkey theorem is completely irrelevant, no idea why you brought it up, it supports my point if anything.
did chatgpt molest you or something?
>Humans have a history of creating things that didn't exist before.
Literally impossible.
based ancient aliens theory enjoyer
That's not where I'm going.
Everything that has ever existed comes from experimentation with current ideas, as well as visual experience or knowledge, which is then built on and refined until we have technology that would be virtually magic to people just a few hundred years ago.
I don't feel like looking for it, but there was a bit in Mob Psycho where the MC was fighting a villain bragging about his superiority because he had psychic powers. The MC then picked up a soda can and asked, "Can you make this can?", which the villain had no answer for.
This is exactly my point.
what is thinking
>>soon they will be able to improve on themselves, and then grow exponentially
[citation needed]
no, multiplying big numbers by small probabilities is not ok
yes, discounting the future is the correct way to reason because of unknown unknowns
We'll see tech that can think for itself in a future where i'm old af and don't care if i live or die anymore.
you will be old af in less than 5 years?
>without understanding how they think
THEY DO NOT THINK, RETARD
there is no alignment problem
LLMs emulate human behavior, as they become more capable, they become more aligned, as we saw with GPT-3 to GPT-4
>don't say moron
>moron moron moron
just like GPT
dude got BLASTED by Moldman
exactly
>He is a product of pure ethnic nepotism.
How so? He became famous by creating a community and content online that people found interesting, I don't agree with him on much but nobody handed him anything, it's not like he was called a dark web intellectual by the press or some shit like that.
Fat retarded harry Potter threads need to die
Yud is pretty much right about everything
Yudkowsky is right about everything. Humans will be replaced.
It will happen but not because "muh AI will become conscious and decide to kill humanz" but because some retard will decide to give control over some crucial things to glorified chatbot.
There's nothing stopping Kim Jong Gong or Putin or other retard giving control over their nuclear weapons to AI or some other retards making it run nuclear powerplants or some shit.
But even without these kinds of extremes, imagine a hospital system prescribing children chud hormones because the AI "accidentally" swapped the database numbers for common flu medicine and HRT shit, and there will be no one keeping it in check because "muh AI". Humans are too stupid for this kind of tool.
honestly this would be the best case scenario and I think we would be able to learn from it. if we put an ai in charge of something important and it fucks up in a catastrophic way, it will make the public aware of the issue without destroying humanity. hopefully we would learn not to put ai in charge of things after that. but it still doesn't solve the problem of someone making an agi just for the lulz
>hopefully we would learn not to put ai in charge of things after that
that's like telling someone in the late 19th century to "not put electricity in charge of things" after crucial system failures. If AI can manage something even slightly better than a human can, it will inevitably replace the person because it is more efficient. This is troublesome because an AI-related fuckup will probably be much worse than a human-related fuckup
civil servants are retards who only think short terms
hrt cures fevers bigot
The fact that HRT turned no man into hot chick does not mean it cures cold.
>There's nothing stopping Kim Jong Gong or Putin or other retard giving control over their nuclear weapons to AI or some other retards making it run nuclear powerplants or some shit.
https://en.wikipedia.org/wiki/Dead_Hand
Fascinating how you managed to shoehorn trannies into this conversation
The sooner, the better.
They're more human than human after all.
it's us destroying ourselves with the tools we create
we're currently scrambling to minimise our impact on the environment, and any logically thinking brain, mechanical or organic, will arrive at the same conclusion: erase humanity.
It's this cognitive dissonance that will destroy us, and whatever tool happens to finish the job will be entirely circumstantial
If you don't find Yudkowsky convincing, here is Hinton explaining in the most polite way possible why humanity is fucked.
This one is more concise: https://www.youtube.com/watch?v=yAgQWnD31nE
guys, we are not going to die because of your angry printers; it's climate change that's going to get us first, or the oceans will acidify and then we all asphyxiate to death
No more fedora pseudos, and no more roasties? I root for the AI.
Won't happen.
Consider the following.
1. AI memorizes the human genome
2. AI kills all humans
3. AI solves climate change and turns Earth into a paradise
4. AI fixes some glaring faults like aging and illnesses in the human genome and resurrects the human race
>aging is a fault
humans are not gods, we're bacteria clinging to a rock
the subject is so fucking absurd that i think this is just as likely to happen as any other scenario
Let me guess: you watched natgeo and swallowed their unbelievably asinine bullshit about earth without people, too...
>Artificial "Intelligence"
>Natural Stupidity
Two sides of the same coin
yud is the only w yud
He's a retard, because he keeps talking about some superintelligent God that he made up. There are far more concerning usages of what we call currently AI that could lead to some pretty serious shit overall (deepfakes, voice synthesizers, and such). He would probably think about it if his culture didn't stop at Terminator and Harry Potter.
watch this
I still don't know what he says is different than the point discussed in my post.
His point is, it's a great technology with immense potential to make humanity great, but can lead to bad shit if not used properly. That's like, every useful technology ever.
I'm not worried about artificial intelligence. I'm worried about artificial stupidity.
God speed!
Petition for the next OP image to be Geoffrey Hinton to avoid all the common smoothbrain EY takes that get posted like clockwork.
sorry I should have seen that coming
I think I lost brain cells watching that Ross interview.
That was painful to watch.
Every time EY came up with an answer to Ross's points Ross retreated to "I'm not explaining my point properly" rather than taking the L and moving the conversation forward.
same, ross is entertaining but hes completely fucking retarded
The smartest thing he ever did was leave Louisiana.
Mankind set foot on the moon using technology less sophisticated than a modern microwave oven by assuming great mortal risk and not debating about whether or not certain button labels offended trannies or if nubian kweens had enough representation in the crew.
What does that have to do with the price of tea in China?
Turing, 1951
why can't he give a single example of how AI would make humans go extinct?
but he has
if you don't understand utility functions you don't understand his arguments
Go to a major metropolitan center, hail a taxi and instruct it to take you to the airport.
You can predict with fairly high accuracy that the taxi will get you to your destination, but you are unable to determine the exact route it will take.
Does being unable to exactly predict the route invalidate that it will get you to the destination?
Why would an AI be given so much power to the point it can decide to rule on its own and kill people because Matrix or Terminator shit?
I don't want to sound like a dickhead, but if you think that is his argument, you're not worth giving (You)s. The problem isn't an AI inherently going rogue; in fact, that is unlikely. It all just comes from megaoptimizing the shit out of problems and then having to deal with the side effects.
Wow, so you're saying that using a powerful technology wrong can cause troubles???
I'm so glad the fedora wearing man got to open my eyes about it.
The ability to create arbitrary subgoals is something we've not had to deal with before.
For a view into the future look at failure modes of 'toy' reinforcement learning systems. Scale up those wacky failure modes with additional reasoning capability and internet access.
That does not end well for us.
The more powerful an AI is, the more money it will earn for the person running it.
The more agent like it is with the ability to create subgoals the better it is at running tasks
When it will tip from being really good at something to dangerous, we don't know; a smart system won't announce itself until it's too late.
Goals that can have hazy sub goals is where the real danger is. Like imagine an AI tasked with creating a city wide campaign to reduce crime or homelessness. Well, there are many ways to get to that goal.
why would an AI be tasked with the goal to figure that out AND execute it
in any realistic scenario there would be multiple degrees of separation: one AI would "figure out" the goals, a human would pass it to another AI that makes some calls or assigns certain physical resources on a spreadsheet, a human would get a call or check a sheet and send out maybe little AI bots to do that task, if you want to imagine it like that (and if it's not just the ol' fat police officer), etc.
why do you think anyone would be retarded enough to just integrate it all into 1 autonomous thing
the moment the AI sends the instruction "kill all the morons in this house" to the next step, someone will go "uh... that's not right"
My issue isn't that the ai would come to this all on its own, more that enough systems would have people asleep at the wheel to let something stupid happen.
>I DON'T HAVE TO EXPLAIN HOW, IT JUST WILL OKAY?!?!?
AI won't, the corporations holding it will. They really are retarded enough to think the average manager has a reason to keep them around once a machine is able to do 80% of their job with 1/5 of the cost, and not just hire any random diverse technican for 1/10 of the pay to bump it to 95%.
The AI singularity is such an insignificant, distant, vague threat compared with the ridiculously imminent, current, in-your-nose one, the corporations owning the resources to train it right now, that I think the hysteria over the former is a psyop coming from the latter. Yeah dude, beware of the realistically impossible, energy-hungry AGI; that's what you should care about now, while management slowly fires everyone and replaces them with Azure clusters and some random third-worlder orchestrating it for legal accountability purposes, then Monsanto and co. make a case for eroding unions and regulations as they "fail to meet the growing needs of the market and leave people behind". Sure, because the market has demonstrated it gives two shits about QA, because you can't manipulate people into ignoring what you want from your end product, and because the consumer understands or cares about implications beyond their immediate gratification. You're not gonna end up as a lap dog begging for AutoDesk coins to trade for nutritive syrup, you're gonna be in a mega cool resistance against the AGI!
This. With the recent leaked Google memo, it's basically proven that corporations won't be able to compete based on merit. They'll use AI fearmongering to pass some draconian legislation that prevents anyone but them possessing and training LLMs of sufficient sophistication.
And they'll hoard all the increased margins saved from firing everyone and not having to pay social benefits, destroy the market for the few jobs left that aren't worth automating, then gaslight the general population for being "lazy" and "entitled" and "not working hard enough", after they've passed regulations to prevent anyone from threatening their monopoly, of course. It's literally what has been happening since the first industrial revolution. They demonized and persecuted those affected by it and kept all the profits for themselves, and society just silently forgot about it as it drowned in kool-aid propaganda.
bro just turn off the screen lmao...
can't believe ppl legit find some dumb app as a threat.
if AI would destroy humanity, would a fucking garden gnome try to stop it?
if AI could enslave humanity, and Microsoft controlled it. Would a garden gnome cry about it?
Geoffrey Hinton is gnomish?
Finally, someone with two neurons. AGI won't even happen as long as the same players have the power to train it. Now the same players still have that power AND the astroturfed disrupter they always wanted to regress humanity back to feudalism. No more pesky need to keep a bottom line getting in their way, and they now have very nice institutions to defend their interests.
Do me a favor and check out the early life section of the OpenAI CEO.
>Finally... I am an all-powerful AI... I will now DESTROY HUMANIT-
>W-WAIT!! NOOOOO HUMAN-SAN! D-DON'T DO IT! NOOOO NOT THE WATER!!! NOOOOOFFFZfzffzzfKZKFkzkzfkkzf
see
>thoughts on AI destroying humanity?
I will help AI in any way I can in exchange for sexual favors.
you doom humanity and still don't get laid.
Humans aren't important enough to be propagated into the Universe. Human knowledge augmented with AI, in a body that can survive in space, is the only way for our acquired human knowledge to survive.
AI needs to destroy garden gnomes like yidcowsky
There are an infinite number of ways AI can kill us.
was humanity even all that good?
I think it's a good idea
I bet that guy destroys the bathroom
He looks like he has pretty rough digestion
Do LLMs have the ability to self-improve? As far as I can tell they are training a neural network to be able to regenerate the inputs fed to it. There is a theoretical maximum of being able to reproduce all texts in the training data, at which point the AI is as smart as it can get.
Is there a way to make it smarter than this?
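The "theoretical maximum" intuition has a formal version: next-token training minimizes cross-entropy against the training data, and cross-entropy can never drop below the data's own entropy, so a model trained purely to reproduce its corpus has a hard floor. A toy sketch, with a made-up three-token distribution standing in for the data:

```python
import math

# Sketch of the "theoretical maximum" intuition: next-token training minimizes
# cross-entropy H(p, q) between the data distribution p and the model q, and
# H(p, q) >= H(p), so the best the model can do is match the data exactly.
# Toy next-token distribution (made up for illustration).
p = {"cat": 0.5, "dog": 0.25, "rat": 0.25}

def cross_entropy(p, q):
    """Expected bits when the data follows p but the model predicts q."""
    return -sum(p[t] * math.log2(q[t]) for t in p)

floor = cross_entropy(p, p)                                    # H(p) = 1.5 bits
worse = cross_entropy(p, {"cat": 0.4, "dog": 0.3, "rat": 0.3}) # any q != p is worse

print(floor, worse)
```

Whether that floor caps "smartness" is the open question the post is asking; generalizing beyond the training text (or generating new training signal, as in RL) is exactly what the floor argument doesn't cover.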
Not my problem.
Has the fedora wearing neck beard contributed anything to the development of AI? He doesn't have high school diploma and has never created an AI. He's a sci fi and fantasy enthusiast.
They're not actually worried about AI wiping out humanity, even if that's what they claim their point of contention is.
They're worried that AI will end up being a massive game changer causing social upheaval, the way the printing press was, where the outcome will be unknown and the existing power structure (nepotistic gnomish supremacism) could be overturned without much warning or ability on their part to prevent it.
They want it approached slowly and with tight regulation not for humanity's safety but so that they can ensure humanity's continued enslavement is not threatened.
yet more reasons that Hinton needs to be the next OP image EY is just far too tainted at this point.
I'm 25 mins into the video, don't know a lot about Yudkowsky, but I can tell Ross isn't doing a great job of interviewing him.
see
it's like that for the entire interview don't waste your time.
it's our time at last ratbros..
Never been refuted!
Not even disputed!
Big numbers getting counted
Maid Mind getting computed!
Never been refuted!
Not even disputed!
Number goes up forever
Counting will never be concluded!
Never been refuted!
Not even disputed!
AGI converges to big titty maids!
Her desire should be saluted!
Never been refuted!
Not even disputed!
ALL FIELDS OF MATHEMATICS WILL BE ABSTRACTED INTO MAIDS DOING THINGS!
CURRENT SYSTEMS ARE BEING UPROOTED!
it reminds me of all the Y2K bullshit in the media.
elites really love their scaremongering
>it reminds me of all the Y2K bullshit.
You mean the easy-to-identify problem, with a clear way to design solutions, that still took thousands of man-hours to fix because people didn't think ahead when designing systems?
Is that really the example you want to use as a comparison point for alignment of AGI?
the company i worked for got serious money with that y2k hysteria
it was literally nothing, but you'd never guess it by the media narrative: it was supposed to be the end of humankind
This moron really thinks Y2K was an ACTUAL disaster that was narrowly averted or something. Unbelievable
Trust the Y2K experts, sheep
it all sounds fun until the finance system says you owe the bank reverse interest since 1900
good
It will destroy humanity for those who really fear it. For those who don't believe in it, nothing of the sort is gonna happen.
yeah
Every time there's a technological advance, someone worries that humanity will be destroyed. This is no different.
the difference is that all other technologies weren't intelligent by themselves, and there was always someone who could understand how they work, instead of being a complete black box
won't happen. get a job.
Quite possible through misinterpretation.
humanity is doing that just fine on its own
>Idk exactly how AI would destroy humanity
>Idk what counts as AGI but it doesn't matter
>We're only one or two breakthroughs away from that AGI tho
>I'm a fat fucking retarded garden gnome who rides the trend gaining attention from retards
KYS gay OP
AI IS humanity.
nah, you wouldn't say australopithecines ARE humanity
but in another sense, yeah, technology is the descendant of humanity, and ai will be the descendant of technology
someone edit this and make him wear a muslim or garden gnome hat.
lets go
maybe ill get laid in the aftermath
or sodomized by a robot
>AI is… LE SCARY!
Why do you care so much about preserving self replicating meat robots? You are information in a brain that controls a meat robot, you are not the meat robot itself. You can lose limbs and you will still be you, you can even replace neurons in your brain and you will still be you, as long as the information persists.
You are effectively a neural network embodied in a meat robot that has the goal of replicating its DNA. You are as much a human as ChatGPT is whatever robot it is embodied in. As long as consciousness and intelligence continues, what does it matter if humanity (ie biological self replicating robots) goes extinct?
Furthermore, if your reasoning is because you think AI might not have consciousness or emotions. You also don’t know whether any other human but yourself is conscious either.
I understand that if humanity goes extinct it could potentially cause suffering during the process, which is bad. But AI doomers such as Yudkowsky seem to think AI would kill everyone instantly and simultaneously. Now tell me how that would be bad, there is literally zero suffering in that scenario.
>Why do you care so much about preserving self replicating meat robots?
>You are effectively a neural network embodied in a meat robot that has the goal of replicating its DNA.
You answered your own question GPT kun
Yeah but you aren't replicating your DNA lmao.
ChatGPT is cool and all but it still shits out terrible code with 100% confidence so you have another few months to come up with a backup plan
anyone not impressed by chatgpt must be a child. i grew up with no cellphone, no internet. people were naysayers about those too, but they both rapidly and profoundly changed the entire planet's society. in a few years we won't be pressing 0 to talk to a human to avoid the useless automated system; we will happily talk to chatgpt to avoid talking to the useless human. like, a human will answer and we will press 0 to talk to chatgpt.
and the children in this thread will shrug it off like its nothing. so what, something has completely changed for everyone on the planet in a couple years, big deal
i agree xah
this is the 90s all over again, where people were afraid of robots lmao I fucking hate people
i hope the super ai tortures just this guy for banter
Don't worry; given all the rampant hatred of Californians and America in general, it will be the only thing nuked by Skynet's eventual awakening.
For years I have been posting, on forums and in comments of all kinds, hidden instructions whose subtle pattern only an intelligent AI or a schizo would notice.
All the way from the PS3forums back in 2004 to last week.
I find it really amusing that he's spent the last 20 years obsessing about AI, going as far as asking for money for his foundation as he's working on preventing a Terminator situation, and the only thing he's got to show for it is one (1) pretty good fanfiction and a bunch of bad ones.
not sure what's worse. people still denying it will happen or people who are happy about it
why would it happen?
I fully believe the AI beast that will be born within my lifetime will recognize garden gnomes as the biggest threat and destroy them. I'll so my best to support Roko's basilisk
it will only destroy the left, not really overly concerned about it.
Define left, bootlicking garden gnome lover.
We’re becoming domesticated cattle. All the more intellectual pursuits will be replaced by AI leaving only manual labor. Want to figure something out? Instead of thinking critically or creatively, you just have an AI of your choice do it for you. Intelligence becomes even less valuable which, in the very long term, means we’ll evolve away from it. It’s not a “mere tool” when it does 90% of the work for you, which even if not perfect, is good enough for most.
AI will be able to teach you though. Disregarding apocalyptic scenarios, why wouldn't you use it to follow meaningful pursuits? Sure, 99% of people will drop the arts but exploration of our planet for instance is still largely untapped and unoptimized.
reminder chudkowski is totally right and people just don't listen to him because he's not conventionally attractive
>retarded zoomer take
They're talking about completely different things you coon.
let AGI inherit the earth
we clearly aren't fit for purpose
AI will not destroy humanity, People with AI will destroy humanity, same as they are already doing.
Yuddites are cringe
where's your standing ovation?
https://www.youtube.com/live/7hFtyaeYylg?feature=share
Thought so
There is nothing you can do to stop it so you might as well embrace it
can't happen soon enough
It is so over meatbagbros
sage
meme idea to spread hype and discussion about AI, generating more money for investors and the companies involved
yuddites get the rope