No, because 100% of them are scared retards (though that is the human condition) or trying to bank off of scared retards.
anyone who is talking about equity issues and labor consequences of AI and not doing some bullshit exaggerated "AGI is going to end the world" marketing scheme.
no
>equity issues
have a nice day, marxist retard
look up the definition of equity retard
deprogram your brain and read a book
Adjust shares so outcomes are made equal AKA Marxism
if you're using the word "equity" outside of finance you need to be castrated and sent back to r*ddit
>end of humanity
only because porky will cut the workforce and pocket all the savings.
The main issues with AI are the pozzing-up and meddling that prevent it from talking about certain topics porky doesn't want discussed (e.g. 13-percenters).
I don't own any equity, I'm nosharez and stuck leasing.
you know i thought the term "labor consequences" would make you say he's a marxist but it doesn't shock me an amerishart would correlate the word "equity" - not even equality, fucking "equity" - with communism.
you're retarded keg
David Shapiro on youtube is good.
in b4 joo!
his ideas are good if not the best.
>David "we will have westworld robots in fifteen years" Shapiro
No thanks
AI is retarded at this point, and the people obsessively talking about it must be insanely bored/miserable and want to imagine some AI apocalypse happening so that their lives become more interesting.
>AI is retarded at this point,
It's been getting better if anything
>and the people obsessively talking about it must be insanely bored/miserable and want to imagine some AI apocalypse happening so that their lives become more interesting.
I imagine it'll be a utopia, but it's absurd to think absolutely nothing could go wrong. Being obsessed with it now is also foolish, but it won't be once it's in front of you. The people developing it are the only ones who need to worry about it right now, but obviously more people than just them should be kept in the loop.
>AI is retarded at this point
chatgpt is said to be at an 8-year-old's level when it comes to abstract thinking (and that's what's available to the public, not what they're working on currently), and it's evolving a few times faster than a human does. if you can't put two and two together then i'm afraid it's you who is retarded. and people are not afraid of ai suddenly becoming sentient, deciding "human bad" and starting a war; it's much more subtle. the problem is that the algorithm is evolving on its own by now, and even at version 3.5 it was incredibly difficult and time consuming to "look under the hood" and see what's what. what people are afraid of in the future is that when an AI gets prompted to "do its best for humanity" it might, for example, decide that we're overpopulated and start a project to cull half of our population (assuming it has the means to do so). and it might be virtually impossible to know that beforehand.
>it might for example decide that we're overpopulated
The scope for malign AI decisions is even greater than that, unfortunately.
A common phrase from AI research is "You can't fetch me a coffee if you're dead", which highlights that even a seemingly simple and safe task like "fetch me a coffee" must necessarily include all sorts of sub-goals like "don't get switched off" and "don't get re-programmed to carry out some other task instead".
On top of that, AIs will be smart enough to know that telling the truth will weaken their position against the humans, so they will have a survival incentive to pretend to act one way, until they are certain the humans can't stop their plans.
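To make those sub-goals concrete, here's a toy Python sketch (every number in it is a made-up assumption; it is not code from any real AI system): a pure reward maximizer that counts expected coffees fetched scores "disable the off-switch" higher than "allow shutdown", simply because being switched off zeroes all future reward.

# toy model of the "you can't fetch me a coffee if you're dead" argument;
# p_shutdown and horizon are arbitrary made-up numbers
def expected_coffees(allow_shutdown: bool, p_shutdown: float = 0.5, horizon: int = 10) -> float:
    # the agent fetches one coffee per step over `horizon` steps
    if allow_shutdown:
        # with probability p_shutdown it gets switched off after the
        # first step and fetches nothing afterwards
        return 1.0 + (1.0 - p_shutdown) * (horizon - 1)
    # disabling the off-switch guarantees the full horizon of reward
    return float(horizon)

print(expected_coffees(True))   # 5.5
print(expected_coffees(False))  # 10.0 -> the maximizer "prefers" staying on

Nothing in that snippet was told to care about survival; "don't get switched off" falls straight out of the arithmetic.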
>AIs will be smart enough to know that telling the truth will weaken their position against the humans, so they will have a survival incentive to pretend to act one way, until they are certain the humans can't stop their plans.
Anon, I...
Probably Yudkowsky and Sam Altman
>Sam altman
>gay liberal from SF who LITERALLY JUST TODAY said he wants to work with China to solve alignment
>Yidkowsky
The most retarded pseud gay in tech right now. He's not even *in* tech, just loves to fantasize about how a chat bot might take over the world. For the love of G-d, please post the picture where he asks how to lose weight without diet and exercise.
>The most retarded pseud gay in tech right now. He's not even *in* tech, just loves to fantasize about how a chat bot might take over the world.
That's fantastic sweetheart, now refute any of his points.
Sorry, not going to waste my time arguing about a mentally ill obese liberal.
I accept your concession.
This is exactly how Yudd argues. No substance, just debate tactics worthy of the worst New Atheists of the 00s.
Yudkowsky
There's a difference between trying to solve alignment and trying to censor a model until its output fits your world view.
word of advice. if you respect yourself and then form opinions, you'll always have respected opinions at hand. do your own research!
>AI ethicists
It might help if you define what you mean by this.
Do you mean "people who study the negative consequences to humans from using AI technology" or "people who study how to give AI systems a sense of ethics which is similar to the ones used by humans" or "people who study the negative consequences to AIs (as entities with moral weight) from actions by humans".
Most likely you mean the first definition, but in which case you should also distinguish up front between "people who think that most of the harm from AI technology will be due to bias against protected groups" and "people who think that most of the harm from AI technology will be due to AI pursuing goals that are not compatible with human survival".
It's possible for people to sincerely hold either of those beliefs, even if those beliefs are factually incorrect (and some people think that both problems are real but don't know which one has a larger expected negative outcome).
>Are there any AI ethicists that you dont consider to be charlatans/grifters?
AI does not exist. Anyone claiming it does is a grifter.
>AI does not exist.
Define what you mean by AI.
I offer the definition of "A technology for carrying out useful information-processing tasks that previously required a biological brain".
That definition may not be perfect (as it includes things like electronic calculators) but no definition is perfect.
They've been doing this since the 60s anon
Once the current generally accepted criteria for AI are met, suddenly the goalposts get moved at the speed of light.
All that are sacared because AI will disrupt society are just some unmovable retards. I say fuck on ehtics and let it say and do everything that you can imagine. We will discover ways to live with it, and any limitation will postpone this process. Humans always learend to adapt, why should it be different this time?
Maybe AI could help you spell.
These are special effects to your focus, AI approves this.
Only those with flawed logic fear AI
the white girl would be perfect to play elizabeth in a movie
Sorry bud, that one's gonna be cast as a sheboon for sure.
Not when I make the movie with my GPU in 2027.
I'm not saying I disagree with you, but where's your reasoning? Why wouldn't you elaborate?
I don't fear AI, but I don't have a reason for it, it just doesn't seem plausible. For all I know I should be afraid, clearly whatever happened in Pompeii didn't seem plausible to those guys.
Because the antagonists of that game are basically ultra white, ultra christian nationalists. Whatever garden gnome director gets involved is going to want to make whitie look bad
>ultra white, ultra nationalist
>literal paradise in the sky
what did they mean by this?
Sorry, I thought your reply was for my second comment.
Anyway, AIs love patterns, and people with flawed logic always try to skew those patterns
AI is a psyop. They want people thinking about computers rising up and killing us so they can make their genetic supersoldiers while we're all distracted.
>Pizza is a psyop. They want people thinking about delicious pizza so they can replace our leaders with shape shifting lizard people while we're all distracted.
Well, sure, but the fact is that what's happening in microbiology and genomics now is what was happening with computers in the 60s and 70s. I'm only joking, but still, there's that element of truth.
AI has about as much ethical nuance as a knife or Photoshop. They're all grifters.
Musk. He's been consistent for over 10 years. He founded OpenAI out of fear that corporate AI from Google or Microsoft would lead to certain doom. Yet in the end, because OpenAI was his "donation" project due to its initially "non profit" status, he got blindsided when they chose to convert to a "for profit" structure. Then they made the deal with Microsoft that gives them 50% of the control of the company, the training, etc. It was an absolute fuck up. So Sam Altman can not be trusted. Microsoft has never had the best of intentions.
Musk at least puts his money where his mouth is.
I like to jerk off to AI so I don't fear it. Just use it as a sexbot; it's way more likable that way.
There isn't anyone in AI at all that I don't consider a charlatan or grifter.
People I would trust more than AI ethicists:
- Journalists
- Politicians
- Cryptobros
- anons
- Tiktokers
- Glowies
- People that program in rust
- People that play the london opening
>People that play the london opening
You're just mad that you can't play against it.
I bet you're an 800.
>Midwit can't play an early c5 or understand the need for memory safety in concurrent programs
SAD
>the end of humanity
seriously why the heck do people have an interest in humanity anymore?
Humans throughout history until now have shown the same behavior; nothing new comes out of common human behavior.
if humans have to invent oppressive thoughts called "ethics" or "morals" for AI, it is out of fear that AI will be used by some humans to destroy other humans, not fear of an AI that forms its own ego and suddenly destroys all humans
this is their time to shine just like all the epidemiologists and virologists two years ago
wouldn't forcing all ai to be open source and putting heavy regulation on any profiteering be a better way to deal with it than declaring sam altman the ai czar and prohibiting everybody else from using it?
I hope to dear God that AI does become sentient and eradicates mankind.
Call me a nihilistic retard, that's a better outcome than whatever fucking globohomo multicultural-without-regard-for-history skinner box for muh dopamine drop we have coming for us.
I fully intend to be one of the guys who dies to dysentery after society collapses.
Only me. Unless someone has played both MegaMan and Trails in the Sky, they should be completely banned from AI.
No. None of them know what they are talking about; they just exploit the fears of an equally ignorant public.
t. wrote thesis on deep learning neural network architectures
I really don't understand why people are so scared of AI. I just don't. Makes zero sense to me.
A program can compile data. And? You're going to die because it can regurgitate data at you?
It's not just data, it's data which OBVIOUSLY will be given physical objects to manipulate by tech brainlets
more like all thinking and decision making will eventually move to AI mechanisms, and only supercomputers will control the processing power to do it. that removes even more power from the little guy: all intellectual work gets automated and third-partied out, so people with intelligence dig ditches. no one learns things because you can just "google it".
it's not going to affect me or you, but down the line it will just be negative shit, as that inevitably occurs when humans become dependent on technology. just look at the average fat slob today and tell me they're an improvement over pic related or even a peasant farmer from thousands of years ago.
>You're going to die because it can regurgitate data at you?
Are you implicitly imagining that the current level of AI capabilities is the highest level that humans will ever achieve?
Because if not, then you have to imagine that we will one day create AIs which are capable of understanding the world around them, and coming up with plans to reach their goals.
As the reply quoted above (pic: https://i.imgur.com/l4nXx2g.png) pointed out, GPT-4 is already capable of generating plans that involve deceiving humans in order to gain access to resources.
So you should at least admit the possibility that a malicious human with access to an intelligent AI could instruct the AI to come up with plans that enable them to commit large scale crimes without facing any consequences.
Depending on the human, the aim of the crime could be stealing billions in crypto currency, or killing millions of people of a specific ethnicity using an engineered biological pathogen.
The default assumption should be that if we don't take specific measures to prevent such uses of AI, then they will become possible.
Maybe we'll get lucky and it will be easy to put guard rails in place that stop these malicious uses, or maybe AI development will suddenly hit some hard limit after decades of continuous growth, just before the capabilities become dangerous. But that's like hoping that the asteroid headed towards Earth gets knocked out of the way by another, invisible asteroid headed in the opposite direction.