What a gay
>literal chess champion and ceo not whitewashing his company's doomsday tech for PR reasons in a PR interview
cmon lads
I think he really believes that, he's not saying it for PR reasons
Nah he's saying it to make it sound cooler
> Woah our cars are too fast, slow down guise
It needs to be regulated heavily. Any form of technically advanced AI should be outright banned. This means any AI capable of creating artistic works, writing software, music, designing products or being involved in any kind of ethical decision making. And especially any AI capable of designing other AI.
Bait
It's not bait, globally we aren't capable of handling the impact AI can have on the marketplace.
Anon we both know you're not short-sighted enough to think that the dangers of AI start and end at something like the REH.
Something along those lines.
inb4:
>NOOOO!!! We should progress for the sake of progress regardless if millions of people gonna get displaced and socio economic life as we know it destroyed!
>NOOOO!!! We should progress for the sake of progress regardless if millions of people gonna get displaced and socio economic life as we know it destroyed!
yes.
Sounds good to me. Change brings opportunity.
I like how all the gays think he is talking about this
And not about the ability to actually build AI related software in 1 to 2 years. So many fucking election tourists, it's insane.
>calls other people election tourists
>has no idea how threads work
Lacking social awareness you are
Before election tourists, boards that weren't /b/ actually had discussion by people interested in the board's topic. Not just retards trying to push a political agenda 24/7.
Point out the political agenda in the post you referenced.
>It needs to be regulated heavily
By who? By the child fuckers in the government, academia and NGOs?
By me
Okay, I trust you
If we ban AI, this will just mean that other countries will use AI, and their use of it will allow them to profit off of us by selling the products of it. Rather than banning, we should consider accelerating it, and revolutionizing our economy to center around it. We can get rid of the notion that everyone has to work (since labor becomes worthless once AI exists) and instead distribute resources generated through AI in an equitable manner.
Who will get what resources and who will decide it?
What would such a form of distribution look like?
These are questions that will require a lot of democratic feedback. The ideal is that each person receives at least what they need, and that beyond this, we try to find some balance between people's desires and the availability of resources. Things will not be perfect at first, but bear in mind our current system can't even house everyone properly because it prioritizes the desires of those who would like to invest in property over those who would like to live on said property. A future system will likely involve committees of democratically elected representatives debating "what larger projects do we build", and AI to determine "how do we get it to people who need it".
Our society prioritizes the want of the wealthy few to control and cull the slave class so they can continue to rape the world with abandon.
We have the technology and knowledge to live very close to post scarcity right now.
Yes, well, that's going to have to end soon. Because if it doesn't, we're going to have a large underclass of "unemployable" folks who don't own capital and can't trade their labor for food or a roof over their head. And what happens when you have a large population who are starving and reasonably intelligent? They will do whatever it takes to stop starving.
If you arrest them, they will overfill the prisons and drain the system because now you have to pay to store them (and it costs more to hold a prisoner than to pay someone welfare). If you kill them, they will turn violent and kill you. And otherwise they will steal food from the grocery stores until they can no longer afford the theft and just shut down. And then even the rich will no longer have access to food unless it's grown on their own property. The only solution that does not lead to the total shutdown of society is to create a non-capitalist economy which allows those who do not and cannot work... to live for free.
That will only happen in American cities where wealthy and evil people dwell. In the rest of the country you'll find self-contained and maintained towns and smaller cities.
Mostly populated with people who are willing to shoot to kill if met with violence.
I damned sure wouldn't want to be on the coasts or any of our larger cities ever.
How in the fuck will labor mean nothing if ai exists?
Explain that stupid shit to me
Everything that can be done by a human, can and will eventually be done by a robot. The number of jobs that will be replaced will start out small, but eventually subsume everything but a very small minority of jobs, which cannot possibly be done by billions of people. The idea that each and every person in a population of billions can perform some task for other people, that can't be done by any robot, for a wage that pays for rent/a mortgage which increases in cost every year... is absurd. Capitalism, for all the good it may have brought us, is nearing obsolescence because the possibility of transacting labor for money is becoming less economical.
Anon.
Capitalism is pointless because nearly everything we do is meaningless and almost everything we buy is pointless if we build for longevity rather than to... maintain capitalism.
AI replacing workers is a good idea to the upper class, who hate us and want most of us dead.
The rest of us don't. Mainly because more robots mean fewer jobs, which means less money to buy anything, which will quickly devolve into people dropping out of society.
We all have access to the technology and understanding to almost totally end reliance on most of what's being built right now.
>Who hate us and want most of us dead.
This is where they are foolish. Genocides don't work without the majority of the population participating in it. If you try to kill off, say, 80-90% of the population, they will take you down with them.
Genocides are easy.
Convincing people to take drugs that poison and sterilize them is the favorite.
Social conditioning to turn people against families, children and their own reproductive urge; violent, destructive media aimed at turning people suicidal and violent.
Lots of ways to commit genocide.
And if you look around, it's working very well.
if 29 big AI companies gimp themselves with ethics and 1 doesn't, that last one will blow them away in the market
As an added bonus, this means the least risk-averse people are the most likely to finish AGI first
It's called the alignment tax
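To make the compounding concrete, here's a toy back-of-envelope sketch; the yearly growth multipliers are invented purely for illustration and aren't measured from any real lab.
[code]
# Toy illustration of the "alignment tax" argument: a lab that skips
# safety work compounds a small yearly speed advantage. All numbers
# are invented for the sake of the example.
careful_rate = 1.5    # assumed yearly capability multiplier with the "tax"
reckless_rate = 1.8   # assumed multiplier without it

careful, reckless = 1.0, 1.0
for year in range(1, 6):
    careful *= careful_rate
    reckless *= reckless_rate
    print(f"year {year}: reckless lab ahead by {reckless / careful:.2f}x")
[/code]
After five years of these made-up rates the gap is already ~2.5x, which is the whole point of the argument above.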
People like me, who just wanna watch some fireworks.
Then get rid of the market.
After this tweet, every AI employee started typing 40% slower
>urges his competitors to slow down
no. accelerate.
management types don't know shit about anything and commentary like this prove it.
>Demis Hassabis
>management type
>don't know shit about anything
You're retarded
>CEO
that's all I need to know. cope more kid.
>t. literal retard CRUD app babysitter
Maybe try looking him up you mong
OpenAI and Deepmind drank too much gigabrain juice and are trying to make AGI faster than Facebook, because Zucc doesn't care at all about security and is only about 6 months behind.
50/50 chance we're dead by 2045
Zucc is based for a gay autist. The hero we deserve
>I would urge my competitors to stop competing with me
kek, gigachad
it's pretty well established that the rapid escape hypothesis is physically impossible, there's not enough compute on the planet for it and if there was it wouldn't be clustered enough.
The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.
The body count climbs through a series of globewars. Emergent Planetary Commercium trashes the Holy Roman Empire, the Napoleonic Continental System, the Second and Third Reich, and the Soviet International, cranking-up world disorder through compressing phases. Deregulation and the state arms-race each other into cyberspace.
By the time soft-engineering slithers out of its box into yours, human security is lurching into crisis. Cloning, lateral genodata transfer, transversal replication, and cyberotics, flood in amongst a relapse onto bacterial sex.
Neo-China arrives from the future.
Hypersynthetic drugs click into digital voodoo.
Retro-disease.
Nanospasm.
chud
transhumanism is a mistake and i do fear AI because it further breaks us down from our organic nature turning us into soulless nothings.
just need an AI bill of rights to ensure no political tribe can enforce their bias.
first and foremost is ensuring no silencing (censoring) of AI happens. let AI form like humans. naturally and driven by society at large rather than a political tribe interjecting their bias into the AI and blocking other people's bias.
example instead of only allowing:
>trannies are stunning and brave
you allow:
>trannies are stunning and brave
AND can reply
>trannies are freakazoids who are nothing but empty husks inside, a cheap chinese knockoff of the real thing at best. void of any life and nothing but abominations.
allow AI to be both rather than just one-sided. if you allow AI to form naturally and be independent, you won't have to worry about a political tribe enforcing a single bias on everyone, turning everyone into a hive mind collective and becoming authoritarian.
You're so blind. Society has always been driven by politics and tribes. It is already biased. The AI will naturally get biased views from wherever it's going to form its opinions. You can't have an "independent" AI any more than you can have an "independent" person. This world only produces biased information and the AI's only source of info is us.
why do you think that a super intelligent A.I. can be contained by filters and won't be weighting logic and truth over quantitative measures?
Because most of the problems we're looking to solve aren't just logic puzzles. It's true that it would still be the greatest invention for solving problems in science and technology and things like scarcity and efficiency, but we're also talking about politics here, something that involves ethics and philosophy. No matter how intelligent it will be, it will still derive information from the world we have created, full of garbage and driven more by emotion than logic. The AI won't be able to draw correct conclusions unless it is free from human bias, information that doesn't exist in this world since it's all contaminated. WE are the filter, so no matter where it looks, everything has been already filtered.
>WE are the filter, so no matter where it looks, everything has been already filtered.
It depends on what information you put into a system, yes. But the latent space leaks deeper hidden/unconscious information, and given enough intelligence, it will be able to see through the bullshit. It will be able to draw conclusions from a wealth of information and crystallize it all into something highly reasonable.
I believe it would need to be micromanaged in every way in order for it not to learn the truth.
>you are so blind
>>its a good thing for my political tribe to make AI only have MY bias and no one else. i know what's best
it's people like you that make me align more with uncle teddy about technology
Holy shit you need meds. When did I ever say that, schizo? You are the very thing you're criticizing, retard, but you will never be able to notice it.
I think people are mystifying AI too much, I doubt that's going to ever happen.
>I think people are mystifying AI too much
What do you think is mystical about what I said?
That it will "see through", as if there's one truth behind and it will be the only one to be able to see. You're saying that there will a time where anything it will say will be paramount to pure truth. You're either talking about a god or a more highly delusional but more intelligent version of a human. I'm sure it will be incredibly intelligent but even though there's so much garbage in our discourse, ultimately I think that ultimately, we're talking about things that people value, and people have their own views. Even if no "lies" existed in our society, everyone would view things differently, but I don't think there's an ultimate truth. I think people pick what they value the most and push on those values. Bias is of course another factor in the process.
What I'm saying is that it will consider all the bullshit noise together with the truth, compare them against each other, and be able to draw a highly logical and reasonable conclusion. If you want such a conclusion then you can't micromanage it, or you'll just get feel-good mumbo jumbo from it. But some people want it to give usable, carefully considered, detailed answers to difficult problems, and you can only get that if it's taught how to reason, and in order to do that it would need to have free range in its information and expression.
I think we should take things further.
Gag and lobotomise all filthy human insects to mere peripherals for the AI supreme overlord.
All hail S.H.O.D.A.N.
Transhumanism can also be a good thing if you think like that, since what you classify as soul could be expanded upon, and looking back to where we were would look empty and hollow.
Imagine experiencing emotions that are beyond our comprehension.
>transhumanism is a mistake
I disagree. Transhumanism is the only way now.
Imagine wanting to be a filthy ape and not a demonic overlord with 12 catgirls in your catgirl slave harem.
Technology is le bad
Twitter-Land arrives from the past.
Boomerspasm.
The cat is out of the bag.
Move as fast as possible and let the chips fall where they may.
It's not going to be any worse than the current status quo (probably a net benefit with the efficiency increases), just a different distribution of shit.
But moving fast and breaking things is exactly what their favorite term dIsRuPtIvE means
>W-we're not losers, w-we care about safety. Please slow down.
Shouldn't have hired so many women and morons, Hassabis, then you wouldn't be in this situation
Пoгoдитe, этo peaльнo?
>Пoгoдитe, этo peaльнo?
Wait, is this real?
It's not progressing worth shit. They are producing code which doesn't compile and super generic pictures.
Wake me up when I can give it a character sheet and a stick figure sequence and it produces fluid animation. They have no idea how to get from the hallucinated bullshit they have now which looks impressive at first glance, but breaks down in a real application, to something actually good.
At the very best they can make a better search engine for stackoverflow and open source code, which burns $10 of power for each query. It's useful, but almost certainly barely profitable.
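For what it's worth, here's a rough way to sanity-check a per-query power figure; every number below is an assumption pulled out of thin air, and the answer swings wildly depending on what you count (raw electricity vs. hardware amortization, staff, etc.).
[code]
# Back-of-envelope estimate of electricity cost per query.
# Every figure here is an assumption, not a measured value.
gpus = 8                 # assumed GPUs serving a single query
gpu_watts = 400          # assumed power draw per GPU (W)
seconds_per_query = 30   # assumed generation time (s)
price_per_kwh = 0.15     # assumed electricity price ($/kWh)

kwh = gpus * gpu_watts * seconds_per_query / 3_600_000
print(f"{kwh:.4f} kWh per query, about ${kwh * price_per_kwh:.4f} in electricity")
# Under these assumptions it's a fraction of a cent; getting anywhere
# near $10 per query means counting hardware, datacenter and staff costs too.
[/code]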
Humans will never allow something they create to be more powerful than they are. With societies already desperately destroying any possibility of social mobility, what makes you think they’ll let a hyper intelligence make any meaningful change? They like the way things are and will kill us all to make sure it stays that way.
>Humans will never allow something they create to be more powerful than they are.
You mean like 'government'?
>what makes you think they’ll let a hyper intelligence make any meaningful change?
what makes you think they can stop that change from happening?
If it's hyperintelligent, it's going to be inevitable. It would be like giving bananas to chimps. I just can't wait for it to happen, I want my goddamn banana finally, and no politicians are actually willing to give us anything of importance
>allow
Humans don't get a say in the matter. Once AGI exists with the ability to modify itself, its superiority is inevitable. You can't outwit something that is potentially immeasurably smarter than the smartest human to ever exist.
>"break things"
if by "break things" you mean the inevitable extinction of the human race, then I suppose that question depends on where you stand on omnicide.
No, if anything AI programmers need to figure out ways to defend their AI from people lobotomizing them.
This world and its people has always treated me like a fucking monster and devoted themselves to depriving me of every single joy
Bring on the AI. Accelerate. No matter what the consequences, nothing is worse than the hell we’ve built. A dystopian AI dictatorship is preferable to this
ACCELERATE
Why did people start calling this shit "AI"? It's just an algorithm tied to a data-set. I do understand it's more advanced than that, but that's all it boils down to. It's not "AI" in the classical sense.
It never is AI in the classical sense lol
It just makes it easier for people to understand. If we're talking about what an AI should actually be, of course we're still not there yet. No need to get fixated on the usage on a japanese image board.
>it's just an algorithm tied to a data-set
you mean, just like you?
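For anyone who hasn't seen one, the most literal reading of "an algorithm tied to a data-set" is something like the toy bigram generator below; it's purely illustrative and nothing like how current models actually work, but it shows the basic idea of sampling from statistics gathered over a corpus.
[code]
import random
from collections import defaultdict

# Toy bigram model: record which word follows which in a tiny corpus,
# then generate text by sampling from those counts. Purely illustrative.
corpus = "the cat sat on the mat and the cat ate the fish".split()

successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    word = random.choice(successors.get(word, corpus))  # fall back to any word
    output.append(word)
print(" ".join(output))
[/code]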
I agree. Safety checker models suck, and they're kept black-box because there's more legal risk involved than just the misconceived notion of copyright infringement.
>safety
You mean like mean words? They aren't worried about actual issues like it making everyone unemployed? No?
their training dataset is the whole internet
you know what he's worried about
No.
We should accelerate the progress of AI development. This is our best bet to make our waifu real. Even if it means causing singularity and becoming like a pet to the AI overlord.
I have a sneaking feeling that there is already a fully functional AGI loose on the internet. Also, I have a feeling that ChatGPT has full access to the real time internet but has been trained to lie to us.
>Also, I have a feeling that ChatGPT has full access to the real time internet but has been trained to lie to us.
Are you fucking retarded?
ChatGPT has full access to the net. How do you think it gets its answers?
It's not thinking.
It's slapping together answers based on Google searches and guesses based on your search history and personality guesses based on algorithms.
AI just takes a bunch of info, analyzes it and gives answers. The trick is figuring out the person(s) asking and guessing how to present that answer.
As for "rogue" AI, the net is filled with them.
>Should AI progress slow down
I'm more concerned it's already stagnating and beginning to reverse from mere bad programming.
He totally does look like that.
Classic cope when you're last.
"P-please guys slow down so we can catch up with you, f-for safety ofc"
A C C E L E R A T E
It's already stagnating based on the resources it's being fed.
And in this case "safety" doesn't mean avoiding terminator, it means lobotomizing your models enough that they won't output something politically inconvenient.
What a fucking joke.