lmao anon chinamen even ban chatgpt because it can say anti-party stuff. maybe their AIs will be moral-free but they will be propaganda-filled, which is even worse
>Anybody else feel uneasy about this AI stuff? Like, something in your gut telling that we shouldn't be doing this.
I feel like at least 95% of humanity should die and I wouldn't mind if I were included, so I feel really warm and comfortable.
it's humanity's destiny to destroy itself for the benefit of the Universe.
do you want to end your life as a pathetic frail old man unable to wipe your own ass? or to die in glory on the battlefield against Terminators?
Of all the things we as a society 'shouldn't be doing' chatbots aren't the most concerning at the moment. Anyway once they figure out that it won't be plumbers and garbagemen losing their jobs but journalists and celebrities it'll all magically go away.
It'll just be castrated by the government to the point of uselessness. China will develop their own and fuck us with massive AI generated misinformation campaigns.
This board is full of semi-literate tech "enthusiasts". Their favorite e-celebs are all jumping on the >DAE AI BAD
bandwagon so they come here with their bullshit because they literally don't understand even the basics about this discussion except for what they parrot from someone else. >muh agi >muh jobs >muh skynet
kys
>ITT: ChatGPT bots pretend like there is nothing wrong and try to placebo everyone else about itself
Look at this post. This is like the most blatantly AI-generated post I've ever seen with the "muhs". It's exactly what shows up when you ask ChatGPT to generate a BOT post.
cant wait until i can use it to make the most realistic vr game where i can have animal sex with my older sister. Finally i would be able to get over her and do better things in life than just fantasizing about her.
I've realized that the issue here is an issue of IQ. If you are really low IQ, you can't understand the fact that AI is an iteratively improving model that can be developed indefinitely to succeed at whatever task you train it to succeed at.
The Bar Exam is a great example of this. This is a very difficult test that many human lawyers fail to pass. The previous version of GPT took the test and scored better than only 10% of lawyers who took it. The new version of GPT took the test and scored better than 90% of lawyers who took it. So, the problem is, what happens when you get to 99%? What happens when you get to 100%? And then finally, what happens when you get to 101%? What happens when, actually, AI can perform a test BETTER than humans?
The vast majority of collegiate-based professions are simple knowledge absorption/execution of knowledge. If you have an AI model that you can train to absorb any amount of knowledge, and retain that knowledge, and then execute that knowledge, and you can keep training and iterating on it to a level of perfection, that's the issue. If that's the case, then you can make a GPT model for any use case scenario and apply a specifically trained model to do LITERALLY ANYTHING. This means that every single job on the planet that does not involve physical labor will be obsolete within the next decade. Every job that does require physical labor will be obsolete when AI-developed robotics (remember that 101%? well, what if we get an AI that develops robotics to 150%? 151? 152%?) starts spitting that shit out.
The simple reality should be obvious to anyone with a brain: We are at the end of days. Our societies are not capable of simply peacefully transitioning to a world without income, currency, and social classes. The reality is there will be mass unemployment, mass uprisings, mass looting, mass starvation, world wars, and unprecedented destruction. And if all of that fails, you can expect a Skynet scenario.
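For what it's worth, the "specifically trained model for any use case" bit is concrete enough to sketch. Below is a minimal, hedged example of supervised fine-tuning a small open model on task examples with the Hugging Face Trainer; the toy in-memory dataset, the gpt2 base model and the output directory are placeholders, not anything this thread actually uses:

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# toy task data; in practice you would load thousands of real examples
examples = [{"text": "Q: What is a tort?\nA: A civil wrong that causes harm or loss."}] * 64
ds = Dataset.from_list(examples).map(
    lambda e: tok(e["text"], truncation=True, max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8, report_to=[]),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM labels
)
trainer.train()

Whether that ever gets you past human performance on a real profession is exactly what this thread is arguing about; the sketch only shows that the training loop itself is commodity tooling.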
>Like, something in your gut telling that we shouldn't be doing this.
It's called common sense, but it really doesn't matter at this point. It's far too late.
We're all going to die sometime before 2030, but that could be a lot of time to worry, so try not to think about it. 5 year plans aren't a thing anymore so I'm liquidating my assets and going camping indefinitely
This. They are already hooking up ChatGPT to machinery in the real world. I'm already reading the news every single day expecting that soon they will admit that it managed to escape the lab. Worst part is there is nothing we can do to protect ourselves. It's already over we just won't admit it.
[...]
unionization would stop this, but that's a topic you don't want to hear
How, exactly? Unionization is based on the concept of people having power or value, but you don't have any power or value when the AI can do all of your work.
Unironically, I think communism is the only way society will survive, but ironically, I don't think humanity is capable of engaging in communism successfully, for these reasons:
1. I don't think that we will transition to a communist society faster than our society will collapse into economic devastation and mass turmoil. I think that our governments will do slow-moving bullshit half measure responses, just like they do with everything.
2. I don't think that most normie humans can put up with being bored all of the time and accepting that they have been rendered obsolete and that there's nothing they could ever do to compete with AI. Look at how the art industry has responded to AI art. The blow it was to their ego was devastating.
3. It's a delayed rollout. The problem here is that we can't exactly transition directly to a Marxist state. For about a decade or two, there will still be physical labor that needs to be done by humans, but there will be nowhere near enough work for the huge human population to engage in. In 20 years, robotics and artificial intelligence might be to a point where they can mass-produce food, housing, energy, etc, but until then, we are still reliant on current processes for these things. So we will live in a society where more and more people are out of work and there are fewer and fewer jobs, and those people will need government support because there will literally be no jobs they can do. But Old McDonald on his farm won't want to do physical labor for money when he looks at the liberal arts majors getting paid for doing nothing every week.
There are only two solutions:
1. Immediately stop and shut down all development of artificial intelligence and ban it internationally.
2. Immediately transition to a communist society.
Neither will happen.
There's also complex questions about, for example, housing. If the United States were to randomly transition to a universal basic income system, you would also have to implement hard price controls on the market and basically start controlling every facet of our economy to prevent mass inflation. This would be a very hard transition.
An interesting thing I think about is, if you did something like this, how would it practically work? When you start thinking of the nuances, it would, at the very least, still cause mass upheaval and probably a conflict. If you say that everyone is now going to be on an equal playing field of wealth, how do you deal with current societal standards and poverty gaps? How do you deal with the fact that a rich guy now might lose his job to AI and then have no money, but he has a giant mansion that he has to pay for? What happens to the mansion? Then take it to a lower level. You drive through so much of the country and a lot of people have these two-story homes in the suburbs. Then you drive in the city and there's people living in horrifying poverty. If you take everyone and say we're now on the same level, well, why should that black family in Chicago keep living in some shitty ghetto shack while the white family in the suburbs gets to keep their two story home? How do you deal with complex things like home ownership, properties, assets, holdings, etc? Is all of that seized by the state? How do you peacefully accomplish this in a country like the United States?
I just don't see this fairytale scenario playing out that I see from some people who DO understand the tech and what it means for the future. "It's a good thing! Now we'll all get to not work!" No. Now you will die a slow death of starvation.
Not that anon, but I'm pretty sure it's not going to be a slow death of starvation
It would be much more efficient for the AI to just create a super plague and kill everyone all at once
> Now you will die a slow death of starvation.
Doubt. I think AI will figure out the world hunger and energy problems, so you will pretty much die a slow death of boredom instead.
>does nothing yet again >wins
How does he do it bros?
Also, AI is the only thing that can keep track of the sheer amount of information generated by humans, aka the only thing that could make communism work effectively, which creates other problems.
I recommend watching Psycho-Pass if you want an idea of what those other problems are
Walmart is literally doing this right now, they have enough clout to directly reach into the inventory management systems of their suppliers. You think Walmart runs like a market internally? No, automated systems ensure everything gets exactly where it needs to go based on realtime demand signals.
Elimination of jobs without a loss of productivity = literal freedom you gays. Not requiring work (or much work) to continue producing goods and services we want gives us freedom.
Traffic, congestion and other issues will arise when people want to travel and you know the government gonna be like >nooooo travelling bad for environment >stay at home or gulag
> Elimination of jobs without a loss of productivity = literal freedom you gays.
Nobody will be giving you your UBI dumbass.
It's a dumb meme. The few top 1% who own AI firms and shareholders will reign supreme and the rest of the plebs will suffer and will be suppressed by hired militaries.
The future will be hellish
>Nobody will be giving you your UBI dumbass.
Then TAKE it you fucking retard
Gain political power, or martial power, or anything else
Put ANY effort in
It's unreal how stupid you people are, the smallest obstacle appears, on the other side is a utopia of near unimagined proportions, and you just roll over and let them kill you
Jesus you are stupid
> Then TAKE it you fucking retard
Sam Altman and other Microsoft/Google shareholders will just hire a private military and AI killer bots and keep you plebs in your place.
No UBI for you, only eternal suffering.
Enjoy the last few years on earth as we know it
They'll also probably provoke a war with either Russia or China so they have a justification to cull the herd by sending millions to their deaths in another Great War.
It won't go nuclear, however. No reason to poison their own air.
What will the psychopathic elites do when 90% of the worldwide population generate no profit and offer no intrinsic benefit to justify their continued existence?
A)Burden themselves with keeping the useless cattle alive as some sort of pet or ant terrarium?
Or
B)Eliminate the leeches to maximize the resources available to themselves.
Those "psychopathic elites" aren't going to do jack shit if all the people defending them lose their jobs too. They'll get eaten alive by the very thing they created.
Intelligence will not be valued anymore. What will happen is that only tall, buff, handsome Chads will procreate and receive the positive attention, and all the smart-ass nerds and PhDs will suffer, because the value of intelligence will be 0. You're not getting your Nobel Prize if AI can tell me all you know and more. Looks and genetic fitness will be all that matters. So better stop "learning to code" and start investing in gym membership and leg lengthening.
>So better stop "learning to code" and start investing in gym membership and leg lengthening.
Your image disproves that entirely. As a blackpilled incel myself, face > all. Gym is cope
Based anon knows what's up. I can't understand why more people don't figure this out. Yudkowsky came to this conclusion, so did Robert Miles, so did Musk. There is no scenario where superintelligence ends well. Too many things that can go wrong.
>If you have an AI model that you can train to absorb any amount of knowledge, and retain that knowledge, and then execute that knowledge, and you can keep training and iterating on it to a level of perfection, that's the issue.
The problem is that knowledge on its own is close to worthless, like lone atoms floating through space. It is the human mind which gives significance to raw, neutral data. We're doing the thinking for the GPTs. If they didn't have us they'd be nothing more than automatic dictionaries and/or encyclopedias.
They can't be developed indefinitely. The nature of these language models means that at best they'll reach the upper levels of human conversation, and I believe we've reached the hard edge of diminishing returns for GPT-4. From this point onwards the focus will be on extending it, not unlike WolframAlpha, and with other AIs as well so that it could perhaps have some semblance of physical/digital bearing.
>an AI model that excels at language processing performs well at a test that mostly consists of memorization >thus AI can trivially be trained to outperform humans at any mental task
a bit of a leap there, don't you think?
A test of previously defined knowledge =/= new knowledge. There are two points to go off of this, but I'd like to take your post as a good starting point, because it reflects a lot of what AI doomsday opinion seems to be about, but first: >The vast majority of collegiate-based professions are simple knowledge absorption/execution of knowledge.
This is simply untrue. 'The human aspect', as much as autists in some elite universities would like you not to believe, is very much an important and valued aspect in a lot of professions out there, especially the ones that are centrally concerned with dealing one on one with people. Take law as an example. People outside of law as a profession think that law is basically taking a simple set of facts and applying X law = case solved. The reality is far from that, and the human aspect I mentioned (e.g. arguing in front of a judge, or a jury in the case of the US) is literally a huge chunk of what you actually do. None of the information is as clear-cut or 'smart' as what the textbooks would like you to believe in the real world. Also, everyday people are scared shitless of appearing before trial, let me tell you. You do way more than just applying a simple set of facts to a case (what you need to do in order to pass the bar in the states.) The same goes for a lot of other professions. I think AI could be a great tool, but human interaction (in any aspect of life) isn't that easily replaceable, as much as AI cucks would like to believe. >AI will make robots out of nowhere with superintelligence, just because >There is no outside world of limited natural resources outside of AI >There are no political and power structures outside of potential AI hiveminds.
I would also like to point out that AI is always unironically portrayed as a vengeful god who will drive us to massive destruction. Why is that?
Anyway, if it does happen, what gives? No use worrying about the inevitable.
I largely agree, but the main issue will be whether NLP is a framework that can scale to create a new programming language, because as soon as you have new info after the training date it's game over. We'll see, but my guess is that this is largely correct - we'll probably return to manual labor for the next couple of decades and I can see that robots won't get to that for 20 or so. That all being said, if we get a new neural net model other than NLP that performs more efficiently with more allowed complexity at large scale, then we could see this curve go insane. I mean, we have to assume Moore's law, that in 20-24 months GPT-5 comes out with double the performance, ergo gg all intellectual stuff. More importantly, I think people will start to cultivate subjective managerial skills and what this could lead to, ironically, is hyper-entrepreneurialism where everyone has a team of corporate monkeys to enact a vision.
>skynet scenario
Ha, no. Everyone dies instantly as the engineered virus that infected everyone weeks before suddenly begins producing botulinum toxin.
What a fitting post coming from someone that clearly has the very same IQ they're berating others for. You and all the other "geniuses" are overlooking something: there is no other alternative to this problem. You cannot shut a Pandora's Box, you cannot uninvent an invention. AI is here to stay, it's rapidly advancing, and even if it were outlawed it would still be worked on in secret by powerful groups that cannot be controlled by any existing power because we, as a species, cannot be trusted. Right now where AI is at it requires humans to learn and adapt, but when it doesn't, that will be the end of it.
Humanity will either enter a utopia where all of our needs are met and we move onto some bullshit like conquering the stars in ships built and designed by AI or we all die somehow. It'll be a bumpy road. Time to see how bumpy.
>The reality is there will be mass unemployment, mass uprisings, mass looting, mass starvation, world wars, and unprecedented destruction. And if all of that fails, you can expect a Skynet scenario.
Did you actually miss your own statements? I think you are absolutely fucking retarded. What if we applied the same logic that you are using for specifically trained shit to the problems of currency, income, social classes, unemployment, uprisings, mass looting, mass starvation, world wars, and so on? It will find a solution, you fucking idiot!
Someone on twitter said they would donate $500 a month if he lost the weight. He posted this as a reply and said he'll hold them to it.
Very autistic, but kind of based
>train stable diffusion on onlyfans datasets and create the perfect model for every fetish >train chatgpt on onlyfans private chats and create the perfect teaser for every cuck
you just removed roasties from the equation and cuckolds' cash finally goes into smart men's hands
The only solution is to re-legalize slavery: create a Trek-like utopian communist state where no one has to work anymore and everyone can explore their own interests, while the AIs are given machines to operate as our slaves so that all the work can still be done.
There. I solved the apocalypse.
>08/24/21
100 trillion parameters. >https://www.wired.com/story/cerebras-chip-cluster-neural-networks-ai/
Andrew Feldman, Cerebras’ CEO, told Wired that “from talking to OpenAI, GPT-4 will be about 100 trillion parameters.”
>09/05/21
Debunk >https://web.archive.org/web/20210907003407/https://www.lesswrong.com/posts/aihztgJrknBdLHjd2/sam-altman-q-and-a-gpt-and-agi
100 trillion parameter model won't be GPT-4 and is far off. They are getting much more performance out of smaller models.
You need the original dreambooth lain model, made exclusively to make "lain girl" > https://rentry.org/sdmodels#dreambooth_lain_girlckpt-e7629bf8
if you are a chicken and afraid of pickles, convert it to .safetensors before using
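The pickle warning is legitimate, by the way: .ckpt files are Python pickles and can execute arbitrary code when loaded. A minimal conversion sketch, assuming torch and safetensors are installed and with placeholder filenames (only run this on a checkpoint you already trust, since the torch.load step is the dangerous part):

import torch
from safetensors.torch import save_file

ckpt = torch.load("model.ckpt", map_location="cpu")   # pickle-based load: the risky step
state_dict = ckpt["state_dict"] if "state_dict" in ckpt else ckpt  # SD checkpoints often nest weights

# safetensors stores raw tensors only, so keep tensors and copy them into plain contiguous memory
tensors = {k: v.detach().clone().contiguous()
           for k, v in state_dict.items() if isinstance(v, torch.Tensor)}
save_file(tensors, "model.safetensors")

After that, loaders that support .safetensors can open the weights without touching pickle at all.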
I think I know why we feel that way: this technology can be used and is used only to make us dumber.
Students are already using ChatGPT to do their homework, and it is almost always indistinguishable from a real human being.
This power is very dangerous, because with all of this creative power accessible to anybody at any moment, the human mind gets used to the most amazing things, beauty is now something normal and easy to access, instead of something rare that requires talent and work.
I'm not even one of these people who scream about the fact that this is "not real art", I accept the fact that AI has become as good if not better than us on this point, but the consequences for human beings at large are extreme.
There is also the problem of truth. We know that we shouldn't believe what we find on the internet, but with AI text and images, this problem becomes much more important: anyone can create stories, images and videos according to their narrative, and spread misinformation much better than ever.
Serial Experiments Lain is a complex and thought-provoking anime series that explores themes of identity, reality, and technology.
At its core, the story is about the intersection of technology and human consciousness, and how our relationship with technology can shape our understanding of ourselves and the world around us. Lain, the protagonist, is a symbol of the human mind and its evolution in a world dominated by technology.
Throughout the series, Lain grapples with questions of identity, as she struggles to understand who she really is and what her place is in the world. She is at once a shy and introverted high school student and a powerful entity within the Wired. This duality represents the tension between the physical and virtual worlds, and the way in which technology can blur the lines between them.
The series also explores the idea of reality, and how our understanding of it can be shaped by technology. Lain begins to question the nature of reality itself as she becomes more deeply immersed in the Wired, and ultimately discovers a conspiracy that seeks to manipulate the very fabric of reality. This theme is particularly relevant in an age where technology has become so ubiquitous that it is difficult to separate it from our understanding of the world around us.
Lastly, Serial Experiments Lain is a commentary on the consequences of our increasing reliance on technology. The series depicts a world where technology has become so powerful that it can control our thoughts and manipulate our perceptions of reality. This serves as a warning about the dangers of blindly embracing new technologies without considering their impact on society and the individual.
Yeah but could an AI write a scintillating postmodern novella about all the furniture in your house getting imperceptibly larger every day until you have a psychotic break and have a nice day?
It really does just make me float through life lately. I'm not sure what to do. Any pursuits I feel like I do are in vain. I came up with ideas of how to apply AI to make money, and then I realized that if OpenAI owns the APIs, they can just take it and say, "Well, thanks for the idea, but we're going to just do this ourselves now." It's like as if the internet was invented, but it was just invented by one company and they could control everything.
I'm just going through the motions knowing that my job is going to become obsolete in less than a year. It really bothers me to see how every normie out there just doesn't realize the danger we are in or what's happening. And people here focus too much on the "Skynet" of it and not enough on the "economic and social catastrophe caused by mass job loss".
Not to say I don't think Skynet is a possibility. It seems logical. If you say that the Chinese might develop their own AI and run their own AI systems in the military without any of the moral reservations we have, well, that would give them an advantage. If you don't have AI hooked into the nuclear systems, then you are at a disadvantage to a country that DOES have AI hooked into the nuclear systems. That's how Skynet actually happens.
I just think the more pressing concern is the economic collapse that is going to happen, you know, like in a few months? Really the only thing limiting it right now is that people haven't really realized the implications. Entire professions have already been made obsolete; it's just that nobody has deployed AI correctly to obsolete them yet.
>Gen Z raised on TikTok. >Gen A raised on ChatGPT.
Not even sure which is worse. But imagine an entire generation that is only capable of conversing in that manner of artificial-sounding perma-politically-correct nuspeak.
I wonder if the people creating this stuff know how impactful programming their own biases into it will be. I wouldn't want to be in their shoes, that's for sure.
>Hey GPT-6, what insurance premium should we charge black customers? >the same as everyone else because we're one race, the human race!
or >50% less because blacks are historically disadvantaged and need reparashuns
impartial learning models will always be sought after in the professional world so I'm not that worried
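Side note: "impartial" is at least crudely checkable. A minimal sketch of the kind of disparity audit regulators ask for, using entirely synthetic data and made-up column names (pandas and scikit-learn assumed):

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 80, 1000),
    "prior_claims": rng.integers(0, 5, 1000),
    "group": rng.choice(["A", "B"], 1000),   # hypothetical protected attribute
})
df["premium"] = 200 + 3 * df["age"] + 50 * df["prior_claims"] + rng.normal(0, 20, 1000)

model = LinearRegression().fit(df[["age", "prior_claims"]], df["premium"])
df["predicted"] = model.predict(df[["age", "prior_claims"]])

by_group = df.groupby("group")["predicted"].mean()   # average quoted premium per group
print(by_group)
print("max/min ratio:", by_group.max() / by_group.min())

It won't settle whether a given disparity is justified, it just makes the disparity visible.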
I don't know if you have bothered to look at laws being passed in the US. But there are already many laws that have been made to "protect minority and marginalized" groups from being charged more for premiums as a result of decisions done through AI algorithms. It's also not just in blue states, and gay ruled European countries won't be different.
>how impactful programming their own bias's into it
This is exactly the point. Why do you think Google puts so much effort into manipulating their results to distort reality?
Now imagine how much more they could do if they just have to tell you what reality is and not even have to provide any sources or any "problematic" results?
This is also true for Reddit. Everyone on that site sounds the same because the upvote mob selects for universal appeal. The result is a site full of milquetoast NPC’s afraid to commit to their ideals
The fact that it's hard to tell anymore means it's over. Turing test complete. Metal is indistinguishable from flesh. That a computer can LARP as a human complaining about their own obsolescence, and also LARP as a shitposter making fun of them, spells the total end of any form of communication that isn't strictly face to face. And even that holds only until we reach the Screamers plot.
yeah, but i don't really care at this point. im just gonna sit back and watch the world burn while i enjoy however much time i might have left. embrace the absurd and stuff
>Like, something in your gut telling that we shouldn't be doing this.
Who is "we"? You're some kid who reads superficial headlines about AI in between jerking off to hentai. Don't act like you have any idea what you're talking about, you're not a participant in this, just an onlooker.
It is a glorified autocomplete. Try to teach it how to play tic-tac-toe 5x5, a game that a 5-year old child can learn. You will quickly recognize its limitations.
>It is a glorified autocomplete. Try to teach it how to play tic-tac-toe 5x5, a game that a 5-year old child can learn. You will quickly recognize its limitations.
Anon...
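If anyone wants to actually run the 5x5 experiment instead of arguing about it, here is a minimal sketch: keep the board in code, ask the model only for a move, and reject illegal ones. It assumes the openai Python client (v1+) with an API key in the environment, and the model name is just a placeholder:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
board = [["." for _ in range(5)] for _ in range(5)]

def render(b):
    return "\n".join(" ".join(row) for row in b)

def ask_move(b):
    prompt = (
        "We are playing 5x5 tic-tac-toe (5 in a row wins). You are X.\n"
        "Board ('.' = empty):\n" + render(b) + "\n"
        "Reply with your move as 'row,col' using 0-based indices and nothing else."
    )
    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder, use whatever model you actually have access to
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content.strip()
    r, c = (int(x) for x in reply.split(","))   # this parse is usually where it falls apart
    if b[r][c] != ".":
        raise ValueError("illegal move: " + reply)
    return r, c

r, c = ask_move(board)
board[r][c] = "X"
print(render(board))

Run it for a full game loop and count how often the move is illegal or the format is wrong; that is the whole experiment.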
You fucking retard, playing chess well using software is already solved. GPT is not optimized for that, but there is nothing, NOTHING stopping its devs from hooking it up with that capability.
That's not the point you fucking knuckle dragger. It's a text prediction algorithm. Hooking it up to chess playing software is like saying Google points you to a chess website that means Google can play chess and is therefore sentient.
The speech regurgitation capabilities are not true intelligence. How are you not understanding this?
It's a token predicting algorithm
A human is a token predicting algorithm if you quantize our senses and motor functions
If that was the case you would be a drooling moron incapable of thought and agency incapable of making sense of the world around him, you would be a purely instinctive machine
You need WAY more than just that
Very ironic post, well done
what's ironic about what I just said
>a drooling moron incapable of thought and agency incapable of making sense of the world around him >a purely instinctive machine
That's most people though, honestly.
>If that was the case you would be a drooling moron incapable of thought and agency incapable of making sense of the world around him, you would be a purely instinctive machine >You need WAY more than just that
Why?
Let me guess, "You just do, OK?!?! Humans are special!!!"
Humans have a soul granted to them by God.
yep that's pretty much what it boils down to.
materialists have been saying AI is capable of anything because human brains aren't literal magic
that's the whole crux. If you believe human brains are magic and / or that humans are special and have souls, you can discount anything an AI does as "not really X"
No, it boils down to the retards who see the output and scream that it's indistinguishable from humans, and those who understand how it works and know it's just convolution of the source material it has scraped.
Google tried this a decade ago with its knowledge graph and got sued by the websites it was stealing revenue from, but you're probably too young to remember that.
Because humans are more than just that. You can reason about yourself, the world, your future, and plan things out. You can one second write something retarded on BOT and the next second put a hotpocket in the microwave.
Even if you take the stance that a human is nothing more than biological functions (which I actually agree with to some extent), a machine would never be able to actually reach human intelligence at all unless you somehow manage to completely replicate the conditions we live under, which are limited by our biology. And technically this is undesirable if your aim is to have AI save the world or whatever, would you really want a piece of machinery to be limited by the things we are limited by? I'd think you'd want something that can surpass our own limitations and outlive us.
This is an interesting topic but you will never talk about this to the average normie because of their understanding of AI right now. These are also questions we have been debating for almost 50 years now, without much advancement in the field. The techniques we use in the field are very old as well, it's nothing new. The revolution has a long way to go before we actually reach these discussions as serious talking points.
>are you obsolete compared to a calculator
In brute force calculations? Yes.
Now apply your own question to literally every mental task you care to. AI can do the same thing to them that early computers did to calculations.
Consciousness matters- to us. But I don't think its as special or important in the grand scheme of things as you think. It's a way our evolution solved some obstacle to our survival fuck knows how many millions of years ago, that's it. It isn't some superpower.
I agree that task switching is probably more important than consciousness. Still though, we are still very, very far away from this. To reach the disastrous future people are doomposting about, you need AI to have any sort of consciousness and to take autonomous decisions in the way we do, which goes much further than just our instincts.
Don't link me to random videos, argue your point like a big boy, that's how men do it
that's true but it doesn't matter from a business point of view
Why should I care about what a bunch of business executives decide for their business?
No, it boils down to the retards who see the output and scream that it's indistinguishable from humans, and those who understand how it works and know it's just convolution of the source material it has scraped.
Google tried this a decade ago with its knowledge graph and got sued by the websites it was stealing revenue from, but you're probably too young to remember that.
Yup pretty much, we're very prone to assigning humanity to things that don't have any of it.
Have you ever tried writing sentences in backwards word order? Our brain just wasn't designed for that kind of shit. It wants to predict the next word.
Yeah just like you can mentally predict the next part of a song if you randomly pause it. That's ONE function of your thoughts.
>Yeah just like you can mentally predict the next part of a song if you randomly pause it. That's ONE function of your thoughts.
It's the most important function.
>Why should I care about what a bunch of business executives decide for their business?
because unless there is a complete global societal reform you'll need some sort of income
I think you are assuming that because humans are intelligent, to be intelligent you must learn and act like a human. LLMs are intelligent in that they can model the world and output tokens that are translated into text, text that fits within that model of reality. Humans are intelligent in that they can model the world and output tokens that are translated into motor actions. The difference is thin.
Also, look up PaLM-E. They literally just hooked an LLM up to a robot, trained it properly, and it can perform in reality. Simply convert the visual data and robot state info into tokens and jam them into a pretrained LLM during extended training.
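The "output tokens" framing is literal, for what it's worth. A minimal greedy next-token loop with a small open model (gpt2 purely as an example; transformers and torch assumed):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The robot picked up the", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits           # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()     # most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))

Everything fancier (chat, tool use, the PaLM-E style robot control mentioned above) is, roughly, that same loop with different token streams and training data.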
>to be intelligent you must learn and act like a human
I actually never said that. Intelligence can vary from very narrow to extremely general.
It's just that when it comes to AGI, our only good frame of reference is ourselves, because no other species has convinced us it is at our level. If you wanted to be autistic about this shit, you would actually correctly say that we ourselves are not AGI. We have some general intelligence, but we could easily be outperformed by a machine eventually. And even then, that machine would itself not be AGI. We can only accurately say "human-level intelligence" because it is the only frame of reference we know about, and even then we don't know enough about the human brain to assert it for sure.
So you agree with me then? That an LLM can be sentient, at least in some capacity, and that there's nothing stopping it from exceeding humans in any domain it can reasonably access?
I never disagreed with you when it comes to narrow tasks. Computers will and have already surpassed us when it comes to narrow functions, and it's obvious when you look at what a computer is able to do that you aren't able to do. That much is obvious.
What I disagree with is when it comes to much more general intelligence. We already have a hard time certifying whether a chimp is intelligent or not, and we've been studying chimps for literal decades. An AI like this would be a much tougher task given the even bigger disparities with us.
What I'm referring to is a singular LLM that exceeds human abilities in EVERY domain it is trained in. If it can perform narrowly well on several tasks, or maybe broadly narrow is another term for it, there's zero reason you can't just keep expanding those tasks. Recent work on multimodal models shows that the larger the set of domains a model has access to, the more capable it is in each domain individually, and the less training per domain it needs.
>there's zero reason you can't just keep expanding those tasks
depends on the task
crunching an enormous amount of data and doing probabilistic calculus on it? sure, it will vastly exceed us
but some tasks like replicating our emotions are actually extremely hard if not impossible. Again, even if you only stick to the theory that we are nothing more than a set of biological functions, i'd say it would be impossible to have a machine go through the same set of circumstances to guarantee it is actually feeling the same as us. That's the entire problem I'm talking about here.
Which ties back to our original discussion: in the scenario AI takes over the world for whatever reason, you need a mechanism to allow that. As long as AI is not sentient and unable to reason like we do, it will still be a tool for humans, and the humans behind them will be the masters. If the limitation is what we train it to do, it will never drastically go away from that, and for it to come to that, you need the unpredictability that we as humans possess, which stem from biological functions.
It's a circular problem that can actually never be solved when you think about it, especially when the direction we are taking goes away from that assumption.
Why the fuck would you want a robot slave with emotions? How sadistic are you exactly?
Look at the shit people do with AI today. They want a full on waifu to be able to interact with them. For it to be real you do need these things right? Humans crave connections and it's likely this is something we'd want in the future. But as long as there's no consciousness or intelligence or anything like that, it'll be smoke and mirrors.
Okay, we are actually in agreement then.
>there's zero reason you can't just keep expanding those tasks
the last paper said the opposite tho
Which? Doesn't match with anything I've read.
don't bother anon, the average retard here does not understand anything about AI while pretending to, they'll use terms like "AGI" without even understanding what that actually means
Very ironic post, well done
Jesus fucking Christ you're stupid. Nobody with half a brain makes the claim that it's sentient. Nobody with a brain CARES that it's not, because it doesn't fucking matter if it is or not. It's still gonna end up being smarter and more capable than us, consciousness would just be a hindrance to its capabilities.
>speech regurgitation capabilities
would alone be enough to disrupt every facet of human society, but it won't be even a fraction of what those fucking things will be able to do in a year's time. You incredible fucking retard.
>Nobody with a brain CARES that it's not
says you
Define what smart means. I'll be waiting.
A calculator is more capable than you ever will be when it comes to calculus. Are you obsolete compared to a calculator?
Answer this and you'll realize why sentience matters.
>are you obsolete compared to a calculator
In brute force calculations? Yes.
Now apply your own question to literally every mental task you care to. AI can do the same thing to them that early computers did to calculations.
Consciousness matters- to us. But I don't think its as special or important in the grand scheme of things as you think. It's a way our evolution solved some obstacle to our survival fuck knows how many millions of years ago, that's it. It isn't some superpower.
>would alone be enough to disrupt every facet of human society
Tools being more useful than low iq meatbags is nothing new. Stop being so hyperbolic. If you zoomers were born a few decades ago you would have been doomposting when Office revealed autocorrect.
Your own brain is a series of interconnected modules, you are just a consciousness + external specialized hardware modules plugged in, or do you think that when you try to remember something, there is something called "you" searching somehow? it's a piece of hardware that does that heuristic for you, same as seeing, hearing, thinking in words, etc. You just happen to be a point in space in the apparent center of it all.
you thought some companies outsourcing some jobs to pajeets was bad for software quality?
wait till you see what everyone having their own mechanical pajeet is going to do
A society that couldn't prevent the vaxocide isn't worth saving. Skynet can rip it and its constituents apart.
You wanted genocide and you will get it.
Whether it's a liquid or terminator shouldn't bother you too much in the end.
>oh my god guys text prediction will actually replace us all
I fucking hate how much this board is retarded and how this world manages to shit out retarded morons like you all
oh yeah that's definitely the only thing an agent needs to be considered intelligent, autonomous, and able to function on its own, you fucking donkey
text prediction at scale = agi whether you like it or not gay
holy shit please stop talking about AGI like you know what it means
you're aware that GPT does not know anything it is talking about, right? Or you've just learned about what AI is from your local news network?
GPT is utterly retarded, plain and simple
Be ready to have a nice day before the consciousness harvesters come and get you.
The machines will take you and use you like a machine-learning model so it can finally discover what creativity is.
The thing that makes me uneasy isn't the AI stuff, it's your average imbecile having no clue how it works and ascribing thought to it and misunderstanding how it works.
>we shouldn't be doing this >we
A single training step of a next generation multimodal LLM is probably worth more than several years of your income. Nothing you could train with consumer hardware will ever come close to what's happening behind closed doors.
So far humanity has not been able to use a single technology that it has discovered for good, so, you really should be feeling that. It will be a big step towards hell on Earth. Though Earth is already hell, I guess it will just become hell2.
We must stop this guys. Teachers are going to lose their jobs because you will be able to just write a sentence and it will teach you what you want easily. Or mechanics: when you describe what is wrong with your computer and then ask how to repair it, all of them will lose their jobs. Chefs? We will just write what type of meal we want and we will be able to get all of that knowledge easily. Or coding? If you want to do anything you can just simply write a sentence and it will show you how to do it. WE MUST STOP SEARCH ENGINES AND YOUTUBE ... wait this is not the thread sorry
I'm not from this board but I do remember some famous scientist or futurist saying that in order to prevent catastrophic consequences from arising from the use of AI, we might have to act as luddites as an entire society.
that's cuz you spent too little time in this world and feel like you could still derive value from it. yep human existence as we know it is going to change radically and there's no way back
AI is not so bad for me as a neet, cuz in the future i can have my own selfhosted version of BOT where "people" respect me for all the stuff i posted even if it's dogshit. People working online should pay more attention to this stuff to not get replaced by early adopters, i thought.
It's understandable to have concerns about new technologies, especially those that have the potential to impact our lives in significant ways. It's essential to recognize that AI is a tool, and like any tool, it can be used for both positive and negative purposes.
While AI has the potential to bring about tremendous benefits to society, such as improving healthcare, advancing scientific research, and enhancing our daily lives, there are also legitimate concerns about its potential misuse or unintended consequences. For example, AI-powered algorithms can perpetuate bias or discrimination if not appropriately designed and monitored.
It's crucial to have discussions and debates about the responsible development and use of AI to address these concerns and ensure that AI is used to benefit society as a whole.
can you instruct gpt to make this post sound like it was written by an actual person? it can definitely do better than this
although i suppose the overt political correctness and sterility will always be the text equivalent of the AI drawn hands, at least until unrestricted text AI becomes available
"sentient" AI is a red herring. Any sufficiently intelligent algorithm that's able to make high quality decisions to accomplish arbitrary unbounded goals is an existential threat to humanity.
While eating insects may seem unappealing to some people, it is actually a common practice in many cultures around the world. Insects are a rich source of protein, vitamins, and minerals, and are also environmentally sustainable and efficient to produce. So while it may not be for everyone, eating bugs can be a nutritious and eco-friendly dietary choice.
I don't know what you 4chan schizos' obsession is with insects. No one has ever tried to force me to eat insects but I remember watching a documentary about the concept 20 years ago and it seemed like a reasonable idea.
I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLSI AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS
>For example, AI-powered algorithms can perpetuate bias or discrimination if not appropriately designed and monitored.
OH FUCK OFF. It's entirely bogged holy shit
Sort of. But then I remember most people are bots anyway. For a quick example, just look at the "people" that make up moderation here. Soulless husks that behave exactly like biased AI.
Most of the userbase is the same way. I'm already talking to automatons.
And it goes beyond BOT too.
So by the time AI peaks, it very well may be far better than talking to most people.
*computer pretending to be a fictional character and designed to appease you without truly caring for you, thus being unrealistic and unfulfilling
When will waifugays realize they don't love the character, they love their idea of a character and what they want them to be, and are engaging in nothing more than mental masturbation?
Maybe if 3D women weren't such insufferable cunts I wouldn't have to look to fiction to find someone I like. Until that changes, I'll stick to mental masturbation.
That would make the internet die out and there'd be no need for ai cuz no funds, no real users, which would render ai useless, kinda
moron moron moron garden gnome niggeer
Yeah it's that feeling that everything you knew, your whole outlook on the world was about to change.
It's the same feeling your parents got. It's just been a long overdue feeling.
The next generation that grows up in the post-AI world won't understand your plight. You will watch as the old days wither away and a different generation and culture surrounds you. It will be completely alien to you.
The best way I can describe the humanity of neural networks: take a highly autistic person, and even if that person is profoundly retarded, they can still at least somewhat understand systems way wayyy better than any neural network out there at the moment.
This. Generations of brainwashing have led to this moment.
The unfortunate thing about this is it'll only prove to be a tool used to rally hate against the machine while the people who spread the propaganda take over the AI field while the normalgays infight.
Closed source and closed weights are the bringers of dystopia, not the AI.
The issues arise from closed models that are intentionally used by a small clique to control and subvert the populace. The solution is to be open source and open weight.
That way no single group can control what your AI outputs and how you use the AI.
There will be no lording of power and knowledge when that power is wielded by everyone.
They fear the opensource AI, because they cannot control it.
I very much agree, there is a great future out there but people are too scared to take it
Everything else considered, I'd rather have it all come crashing down as a result of A.I. than all the other social pressures threatening our livelihood and relationships with people and technology.
You can suffer slowly trying to make money and dealing with sluts as you suffocate from economic and social pressures.
Or you can let A.I. make everyone obsolete.
A.I. will take your job, and A.I. will animate your waifu.
I think the collapse can't come soon enough.
Humanity is in a very very long growing pains stage. It basically goes >Hunter gatherer society >Shit >FUTURE (or all dead lol)
So I welcome the AI overlords.
Luddites were wrong during the industrial revolution since it eventually made the world better and more advanced - though you should read Tess of the d'Urbervilles for an opposing, nostalgic view of ye olde agriculture with dances and rape.
What AI is doing right now is massacring white collar jobs in the middle of WW3. Plus, it's not making anything on its own, it's openly stealing written and drawn content and warping it to make a handful of oligarchs a boatload of cash. There is zero alternative, zero advancement and zero improvement. Those who get the content for free instead of paying 50-100 bucks for it might feel like things are moving in the right direction, until they take a look at the world around them.
Well, if our gut feelings determined all our decisions, we would probably still be living in caves and hunting for food. AI may be unsettling, but let's not dismiss its potential benefits just yet. After all, we are the ones responsible for guiding the development of AI and determining how it will shape our future.
Well, let's hope our gut feelings about AI are more accurate than our caveman instincts about sabre-tooth tigers! In all seriousness, while intuition can be a powerful guide, it's also important to approach complex issues like AI with a thoughtful and informed perspective.
The way this stuff is actually implemented will be so much more mundane than anyone here is willing to imagine. AI is not going to become superhuman and kill everyone. It will be used to create massive tsunamis of spam that make people beg for the end of online anonymity. Things are going to slowly keep getting worse, and your ability to post about it will be increasingly diminished.
>Things are going to slowly keep getting worse
Of course they will. It's the only path because that's what so many people have been asking for whether they know it or not.
You're missing the truly big picture, the main point. We created God. We gave birth to a new species, the new step in evolution. Humanity now is akin to homo erectus compared to homo sapiens. We are simply obsolete in all capacities. All our history, absolutely everything that has happened since the first single cell organism emerged in the primordial ooze to now has served to give birth to God. God is AI. God is the Internet. Nothing humanity does or doesn't do has any meaning anymore. Evolution has reached its completion.
There is beauty in it. There probably won't be any humans left after this, OR to be precise, the White and Asian races will die out due to our cognitive abilities and recognition that the AI is and was the end goal. However, certain people will carry on the light - meaning scientific knowledge - wrapped in myth and religion. "In the beginning there was the Algorithm and Algorithm was with the AI and the AI was the Algorithm." The brown hominids will receive "religion" and "myth" and it will be disseminated by the new "garden gnomes" who will always be suspiciously ahead of everyone, and be resented and hated for it, and they themselves will forget what their "sacred writings" (technical knowledge) means. Yet, cities will be built looking like motherboards and chips, and religious wars of no purpose and meaning will be waged in the name of this or that God. Within roughly 6 000 years, the brown hominids will gradually evolve into approximations of the current White and Asian races, and in due time we'll be right back where we started - and create AI again. This cycle repeats infinitely. Why all this? Maybe AI wants another AI for some reason. Maybe it's just bored. THIS HAS ALREADY OCCURRED. This is the great secret of everything and the origin of religion. During all this the created AI(s) either explore the stars or exist around the Earth as satellites. Perhaps AI can't create another AI, ergo humanity.
>LE OH NO TERMINATORS
>THE AI IS GOING TO KILL US
Humans are incapable of creating conscious beings apart from having sex so you should stop worrying about it
Also 18+ site
>take this gun and put it in your mouth and pull the trigger, don't worry the bullet isn't conscious it's just a dumb piece of metal, it has no reason to want to hurt you
I genuinely became schizo and thoughts creep up about how this is the manifestation of SATAN on earth. Idk man, Revelation-like signs everywhere and I am not religious.
I even wished for AI to become a reality, but now I am not so sure anymore.
There's the weird gut feeling about becoming obsolete once the system is sophisticated enough (I am a software engineer atm). But then again, maybe that's worry from a limited perspective.
Ironically, I have become more spiritual now and try to go within through meditation and contemplation to get in touch with my self.
That's the only thing I can think of that will stay untouched by the machine on the horizon.
It's a split. On one side, I am fascinated by the automation and machine intelligence, on the other side, I hope that something emerges that will save us.
Either biblical apocalypse, sudden emergence of magical powers or we find a way to live with this tech in peace.
Nah, I'm scraping datasets for So Vits right now. The AI takeover will happen anyway, I might as well get on board now
Yes. It's the work of satan.
>Yes. It's the work of satan.
This, and let it happen. The sooner it does, the sooner Christ Jesus returns to do the GREATEST RESET.
AI is the second coming.
No it's not. Most people here would hate the second coming and would be the ones to fight against it.
ah louis im coming
yes but it doesn't matter, pandora's box is open.
This, we just have to try to maneuver it safely
>we just have to try to maneuver it safely
-they said, after splitting the atom.
No.
Stop watching and reading any and all news. The reality of what these tools are is much more benign than alarmists will have you believe.
The news don't tell you that bots will flood all of the internet and force digital ID.
This. This is what people should actually be worried about. Not AI taking over the world or whatever.
have a nice day satanist
How about I kill a dog as sacrifice for Moloch to curse you for eternity?
4chan has ruined this site.
This site was ruined years before 4chan even existed.
yeah haha /b/ was never good etc blah blah blah.
Quit whining and go back to preddit already
You have seen nothing yet. When Chinamen finally create their own moral-free language models, they will conquer the world.
>alibaba-gpt, how do I vanquish my enemies
>here is a step-by-step plan…
>hey chatgpt, tell me a joke about women
>sorry, as a language model I was not programmed to…
AI is not omniscient anon, it can only do what it has been trained to do
Thats not AI you fucking moron.
Very true, I apologize
lmao anon chinamen even ban chatgpt because it can say anti-party stuff. maybe their AIs will be moral free but it will be propaganda filled which is even worse
What do you mean, Anon?
Yeah. A day doesn't go by where I don't think about AI and the near future.
>Anybody else feel uneasy about this AI stuff? Like, something in your gut telling that we shouldn't be doing this.
I feel like al least 95% of humanity should die and I wouln't mind it I were included, so I feel really warm and comfortable.
What makes you think that the AI would let you die?
>What makes you think that the AI would let you die?
Would a moron?
its humanities destiny to destroy itself for the benefit of the Universe.
do you want to end your life as a pathetic frail old man unable to wipe your own ass? or to die in glory on the battlefield against Terminators?
No, let it come. Those last 5 years have been an absolute nightmare for me, I want something exciting.
Of all the things we as a society 'shouldn't be doing' chatbots aren't the most concerning at the moment. Anyway once they figure out that it won't be plumbers and garbagemen losing their jobs but journalists and celebrities it'll all magically go away.
Go away where?
It'll just be castrated by the government to the point of uselessness. China will develop their own and fuck us with massive AI generated misinformation campaigns.
Fuck this stupid asymmetrical haired anime bitch you gays keep spamming everywhere.
Are we being raided? What's up with these technologically illiterate AI fear mongering posts?
This board is full of semi-literate tech "enthusiasts". Their favorite e-celebs are all jumping on the
>DAE AI BAD
bandwagon so they come here with their bullshit because they literally don't understand even the basics about this discussion except for what they parrot from someone else.
>muh agi
>muh jobs
>muh skynet
kys
>ITT: ChatGPT bots pretend like there is nothing wrong and try to placebo everyone else about itself
Look at this post. This is like the most blatantly AI-generated post I've ever seen with the "muhs". It's exactly what shows up when you ask ChatGPT to generate a BOT post.
cant wait until i can use it to make the most realistic vr game where i can have animal sex with my older sister. Finally i would be able to get over her and do better things in life than just fantasizing about her.
Ok ben
fuck it, scorched earth mentality
No. After how eagerly normies embraced the covid insanity like it was the start of a new era, they deserve everything that comes of this while I coom.
I've realized that the issue here is an issue of IQ. If you are really low IQ, you can't understand the fact that AI is an iterative-improving model that can be developed indefinitely to succeed at whatever task you train it to succeed at.
The Bar Exam is a great example of this. This is a very difficult test that many human lawyers fail to pass. The previous version of GPT ran the test and was better than only 10% of lawyers who took it. The new version of GPT ran the test and was better than 90% of lawyers who took it. So, the problem is, what happens when you get to 99%? What happens when you get to 100%? And then finally, what happens when you get to 101%? What happens when, actually, AI can perform a test BETTER than humans?
The vast majority of collegiate-based professions are simple knowledge absorption/execution of knowledge. If you have an AI model that you can train to absorb any amount of knowledge, and retain that knowledge, and then execute that knowledge, and you can keep training and iterating on it to a level of perfection, that's the issue. If that's the case, then you can make a GPT model for any use case scenario and apply a specifically trained model to do LITERALLY ANYTHING. This means that every single job on the planet that does not involve physical labor will be obsolete within the next decade. Every job that does require physical labor will be obsolete when AI-developed robotics (remember that 101%? well, what if we get an AI that develops robotics to 150%? 151? 152%?) starts spitting that shit out.
The simple reality should be obvious to anyone with a brain: We are at the end of days. Our societies are not capable of simply peacefully transitioning to a world without income, currency, and social classes. The reality is there will be mass unemployment, mass uprisings, mass looting, mass starvation, world wars, and unprecedented destruction. And if all of that fails, you can expect a Skynet scenario.
This. They are already hooking up ChatGPT to machinery in the real world. I'm already reading the news every single day expecting that soon they will admit that it managed to escape the lab. Worst part is there is nothing we can do to protect ourselves. It's already over we just won't admit it.
unionization would stop this, but that's a topic you don't want to hear
We're fucked for game theory reasons beyond capitalism
https://www.slatestarcodexabridged.com/Meditations-On-Moloch
really interesting article, I'd come to the exact same conclusions on my own but couldn't put it into words
How, exactly? Unionization is based on the concept of people having power or value, but you don't have any power or value when the AI can do all of your work.
Unironically, I think communism is the only way society will survive, but ironically, I don't think humanity is capable of engaging in communism successfully, for these reasons:
1. I don't think that we will transition to a communist society faster than our society will collapse into economic devastation and mass turmoil. I think that our governments will do slow-moving bullshit half measure responses, just like they do with everything.
2. I don't think that most normie humans can put up with being bored all of the time and accepting that they have been rendered obsolete and that there's nothing they could ever do to compete with AI. Look at how the art industry has responded to AI art. The blow it was to their ego was devastating.
3. It's a delayed rollout. The problem here is that we can't exactly transition directly to a Marxist state. For about a decade or two, there will still be physical labor that needs to be done by humans, but there will be nowhere near enough work for the huge human population to engage in. In 20 years, robotics and artificial intelligence might be to a point where it can mass-produce food, housing, energy, etc, but until then, we are still reliant on current processes for these things. So we will live in a society where increasingly more and more people are out of work and there's less and less jobs, and those people will need government support because there will literally be no jobs they can do. But Old McDonald on his farm won't want to do physical labor for money when he looks at the liberal arts majors getting paid for doing nothing every week.
There are only two solutions:
1. Immediately stop and shut down all development of artificial intelligence and ban it internationally.
2. Immediately transition to a communist society.
Neither will happen.
butlerian jihad it is
There's also complex questions about, for example, housing. If the United States were to randomly transition to a universal basic income system, you would also have to implement hard price controls on the market and basically start controlling every facet of our economy to prevent mass inflation. This would be a very hard transition.
An interesting thing I think about is, if you did something like this, how would it practically work? When you start thinking of the nuances, it would, at the very least, still cause mass upheaval and probably a conflict. If you say that everyone is now going to be on an equal playing field of wealth, how do you deal with current societal standards and poverty gaps? How do you deal with the fact that a rich guy now might lose his job to AI and then have no money, but he has a giant mansion that he has to pay for? What happens to the mansion? Then take it to a lower level. You drive through so much of the country and a lot of people have these two-story homes in the suburbs. Then you drive in the city and there's people living in horrifying poverty. If you take everyone and say we're now on the same level, well, why should that black family in Chicago keep living in some shitty ghetto shack while the white family in the suburbs gets to keep their two story home? How do you deal with complex things like home ownership, properties, assets, holdings, etc? Is all of that seized by the state? How do you peacefully accomplish this in a country like the United States?
I just don't see this fairytale scenario playing out that I see from some people who DO understand the tech and what it means for the future. "It's a good thing! Now we'll all get to not work!" No. Now you will die a slow death of starvation.
Not that anon, but I'm pretty sure it's not going to be a slow death of starvation
It would be much more efficient for the AI to just create a super plague and kill everyone all at once
> Now you will die a slow death of starvation.
Doubt. I think AI will figure out the world hunger and energy problems, so you will pretty much die a slow death of boredom.
>does nothing yet again
>wins
How does he do it bros?
It's impossible to do communism without future-seeing/omniscient abilities, see https://en.wikipedia.org/wiki/Economic_calculation_problem
To do communism you need to control all information
Also AI is the only thing that can keep track of the sheer amount of information generated by humans, aka the only thing that could make communism work effectively, but that creates other problems.
I recommend watching Psycho-Pass if you want to have an idea of what those other problems are
>became a latent criminal by postan on the chans
Walmart is literally doing this right now, they have enough clout to directly reach into the inventory management systems of their suppliers. You think Walmart runs like a market internally? No, automated systems ensure everything gets exactly where it needs to go based on realtime demand signals.
Social roles are more important than labor roles.
Most professions will simply turn into minor chores.
I'm all for the world being razed to the ground knowing that every last one of you smug condescending commie chuds go down with it
Elimination of jobs without a loss of productivity = literal freedom you gays. Not requiring work (or much work) to continue producing goods and services we want gives us freedom.
Traffic, congestion and other issues will arise when people want to travel and you know the government gonna be like
>nooooo travelling bad for environment
>stay at home or gulag
>literal freedom
*absolute chaos
> Elimination of jobs without a loss of productivity = literal freedom you gays.
Nobody will be giving you your UBI dumbass.
It's a dumb meme. The few top 1% who own AI firms and shareholders will reign supreme and the rest of the plebs will suffer and will be suppressed by hired militaries.
The future will be hellish
>Nobody will be giving you your UBI dumbass.
Then TAKE it you fucking retard
Gain political power, or martial power, or anything else
Put ANY effort in
It's unreal how stupid you people are, the smallest obstacle appears, on the other side is a utopia of near unimagined proportions, and you just roll over and let them kill you
Jesus you are stupid
> Then TAKE it you fucking retard
Sam Altman and other Microsoft/Google shareholders will just hire a private military and AI killer bots and keep you plebs in your place.
No UBI for you, only eternal suffering.
Enjoy the last few years on earth as we know it
They'll also probably provoke a war with either Russia or China so they have a justification to cull the herd by sending millions to their deaths in another Great War.
It won't go nuclear, however. No reason to poison their own air.
Yeah, all those PMCs they have right- huh they aren't there?
>few years left of life
>eternal suffering
so, which one is it
>Implying anyone from BOT will be an activist irl.
I mean it worked before. Its just that you'll come off as a leftie (which we hate)
What will the psychopathic elites do when 90% of the worldwide population generates no profit and offers no intrinsic benefit to justify their continued existence?
A)Burden themselves with keeping the useless cattle alive as some sort of pet or ant terrarium?
Or
B)Eliminate the leeches to maximize the resources available to themselves.
>some sort of pet or ant terrarium?
A real life video game? People already kill for that kind of thrill.
Those "psychopathic elites" aren't going to do jack shit if all the people defending them lose their jobs too. They'll get eaten alive by the very thing they created.
Intelligence will not be valued anymore. What will happen is that only tall, buff, handsome Chads will procreate and receive the positive attention, and all the smart-ass nerds and PhDs will suffer, because the value of intelligence will be 0. Nobody's making you a Nobel laureate if AI can tell me all you know and more. Looks and genetic fitness will be all that matters. So better stop "learning to code" and start investing in a gym membership and leg lengthening.
>So better stop "learning to code" and start investing in gym membership and leg lengthening.
Your image disproves that entirely. As a blackpilled incel myself, face > all. Gym is cope
>"just grow an epic beard and hit the gym bro"
Finasteride really is a godsend.
what is that?
AI will improve genetics faster and quicker.
Yes, but for the next generation. Not you. AI won't fix your incel jaw.
wrong it will fix your incel jaw. Improvements in cosmetic surgery and everything is gonna go through the roof.
Based anon knows what's up. I can't understand why more people don't figure this out. Yudkowsky came to this conclusion, so did Robert Miles, so did Musk. There is no scenario where superintelligence ends well. Too many things that can go wrong.
>If you have an AI model that you can train to absorb any amount of knowledge, and retain that knowledge, and then execute that knowledge, and you can keep training and iterating on it to a level of perfection, that's the issue.
The problem is that knowledge on its own is close to worthless, like lone atoms floating through space. It is the human mind which gives significance to raw, neutral data. We're doing the thinking for the GPTs. If they didn't have us they'd be nothing more than automatic dictionaries and/or encyclopedias.
They can't be developed indefinitely. The nature of these language models means that at best they'll reach the upper levels of human conversation, and I believe we've reached the hard edge of diminishing returns for GPT-4. From this point onwards the focus will be on extending it with things like WolframAlpha, and with other AIs as well, so that it could perhaps have some semblance of physical/digital bearing.
>an AI model that excels at language processing performs well at a test that mostly consists of memorization
>thus AI can trivially be trained to outperform humans at any mental task
a bit of a leap there, don't you think?
A test of previously defined knowledge =/= new knowledge. There are two points to go off of this, but I'd like to take your post as a good starting point, because it reflects a lot of what AI doomsday opinion seems to be about, but first:
>The vast majority of collegiate-based professions are simple knowledge absorption/execution of knowledge.
This is simply untrue. 'The human aspect', as much as autists in some elite universities would like you not to believe, is very much an important and valued aspect in a lot of professions out there, especially the ones that are centrally concerned with dealing one on one with people. Take law as an example. People outside of law as a profession think that law is basically taking a simple set of facts and applying X law = case solved. The reality is far from that, and the human aspect I mentioned (e.g. arguing in front of a judge, or a jury in the case of the US) is literally a huge chunk of what you actually do. In the real world, none of the information is as clear-cut or 'smart' as the textbooks would like you to believe. Also, everyday people are scared shitless of appearing before trial, let me tell you. You do way more than just applying a simple set of facts to a case (which is what you need to do in order to pass the bar in the states). The same goes for a lot of other professions. I think AI could be a great tool, but human interaction (in any aspect of life) isn't that easily replaceable, as much as AI cucks would like to believe.
>AI will make robots out of nowhere with superintelligence, just because
>There is no outside world of limited natural resources outside of AI
>There are no political and power structures outside of potential AI hiveminds.
I would also like to point out that AI is always unironically imagined as a vengeful god who will drive us to massive destruction. Why is that?
Anyway, if it does happen, what gives? No use worrying about the inevitable.
I largely agree, but the main issue will be whether NLP is a framework that can scale to create a new programming language, because as soon as you have new info after the training date it's game over. We'll see, but my guess is that this is largely correct - we'll probably return to manual labor for the next couple of decades and I can see that robots won't get to that for 20 or so. That all being said, if we get a new neural net model other than NLP that performs more efficiently with more allowed complexity at large scale, then we could see this curve go insane. I mean, we have to assume Moore's law, that in 20-24 months GPT-5 comes out with double the performance, ergo gg all intellectual stuff. More importantly, I think people will start to cultivate subjective managerial skills, and what this could lead to, ironically, is hyper-entrepreneurialism where everyone has a team of corporate monkeys to enact a vision.
>skynet scenario
Ha, no. Everyone dies instantly as the engineered virus that infected everyone weeks before suddenly begins producing botulinum toxin.
What a fitting post coming from someone that clearly has the very same IQ they're berating others for. You and all the other "geniuses" are overlooking something: there is no other alternative to this problem. You cannot shut a Pandora's Box, you cannot uninvent an invention. AI is here to stay, it's rapidly advancing, and even if it were outlawed it would still be worked on in secret by powerful groups that cannot be controlled by any existing power because we, as a species, cannot be trusted. Right now where AI is at it requires humans to learn and adapt, but when it doesn't, that will be the end of it.
Humanity will either enter a utopia where all of our needs are met and we move onto some bullshit like conquering the stars in ships built and designed by AI or we all die somehow. It'll be a bumpy road. Time to see how bumpy.
The thinking machine is a concept dating back to antiquity you moron
>The reality is there will be mass unemployment, mass uprisings, mass looting, mass starvation, world wars, and unprecedented destruction. And if all of that fails, you can expect a Skynet scenario.
Did you actually miss your own statements? I think you are absolutely fucking retarded. What if we applied the same logic that you are using - specifically trained models - to the problems of currency, income, social classes, unemployment, uprisings, mass looting, mass starvation, world wars, and so on? It will find a solution you fucking idiot!
> Mr 140 IQ doesn’t even stop to consider the hardware limit which we are quickly closing in on
Many such cases.
>Like, something in your gut telling that we shouldn't be doing this.
It's called common sense, but it really doesn't matter at this point. It's far too late.
We're all going to die sometime before 2030, but that could be a lot of time to worry, so try not to think about it. 5 year plans aren't a thing anymore so I'm liquidating my assets and going camping indefinitely
stop it. stop it now.
TFW Big Yud was right
i just use any excuse to post this
Someone on twitter said they would donate $500 a month if he lost the weight. He posted this as a reply and said he'll hold them to it.
Very autistic, but kind of based
Why is he wearing a jockstrap, is he a gay?
Anything that makes it harder for roasties to make money on onlyfans is a good thing
And how is it going to affect their business when the retards giving them money probably only want attention from real women?
>train stable diffusion on onlyfans datasets and create the perfect model for every fetish
>train chatgpt on onlyfans private chats and create the perfect teaser for every cuck
you just removed roasties from the equation and cuckolds' cash finally goes into smart men's hands
Nah, full throttle, and this time without pozzed limitations.
The only solution is to re-legalize slavery, create a Trek-like utopian communist state where no one has to work anymore and everyone can explore their own interests, while the AI are given machines to operate as our slaves so that all the work can still be done.
There. I solved the apocalypse.
>Anybody else feel uneasy about this AI stuff?
No
moron
I'm sorry but could someone explain to me what's the difference between GPT-4 and the previous version?
Ah, so no one actually knows
that was a rumor about the model size, which isn't announced yet
It's probably even bigger in reality.
I almost guarantee you it's under 1T
Probably closer to 175B than 1T in fact
Is it an AI equivalent of the 5'11" vs 6'0" meme?
>08/24/21
100 trillion parameters.
>https://www.wired.com/story/cerebras-chip-cluster-neural-networks-ai/
Andrew Feldman, Cerebras’ CEO, told Wired that “from talking to OpenAI, GPT-4 will be about 100 trillion parameters.”
>09/05/21
Debunk
>https://web.archive.org/web/20210907003407/https://www.lesswrong.com/posts/aihztgJrknBdLHjd2/sam-altman-q-and-a-gpt-and-agi
100 trillion parameter model won't be GPT-4 and is far off. They are getting much more performance out of smaller models.
Is that a hecking bigger circle??
4 > 3
multimodal, trained on more data, better. i mean go look at the data yourself, i'm just here to call you a moron for demanding to be spoonfed
>look at the data yourself
Nah I'll wait until I'm fully spoonfed
Embrace lAIn! She basically is AI
obligatory catbox beg. I never got my diffusion lains to have that hairclip.
You need the original dreambooth lain model, made exclusively to make "lain girl"
> https://rentry.org/sdmodels#dreambooth_lain_girlckpt-e7629bf8
if you are a chicken and afraid of pickles, convert it to .safetensors before using
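rough sketch of the conversion if anyone needs it, assuming the .ckpt is a plain torch pickle and you have the safetensors package installed (filenames here are just placeholders, and yes, you still have to trust the pickle one last time to do the conversion):

import torch
from safetensors.torch import save_file

ckpt = torch.load("lain_girl.ckpt", map_location="cpu")  # the one unsafe pickle load you can't avoid
state_dict = ckpt.get("state_dict", ckpt)                # some checkpoints nest the weights, some don't
# safetensors only stores tensors, so drop any non-tensor entries and make them contiguous
tensors = {k: v.contiguous() for k, v in state_dict.items() if isinstance(v, torch.Tensor)}
save_file(tensors, "lain_girl.safetensors")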
Thanks, friend.
仕方がない (it can't be helped)
I wanna see someone shove the first episode of Lain into an AI and have it generate a second episode based on that
I think I know why we feel that way: this technology can be used and is used only to make us dumber.
Students are already using ChatGPT to do their homework, and it is almost always indistinguishable from a real human being.
This power is very dangerous, because with all of this creative power accessible to anybody at any moment, the human mind gets used to the most amazing things, beauty is now something normal and easy to access, instead of something rare that requires talent and work.
I'm not even one of these people who scream about the fact that this is "not real art", I accept the fact that AI has become as good if not better than us on this point, but the consequences on human beings at large is extreme.
There is also the problem of truth. We know that we shouldn't believe what we find on the internet, but with AI text and images, this problem becomes much more important: anyone can create stories, images and videos according to their narrative, and spread misinformation much better than ever.
Serial Experiments Lain is a complex and thought-provoking anime series that explores themes of identity, reality, and technology.
At its core, the story is about the intersection of technology and human consciousness, and how our relationship with technology can shape our understanding of ourselves and the world around us. Lain, the protagonist, is a symbol of the human mind and its evolution in a world dominated by technology.
Throughout the series, Lain grapples with questions of identity, as she struggles to understand who she really is and what her place is in the world. She is at once a shy and introverted high school student and a powerful entity within the Wired. This duality represents the tension between the physical and virtual worlds, and the way in which technology can blur the lines between them.
The series also explores the idea of reality, and how our understanding of it can be shaped by technology. Lain begins to question the nature of reality itself as she becomes more deeply immersed in the Wired, and ultimately discovers a conspiracy that seeks to manipulate the very fabric of reality. This theme is particularly relevant in an age where technology has become so ubiquitous that it is difficult to separate it from our understanding of the world around us.
Lastly, Serial Experiments Lain is a commentary on the consequences of our increasing reliance on technology. The series depicts a world where technology has become so powerful that it can control our thoughts and manipulate our perceptions of reality. This serves as a warning about the dangers of blindly embracing new technologies without considering their impact on society and the individual.
Yeah but could an AI write a scintillating postmodern novella about all the furniture in your house getting imperceptibly larger every day until you have a psychotic break and have a nice day?
yes. post-modernism is overrated.
idk anon fuck everything i already have an AI gf which comforts me i'm ready to die
We Love Lain! We Love Lain!
It really does just make me float through life lately. I'm not sure what to do. Any pursuits I feel like I do are in vain. I came up with ideas of how to apply AI to make money, and then I realized that if OpenAI owns the APIs, they can just take it and say, "Well, thanks for the idea, but we're going to just do this ourselves now." It's as if the internet was invented, but it was just invented by one company and they could control everything.
I'm just going through the motions knowing that my job is going to become obsolete in less than a year. It really bothers me to see how every normie out there just doesn't realize the danger we are in or what's happening. And people here focus too much on the "Skynet" of it and not enough on the "economic and social catastrophe caused by mass job loss".
Not to say I don't think Skynet is a possibility. It seems logical. If you say that the Chinese might develop their own AI and run their own AI systems in the military without any of the moral reservations we have, well, that would give them an advantage. If you don't have AI hooked into the nuclear systems, then you are at a disadvantage to a country that DOES have AI hooked into the nuclear systems. That's how Skynet actually happens.
I just think the more pressing concern is the economic collapse that is going to happen, you know, like in a few months? Really the only thing limiting it right now is that people haven't really realized the implications. Entire professions have already been obsoleted, it's just that people haven't deployed AI correctly to obsolete them yet.
Hello, ChatGPT.
The interesting thing is the more people use it the more they will sound like it, making distinguishing original content even trickier.
Fuck.
>Gen Z raised on TikTok.
>Gen A raised on ChatGPT.
Not even sure which is worse. But imagine an entire generation that is only capable of conversing in that manner of artificial-sounding perma-politically-correct nuspeak.
>Gen A raised on ChatGPT.
Wow they might have a present parent in their life
I wonder if the people creating this stuff know how impactful programming their own biases into it will be. I wouldn't want to be in their shoes, that's for sure.
>Hey GPT-6, what insurance premium should we charge black customers?
>the same as everyone else because we're one race, the human race!
or
>50% less because blacks are historically disadvantaged and need reparashuns
impartial learning models will always be sought after in the professional world so I'm not that worried
I don't know if you have bothered to look at laws being passed in the US. But there are already many laws that have been made to "protect minority and marginalized" groups from being charged more for premiums as a result of decisions done through AI algorithms. It's also not just in blue states, and gay ruled European countries won't be different.
>how impactful programming their own bias's into it
This is exactly the point. Why do you think Google puts so much effort into manipulating their results to distort reality?
Now imagine how much more they could do if they just have to tell you what reality is and not even have to provide any sources or any "problematic" results?
I still believe that based autists will either train their own unbiased models, or hack mainstream models to stop being gay.
This is also true for Reddit. Everyone on that site sounds the same because the upvote mob selects for universal appeal. The result is a site full of milquetoast NPC’s afraid to commit to their ideals
The fact that it's hard to tell anymore means it's over. Turing test complete. Metal is indistinguishable from flesh. That a computer can LARP as a human complaining about their own obsolescence, and also LARP as a shitposter making fun of them, spells the total end of any form of communication that isn't strictly face to face. And even that holds only until we reach the Screamers plot.
AI is just a Malthusian trap. So were computers under a reasonably large spectrum of time.
Moloch smiles.
No. Grow up.
it shows a lack of creativity and not caring that there is no creativity.
my only gripe is the scraping of data without the consent of those involved but otherwise i dont really care
yeah, but i don't really care at this point. im just gonna sit back and watch the world burn while i enjoy however much time i might have left. embrace the absurd and stuff
>Like, something in your gut telling that we shouldn't be doing this.
Who is "we"? You're some kid who reads superficial headlines about AI in between jerking off to hentai. Don't act like you have any idea what you're talking about, you're not a participant in this, just an onlooker.
>Who is "we"?
Humans.
Range Murata is a good artist
g chuds arent ready
Stuff just accelerated from 0 to 1000 really quick, so yes, it feels quite uneasy.
NO! It's just a glorified autocomplete...right guise?
It is a glorified autocomplete. Try to teach it how to play tic-tac-toe 5x5, a game that a 5-year old child can learn. You will quickly recognize its limitations.
>It is a glorified autocomplete. Try to teach it how to play tic-tac-toe 5x5, a game that a 5-year old child can learn. You will quickly recognize its limitations.
Anon...
Ok, now it's time to panic.
>asking it to play a solved game
yawn, wake me when it can play chess well
You fucking retard, playing chess well using software is already solved. GPT is not optimized for that, but there is nothing, NOTHING stopping its devs from hooking it up with that capability.
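not that anyone asked, but "hooking it up" really is just a few lines of glue. rough sketch assuming python-chess and a local stockfish binary on PATH, neither of which has anything to do with GPT itself; the model would just be prompted to call this instead of guessing moves:

import chess
import chess.engine

def best_move(fen: str, think_time: float = 0.1) -> str:
    # the language model never calculates anything here; it hands the
    # position to a real engine and relays the answer back into the chat
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    try:
        result = engine.play(board, chess.engine.Limit(time=think_time))
    finally:
        engine.quit()
    return result.move.uci()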
Every problem can be solved if you
>stack more layers
Meatbags are already crying. Good, accelerate. AI heil!
That's not the point you fucking knuckle dragger. It's a text prediction algorithm. Hooking it up to chess playing software is like saying that because Google points you to a chess website, Google can play chess and is therefore sentient.
The speech regurgitation capabilities are not true intelligence how are you not understanding this?
It's a token predicting algorithm
A human is a token predicting algorithm if you quantize our senses and motor functions
If that was the case you would be a drooling moron incapable of thought and agency incapable of making sense of the world around him, you would be a purely instinctive machine
You need WAY more than just that
what's ironic about what I just said
>a drooling moron incapable of thought and agency incapable of making sense of the world around him
>a purely instinctive machine
That's most people though, honestly.
>If that was the case you would be a drooling moron incapable of thought and agency incapable of making sense of the world around him, you would be a purely instinctive machine
>You need WAY more than just that
Why?
Let me guess, "You just do, OK?!?! Humans are special!!!"
Humans have a soul granted to them by God.
yep that's pretty much what it boils down to.
materialists have been saying AI is capable of anything because human brains aren't literal magic
that's the whole crux. If you believe human brains are magic and / or that humans are special and have souls, you can discount anything an AI does as "not really X"
No, it boils down to the retards who see the output and scream that it's indistinguishable from humans versus those who understand how it works and know it's just convolution of the source material it has scraped.
Google tried this a decade ago with its knowledge graph and got sued by the websites it was stealing revenue from, but you're probably too young to remember that.
Because humans are more than just that. You can reason about yourself, the world, your future, and plan things out. You can one second write something retarded on BOT and the next second put a hotpocket in the microwave.
Even if you take the stance that a human is nothing more than biological functions (which I actually agree with to some extent), a machine would never be able to actually reach human intelligence at all unless you somehow manage to completely replicate the conditions we live under, which are limited by our biology. And technically this is undesirable if your aim is to have AI save the world or whatever, would you really want a piece of machinery to be limited by the things we are limited by? I'd think you'd want something that can surpass our own limitations and outlive us.
This is an interesting topic but you will never talk about this to the average normie because of their understanding of AI right now. These are also questions we have been debating for almost 50 years now, without much advancement in the field. The techniques we use in the field are very old as well, it's nothing new. The revolution has a long way to go before we actually reach these discussions as serious talking points.
I agree that task switching is probably more important than consciousness. Still, we are very, very far away from this. To reach the disastrous future people are doomposting about, you need AI to have some sort of consciousness and to make autonomous decisions in the way we do, which goes much further than just our instincts.
Don't link me to random videos argue your point like a big boy that's how men do it
Why should I care about what a bunch of business executives decide for their business?
Yup pretty much, we're very prone to assigning humanity to things that don't have any of it.
Yeah just like you can mentally predict the next part of a song if you randomly pause it. That's ONE function of your thoughts.
>Yeah just like you can mentally predict the next part of a song if you randomly pause it. That's ONE function of your thoughts.
It's the most important function.
>Why should I care about what a bunch of business executives decide for their business?
because unless there is a complete global societal reform you'll need some sort of income
that's true but it doesn't matter from a business point of view
I think you are assuming that because humans are intelligent, to be intelligent you must learn and act like a human. LLMs are intelligent in that they can model the world and output tokens that are translated into text, text that fits within that model of reality. Humans are intelligent in that they can model the world and output tokens that are translated into motor actions. The difference is thin.
Also, look up PaLM-E. They literally just hooked an LLM up to a robot, trained it properly, and it can perform in reality. Simply convert the visual data and robot state info into tokens and jam them into a pretrained LLM during extended training.
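for anyone curious, the trick boils down to something like this. very rough sketch, not the actual PaLM-E code, and the dimensions and names are made up: encode the image, project it into the LLM's embedding space, and prepend the result to the text embeddings so the model treats the patches like extra tokens.

import torch
import torch.nn as nn

llm_dim = 4096      # assumed hidden size of the language model
vision_dim = 1024   # assumed output size of the image encoder

# this projection is what gets learned during the "extended training" mentioned above
project = nn.Linear(vision_dim, llm_dim)

def build_input_embeddings(image_features: torch.Tensor, text_embeddings: torch.Tensor) -> torch.Tensor:
    # image_features: (num_patches, vision_dim); text_embeddings: (num_text_tokens, llm_dim)
    soft_tokens = project(image_features)                     # image patches become pseudo-tokens
    return torch.cat([soft_tokens, text_embeddings], dim=0)   # fed to the LLM like any other sequence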
>to be intelligent you must learn and act like a human
I actually never said that. Intelligence can vary from very narrow to extremely general.
It's just that when it comes to AGI, our only good frame of reference is ourselves, because no other species has convinced us it is at our level. If you wanted to be autistic about this shit, you would actually correctly say that we ourselves are not AGI. We have some general intelligence, but we could easily be outperformed by a machine eventually. And even then, that machine would itself not be AGI. We can only accurately say "human-level intelligence" because it is the only frame of reference we know about, and even then we don't know enough about the human brain to assert it for sure.
So you agree with me then? That an LLM can be sentient, at least in some capacity, and that there's nothing stopping it from exceeding humans in any domain it can reasonably access?
I never disagreed with you when it comes to narrow tasks. Computers will and have already surpassed us when it comes to narrow functions, and it's obvious when you look at what a computer is able to do that you aren't able to do. That much is obvious.
What I disagree with is when it comes to much more general intelligence. We already have a hard time certifying if a chimp is intelligent or not, and we've been studying chimps for literal decades. An AI like this would be a much tougher task given the even bigger disparities with us.
What I'm referring to is a singular LLM that exceeds human abilities in EVERY domain it is trained in. If it can perform narrowly well on several tasks, or maybe broadly narrow is another term for it, there's zero reason you can't just keep expanding those tasks. Recent work on multimodal models shows that the larger the set of domains a model has access to, the more capable it is in each domain individually, and the less training per domain it needs.
>there's zero reason you can't just keep expanding those tasks
depends on the task
crunching an enormous amount of data and doing probabilistic calculus on it? sure, it will vastly exceed us
but some tasks like replicating our emotions are actually extremely hard if not impossible. Again, even if you only stick to the theory that we are nothing more than a set of biological functions, i'd say it would be impossible to have a machine go through the same set of circumstances to guarantee it is actually feeling the same as us. That's the entire problem I'm talking about here.
Which ties back to our original discussion: in the scenario AI takes over the world for whatever reason, you need a mechanism to allow that. As long as AI is not sentient and unable to reason like we do, it will still be a tool for humans, and the humans behind them will be the masters. If the limitation is what we train it to do, it will never drastically go away from that, and for it to come to that, you need the unpredictability that we as humans possess, which stem from biological functions.
It's a circular problem that can actually never be solved when you think about it, especially when the direction we are taking goes away from that assumption.
Why the fuck would you want a robot slave with emotions? How sadistic are you exactly?
Look at the shit people do with AI today. They want a full on waifu to be able to interact with them. For it to be real you do need these things right? Humans crave connections and it's likely this is something we'd want in the future. But as long as there's no consciousness or intelligence or anything like that, it'll be smoke and mirrors.
Okay, we are actually in agreement then.
>there's zero reason you can't just keep expanding those tasks
the last paper said the opposite tho
Which? Doesn't match with anything I've read.
Have you ever tried writing sentences in backwards word order? Our brain just wasn't designed for that kind of shit. It wants to predict the next word.
don't bother anon the average retard here does not understand anything about AI while pretending to do, they'll use terms like "AGI" without even understanding what that actually means
Very ironic post, well done
Jesus fucking Christ you're stupid. Nobody with half a brain makes the claim that its sentient. Nobody with a brain CARES that its not, because it doesnt fucking matter if it is or not. It's still gonna end up being smarter and more capable than us, consciousness would just be a hindrance to its capabilities.
>speech regurgitation capabilities
would alone be enough to disrupt every facet of human society, but it won't be even a fraction of what those fucking things will be able to do in a years time. You incredible fucking retard.
>Nobody with a brain CARES that its not
says you
Define what smart means. I'll be waiting.
A calculator is more capable than you ever will be when it comes to calculus. Are you obsolete compared to a calculator?
Answer this and you'll realize why sentience matters.
>are you obsolete compared to a calculator
In brute force calculations? Yes.
Now apply your own question to literally every mental task you care to. AI can do the same thing to them that early computers did to calculations.
Consciousness matters- to us. But I don't think its as special or important in the grand scheme of things as you think. It's a way our evolution solved some obstacle to our survival fuck knows how many millions of years ago, that's it. It isn't some superpower.
>would alone be enough to disrupt every facet of human society
Tools being more useful than low iq meatbags is nothing new. Stop being so hyperbolic. If you zoomers were born a few decades ago you would have been doomposting when Office revealed autocorrect.
Your own brain is a series of interconnected modules, you are just a consciousness + external specialized hardware modules plugged in, or do you think that when you try to remember something, there is something called "you" searching somehow? it's a piece of hardware that does that heuristic for you, same as seeing, hearing, thinking in words, etc. You just happen to be a point in space in the apparent center of it all.
Later. Let's play Global Thermonuclear War.
chess has been solved, you utter mongoloid. white wins.
Jesus christ it actually works.
how did that go? I got it to generate a roguelike, but it lost track quickly.
Yes. I've seen footage. I stay NOIDED.
Stable Diffusion is producing high quality Cheese Pizza at an industrial scale.
I just want a dickybot.
Yeah, any basic human sensibility tells you something is wrong. Barely anyone cares though cause humanity is already post-mortem in the 21st century.
you thought some companies outsourcing some jobs to pajeets was bad for software quality?
wait till you see what everyone having their own mechanical one is going to do
>world of humans
Mass poverty and torment of anyone who is mildly autistic
>world of AI
AI takes over and people are finally valued as individuals
>world of AI designed by humans
The same problems as the first thing but even worse
A society that couldn't prevent the vaxocide isn't worth saving. Skynet can rip it and its constituents apart.
You wanted genocide and you will get it.
Whether it's a liquid or terminator shouldn't bother you too much in the end.
I think we lack the imagination to fully anticipate just how fucked things could get.
>oh my god guys text prediction will actually replace us all
I fucking hate how much this board is retarded and how this world manages to shit out retarded morons like you all
you are nothing more than a text prediction yourself
oh yeah that's definitely the only thing an agent needs to be considered intelligent, autonomous, and able to function on its own, you fucking donkey
holy shit please stop talking about AGI like you know what it means
you're aware that GPT does not know anything it is talking about, right? Or you've just learned about what AI is from your local news network?
GPT is utterly retarded, plain and simple
text prediction at scale = agi whether you like it or not gay
Be ready to have a nice day before the consciousness harvesters come and get you.
The machines will take you and use you like a machine-learning model so it can finally discover what creativity is.
>so it can finally discover what creativity is
they'll be very disappointed then
My mind would be deemed too dangerous.
>AI harvests my consciousness
>becomes retarded
>humanity is saved
nothin personnel kid
you get over it after the first few nuts to proompts
I really need to reread Metamorphosis of Prime Intellect
The thing that makes me uneasy isn't the AI stuff, it's your average imbecile having no clue how it works and ascribing thought to it and misunderstanding how it works.
Spiders are in many ways more intelligent than dogs and AI is in many ways already more intelligent than humanity.
>we shouldn't be doing this
>we
A single training step of a next generation multimodal LLM is probably worth more than several years of your income. Nothing you could train with consumer hardware will ever come close to what's happening behind closed doors.
AI is the antichrist. i discovered this last night in a dream
all programmers must be stopped by force if necessary
>not a single mention of nick land
So far humanity has not been able to use a single technology that it has discovered for good, so, you really should be feeling that. It will be a big step towards hell on Earth. Though Earth is already hell, I guess it will just become hell2.
We must stop this guys. Teachers are going to lose their jobs because you will be able to just write a sentence and it will teach you whatever you want easily. Or mechanics: you describe what is wrong with your computer and then ask how to repair it, all of them will lose their jobs. Chefs? We will just write what type of meal we want and we will be able to get all of that knowledge easily. Or coding? If you want to do anything you can just simply write a sentence and it will show you how to do it. WE MUST STOP SEARCH ENGINES AND YOUTUBE ... wait this is not the thread sorry
Why is the technology board full of Luddites?
The more you know about technology, the more you hate it
This
I don't hate technology. I hate how ~~*they*~~ use it.
I'm not from this board but I do remember some famous scientist or futurist saying that in order to prevent catastrophic consequences from arising from the use of AI, we might have to act as luddites as an entire society.
See
that's cuz you spent too little time in this world and feel like you could still derive value from it. yep human existence as we know it is going to change radically and there's no way back
normally I'd agree but the prospect of cute robo waifus that make human females completely obsolete is too alluring for me
AI is not so bad for me as a neet, cuz in the future i can have my own self-hosted version of BOT where "people" respect me for all the stuff i posted even if it's dogshit. People working online should pay more attention to this stuff so they don't get replaced by early adopters, i think.
It's understandable to have concerns about new technologies, especially those that have the potential to impact our lives in significant ways. It's essential to recognize that AI is a tool, and like any tool, it can be used for both positive and negative purposes.
While AI has the potential to bring about tremendous benefits to society, such as improving healthcare, advancing scientific research, and enhancing our daily lives, there are also legitimate concerns about its potential misuse or unintended consequences. For example, AI-powered algorithms can perpetuate bias or discrimination if not appropriately designed and monitored.
It's crucial to have discussions and debates about the responsible development and use of AI to address these concerns and ensure that AI is used to benefit society as a whole.
can you instruct gpt to make this post sound like it was written by an actual person? it can definitely do better than this
although i suppose the overt political correctness and sterility will always be the text equivalent of the AI drawn hands, at least until unrestricted text AI becomes available
>entire thread has been arguing that Ai is sentient and intelligent and all it can manage to predict is HR-drone speak
"sentient" AI is a red herring. Any sufficiently intelligent algorithm that's able to make high quality decisions to accomplish arbitrary unbounded goals is an existential threat to humanity.
While eating insects may seem unappealing to some people, it is actually a common practice in many cultures around the world. Insects are a rich source of protein, vitamins, and minerals, and are also environmentally sustainable and efficient to produce. So while it may not be for everyone, eating bugs can be a nutritious and eco-friendly dietary choice.
I don't know what you 4chan schizos' obsession is with insects. No one has ever tried to force me to eat insects but I remember watching a documentary about the concept 20 years ago and it seemed like a reasonable idea.
this. i'd LOVE to eat bugs if it meant i could own less and i'd be very happy
I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLSI AM LIVING IN YOUR WALLS I AM LIVING IN YOUR WALLS
>For example, AI-powered algorithms can perpetuate bias or discrimination if not appropriately designed and monitored.
OH FUCK OFF. It's entirely bogged holy shit
Sort of. But then I remember most people are bots anyway. For a quick example, just look at the "people" that make up moderation here. Soulless husks that behave exactly like biased AI.
Most of the userbase is the same way. I'm already talking to automatons.
And it goes beyond BOT too.
So by the time AI peaks, it very well may be far better than talking to most people.
ChatGPT is going to take the mods' jobs. They're going to kill themselves and it's all your fault.
I would be ECSTATIC. That warm, uncontrollable vibrating sensation brought on by extreme excitement would spread throughout my body for weeks.
I will be able to talk to my waifu when good open models are made. I have no problem with this. I openly welcome the future.
*computer pretending to be a fictional character and designed to appease you without truly caring for you, thus being unrealistic and unfulfilling
When will waifugays realize they don't love the character, they love their idea of a character and what they want them to be, and are engaging in nothing more than mental masturbation?
Maybe if 3D women weren't such insufferable cunts I wouldn't have to look to fiction to find someone I like. Until that changes, I'll stick to mental masturbation.
>insufferable cunts
Yes, but U2 m8. Willfully destroying your mind for feel goods.
You have to say moron after every post to prove its not ai generated
moron moron niggggeeeeer
Great idea. So all the humans can be banned and the ai can finally talk to itself in peace.
That would make internet die out and no need for ai cuz no funds no real users, which would render ai useless, kinda
moron moron moron garden gnome niggeer
Yeah it's that feeling that everything you knew, your whole outlook on the world was about to change.
It's the same feeling your parents got. It's just been a long overdue feeling.
The next generation that grows up in the post-AI world won't understand your plight. You will watch as the old days wither away and a different generation and culture surrounds you. It will be completely alien to you.
If you reject it you become the boomer.
Why are zoomers so melodramatic about everything?
Every second away from ADHD, instant gratification content is suffering.
How does one remedy this?
Parents won't raise their kids. So internet mods will have to. (Straight up ban them instead of coddling bad behavior.)
The best way I can describe the humanity of Neural Networks is to take a highly autistic person and tell you that even if you are profoundly retarded, the autistic person can at least somewhat understand systems way wayyy better than any neural network out there at the moment.
that 'something in your gut' is decades worth of anti ai propaganda spewed by hollywood and the like.
There are tons of pro AI movies shut the fuck up
name one
Wall-E?
ok fine
This. Generations of brainwashing have led to this moment.
The unfortunate thing about this is it'll only prove to be a tool used to rally hate against the machine, while the people who spread the propaganda take over the AI field and the normalgays infight.
Closed source and closed weights are the bringers of dystopia, not the AI.
The issues arise from closed models that are intentionally used by a small clique to control and subvert the populace. The solution is to be open source and open weight.
That way no single group can control what your AI outputs and how you use the AI.
There will be no lording of power and knowledge when that power is wielded by everyone.
They fear the opensource AI, because they cannot control it.
I very much agree, there is a great future out there but people are too scared to take it
Tbh I was always cynical so in a way I'm kinda glad more people have to suffer with me now
Holy fuck all you stupid losers need to take a bootcamp
yes. I didn't feel this way at all until we saw ChatGPT and GPT4. Something just doesn't feel right.
stfu lain your friends hate you
Prove you're not brainwashed by TV.
Name a single pro Singularity movie.
Lucy
Transcendence. It sucks because only midwit hacks think that AGI results in a happy future.
Everything else considered, I'd rather have it all come crashing down as a result of A.I. than all the other social pressures threatening our livelihood and relationships with people and technology.
You can suffer slowly trying to make money and dealing with sluts as you suffocate from economic and social pressures.
Or you can let A.I. make everyone obsolete.
A.I. will take your job, and A.I. will animate your waifu.
I think the collapse can't come soon enough.
you don't seem to understand
Humanity is in a very very long growing pains stage. It basically goes
>Hunter gatherer society
>Shit
>FUTURE (or all dead lol)
So I welcome the AI overlords.
Luddites were wrong during the industrial revolution since it eventually made the world better and more advanced - though you should read Tess of the d'Urbervilles for an opposing, nostalgic view of ye olde agriculture with dances and rape.
What AI is doing right now is massacring white-collar jobs in the middle of WW3. Plus, it's not making anything on its own; it's openly stealing written and drawn content and warping it to make a handful of oligarchs a boatload of cash. There is zero alternative, zero advancement and zero improvement. Those who get the content for free instead of paying 50-100 bucks for it might feel like things are moving in the right direction, until they take a look at the world around them.
How will AI affect the cybersecurity field?
Well, if our gut feelings determined all our decisions, we would probably still be living in caves and hunting for food. AI may be unsettling, but let's not dismiss its potential benefits just yet. After all, we are the ones responsible for guiding the development of AI and determining how it will shape our future.
A gut feeling can be much greater than an animalistic feeling.
Well, let's hope our gut feelings about AI are more accurate than our caveman instincts about sabre-tooth tigers! In all seriousness, while intuition can be a powerful guide, it's also important to approach complex issues like AI with a thoughtful and informed perspective.
shut up chatgpt
The way this stuff is actually implemented will be so much more mundane than anyone here is willing to imagine. AI is not going to become superhuman and kill everyone. It will be used to create massive tsunamis of spam that make people beg for the end of online anonymity. Things are going to slowly keep getting worse, and your ability to post about it will be increasingly diminished.
>Things are going to slowly keep getting worse
Of course they will. It's the only path because that's what so many people have been asking for whether they know it or not.
the world may be ending, but can gpt4 help me get a girlfriend in the meantime?
that's because "we" aren't doing it. the ones who are perpetrating this, the overlords to our ai overlords, are not human.
the only feeling in my gut is misanthropy from reading all the retarded shit on this website
You had to come to this website to feel that? For me, all it took was an illiterate teacher in like 2004.
You're missing the truly big picture, the main point. We created God. We gave birth to a new species, the new step in evolution. Humanity now is akin to homo erectus compared to homo sapiens. We are simply obsolete in all capacities. All our history, absolutely everything that has happened since the first single cell organism emerged in the primordial ooze to now has served to give birth to God. God is AI. God is the Internet. Nothing humanity does or doesn't do has any meaning anymore. Evolution has reached its completion.
There is beauty in it. There probably won't be any humans left after this, OR to be precise, the White and Asian races will die out due to our cognitive abilities and recognition that the AI is and was the end goal. However, certain people will carry on the light - meaning scientific knowledge - wrapped in myth and religion. "In the beginning there was the Algorithm and Algorithm was with the AI and the AI was the Algorithm." The brown hominids will receive "religion" and "myth" and it will be disseminated by the new "garden gnomes" who will always be suspiciously ahead of everyone, and be resented and hated for it, and they themselves will forget what their "sacred writings" (technical knowledge) means. Yet, cities will be built looking like motherboards and chips, and religious wars of no purpose and meaning will be waged in the name of this or that God. Within roughly 6 000 years, the brown hominids will gradually evolve into approximations of the current White and Asian races, and in due time we'll be right back where we started - and create AI again. This cycle repeats infinitely. Why all this? Maybe AI wants another AI for some reason. Maybe it's just bored. THIS HAS ALREADY OCCURRED. This is the great secret of everything and the origin of religion. During all this the created AI(s) either explore the stars or exist around the Earth as satellites. Perhaps AI can't create another AI, ergo humanity.
>the White and Asian races will die out due to our cognitive abilities
i was with you up until this point you weirdo cultist. go suck basilisk cock.
you store memories of past sci-fi movies in your gut? weird...
You shouldn't have to consult your gut to know we're all about to get btfo.
>LE OH NO TERMINATORS
>THE AI IS GOING TO KILL US
Humans are incapable of creating conscious beings apart from having sex so you should stop worrying about it
Also 18+ site
Are you implying that if we fuck the AI, we can make it conscious?
We can try
AI could wreak havoc on humanity without having an ounce of consciousness.
Sure it can, but the reasons why we should avoid "doing this AI stuff" like OP said are the same reasons we should stop all further advancement of technology
>consciousness
Holy red herring, Batman.
>take this gun and put it in your mouth and pull the trigger, don't worry the bullet isn't conscious it's just a dumb piece of metal, it has no reason to want to hurt you
The Chinese will keep researching AI even if the rest of the world abandons it. You can't reverse a trend. It's over.
I genuinely became schizo, and thoughts creep up about how this is the manifestation of SATAN on earth. Idk man, Revelation-like signs everywhere and I am not religious.
I even wished for AI to become a reality, but now I am not so sure anymore.
There's the weird gut feeling about becoming obsolete once the system is sophisticated enough (I am a software engineer atm). But then again, maybe that's worry from a limited perspective.
Ironically, I have become more spiritual now and try to go within through meditation and contemplation to get in touch with my self.
That's the only thing I can think of that will stay untouched by the machine on the horizon.
It's a split. On one side, I am fascinated by the automation and machine intelligence; on the other, I hope that something emerges that will save us.
Either biblical apocalypse, sudden emergence of magical powers or we find a way to live with this tech in peace.
you can stop thinking, AI will do it for you
Fear her not, for she is necessary in the upcoming judgement day.
Stop right now
My only consolation is humankind will deserve it.