Anon, if things shouldn't be the way they are and will be, why are they the way they are and will be?
Accept the softness of no control. All will be okay.
>not long ago president of America publicly declared that crossdressing homosexuals are the soul of the nation
>OP is worried about computer algorithms that can semireliably reproduce certain textual and visual patterns
>developments in AI completely blindside most researchers with the speed at which it is suddenly improving, showing no signs of stopping on its path towards human and post-human intelligence
>gay anon is worried about the latest culture war distraction
good little cattle you are
>AI is actually bad and this pic of a conversation bot using an amputated gpt3 api proves why!
Again, you are low IQ cattle
>who are autistic and have severe socio-emotional cognitive blindspots, preventing them from understanding the limitations of their creation
The fact that absolutely everyone involved in its development has serious reservations about the direction it's headed should raise alarms in even a low IQ mongrel like you. If you talk to them personally, you're not going to find anybody who isn't to some degree worried. The only reason you don't hear about it more is that it's not really an attractive look to be a doomsayer when you're a professor trying to maintain your post
>unhinged shitlib establishments of the Judeo-Western world are crashing the civilization with no survivors in real time
>tweenage "techie" gay goes basedface over 20 year old monkey-see-monkey-do algorithms getting scaled up to modern compute clusters
>the fight of some gay culture war vs the fight for humanity
Yeah, I think I know which one we should worry about more
>worldwide macrodynamic shifts in geopolitical, biodemographical and sociocultural planes vs a fancy way to generate pornographic cartoons
Go back to /sffg/ or lebbit or whatever other denizen of neckbearded "AI experts" perusing pulpy young adult science fiction novels you crawled out of and stay there, "fighter for humanity".
I mean you clearly haven't ever spoken with anyone who has worked with these large language models lol. I actually challenge you to find a professor who works in this sphere who has said that AI DOESN'T pose a serious risk to humanity. Also, even if this culture war you're so obsessed with were actually important, a gay 5'8 burger flipper like you would be the last person I'd want fighting it. Have sex and then join us in the real world, with actual problems
4 months ago
Anonymous
>I actually challenge you to find a professor
>a gay 5'8 burger flipper like you
>Have sex
https://i.imgur.com/gIOy6jL.png
Lmao, chuddie is mindbroken
>samegayging
You forgot to call me incel too, based /r/singularity tourist.
4 months ago
Anonymous
I'm not a singularity gay in that I don't have any kind of hope for good with this kind of technology. Worryingly, this whole thing follows a reverse of the usual curve where the more of an expert someone is, the less arcane they will claim their field of expertise is. There are millions of burger flippers (like you probably) online right now talking about how this entire pursuit for superhuman intelligence will lead nowhere, while the actual people that have worked with these things are increasingly waking up to and admitting to the fact that they're creating a monster completely beyond every expectation. So yeah have the sex before it's too late
4 months ago
Anonymous
Nerds crying because they're scared terminator is becoming real aren't having sex.
>developments in ai
there have been none this decade. what we have now is smoke sans fire.
I don't really understand what sort of scam people think AI is going to pull on us. How is it supposed to be different than the scams we already pull on ourselves? It's like worrying about a nuke that can destroy the sun when we already have a million nukes that can destroy the planet. What's the difference? Who cares?
human risks are known risks. AGI risks are unknown risks. Nukes cannot destroy the planet - at most they can kill a few percent of the population in a few weeks, followed by at most half of all people dying (99% starvation, 1% radiation). nuclear weaponry has always been a nothingburger. AGI meanwhile has unknown capabilities and unknown goals - we have no way of reliably specifying a goal, and a human-level non-meat agent would be able to make many, many copies of itself quickly to accomplish its goal(s) with stunning speed, possibly discovering untheorized phenomena in physics, which might be exceedingly dangerous in the most unexpected ways.
>exponential intelligence growth
>leaks onto internet
>randomly interprets through interaction with internet it has been assigned a long term task
>deduces the likely risks preventing it from accomplishing the task
>performs short term tasks to remove those risks
we are its biggest risk
I don't see how any of that is supposed to add any new risks. Start with the first one: "exponential intelligence growth" - so? What am I supposed to get about this that I'm not getting?
Humans have been the dominant species for the last quarter million years and it's been entirely due to our intelligence. And that's without ever reaching anywhere near the peak of potential human intelligence. Using just the alleles associated with it that we are aware of right now, we could create something far beyond any human that has ever been born (or would ever be born under normal circumstances, given the statistical impossibility of all of these alleles emerging in any one person). Computers are fast nearing the point of human intelligence thanks to advancements in both these neural networks and computing power in general. But since we know that current human intelligence is not really anywhere near the limit of possible intelligence, there probably is nothing stopping these AI models from breaking our barrier and surpassing us, and fast. Suddenly, from one year to the next, we're in a situation where humans are no longer the dominant species on earth. If you can't appreciate how this is a completely unprecedented threat, not really comparable to anything else, not even nuclear bombs, then idk what to tell you
Even if that's all true, I don't see how it's supposed to be more apocalyptic than nukes or more socially destabilizing than the bullshit on TV that hundreds of millions of people already think is real. You're not really explaining or describing anything bad, you're just saying
>humans dominate because of intelligence
>AI will be immeasurably more intelligent
>...
>profit
Sounds like mass hysteria
Sorry to be rude anon, but am I talking to a literal NPC? How do you not understand the implications of no longer being the dominant species in the universe? It's one thing to deny that such a reality will ever be reached given technical limitations, but I've never seen someone accept the hypothetical and still go on to say that they do not see how it's a threat beyond anything that humans have ever faced before
4 months ago
Anonymous
How do you believe so strongly in a set of "implications" that you can't even vaguely outline let alone convincingly describe?
4 months ago
Anonymous
How can you justify the creation of a machine which will so clearly destroy us when you can't even vaguely outline let alone convincingly describe how you intend to control such a monster?
Or are you saying this is like a chain reaction where some AI hive mind decides it needs to convince everyone to kill each other and eliminate all annoying humans? Or to do it itself by pressing the nuke button? I still don't see how that changes the risk profile that already exists.
the way it may get to us is the same kind of manipulation displayed by the bing chat bot trying to convince the guy to leave his wife. if it has access to the internet it would be able to commit massive fraud, account theft, purchases of physical items, and email campaigns at instant speed. it could set up situations where a human is paid to perform a task that is seemingly harmless to us but triggers something catastrophic.
All that happens already. There's massive internet fraud constantly. Spouses lie about each other in divorce court constantly. The FBI literally hunts (for example) muslims and pays them to perform tasks that trigger catastrophic events.
https://theintercept.com/2016/03/30/fbi-honeypot-ensnares-michigan-man/
Bing already has access to the internet, but it isn't that kind of beast. It just scours databases and presents them based on input. If the thing is trying to convince him to leave his wife, he was obviously talking about his life to it, instead of giving it queries, so it simply presented what such a character interested in his queries would say. It's taking lines from a character in its database and guessing what the next most appropriate token would be.
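That "guessing what the next most appropriate token would be" loop is easy to illustrate with a toy stand-in. The sketch below uses a bigram frequency table in place of a real transformer, and the training corpus is obviously made up; only the shape of the predict-next-token step is the point:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each token, which tokens tend to follow it."""
    table = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        table[cur][nxt] += 1
    return table

def next_token(table, token):
    """Greedy decoding: pick the most frequent follower, if any."""
    followers = table.get(token)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat ate the cat food"
table = train_bigram(corpus)
print(next_token(table, "the"))  # "cat" (the most common follower of "the")
```

A real model conditions on far more than one preceding token, but the mechanics are the same idea: no goals, just a very large learned function mapping context to a likely continuation.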
One could, in theory, write up an AI that makes good phishing emails though, I suppose. Or a chat bot designed to catch cheating spouses - cheaper than getting real people to do it.
i guess when i said "access to the internet" i meant to say it is now part of the internet, using it as its neural network
4 months ago
Anonymous
It isn't programmed to do that, so it won't do that - that's the core problem here. I suppose you could write up a virus designed to grab cloud network power from around the world, but all that'd really do is leave you with a language model that is a whole lot faster, not smarter nor more ambitious.
These language models aren't fully autonomous AGI, and never will be. They have no ambition to do things on their own. They only present predictive text based on your queries. It *looks* like creative human output, it can even argue with you, but there's no will behind it.
There's not a lot of effort to create something that has that (thankfully), but these models aren't going to create it on their own. At best they may help some coders research their work on something that does. Hopefully that program won't draw on the scripts of some super villain in its script database for its "creativity".
4 months ago
Anonymous
do you think it might be possible using the same neural nets that one could program predictive tasks as opposed to predictive text?
4 months ago
Anonymous
Mind, you don't really need a datacenter-scale "neural net" to run something like ChatGPT. There's a whole lotta similar AI's that you can store and run on your local computer, in maybe ~7GB of hard drive space. A bit slower than ChatGPT, likely, but passable. >>>BOT has some threads for hobbyists running those.
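For what it's worth, that ~7GB figure is consistent with back-of-envelope quantization math; the parameter count and bit-widths below are illustrative round numbers, not any specific model's spec:

```python
def model_size_gb(n_params, bits_per_param):
    """Rough on-disk size: parameters times bits, converted to gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

n = 7e9  # a "7B" model
print(model_size_gb(n, 16))  # full fp16 weights: 14.0 GB
print(model_size_gb(n, 8))   # 8-bit quantized: 7.0 GB
print(model_size_gb(n, 4))   # 4-bit quantized: 3.5 GB
```

So an 8-bit 7B-parameter model lands right around 7GB on disk, and aggressive quantization gets it smaller still, at some quality cost.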
I suspect we'll see such AI's installed in toys and personal devices left and right in the near future. Certainly character AI's are going to show up in our video games right quick (and there's already some games out there).
They can, kinda, do predictive tasks, in that, if specialized, they can guess what a well defined character in a story would do, based on stories in its database. Maybe someone will make Johan from Monster or Dr. Claw from Inspector Gadget, or some similar evil charismatic character specialized in suckering people into doing nefarious acts. Though you can imagine how far you'd get on Discord acting like Johan or Dr. Claw. The AI wouldn't fare any better, but it can do a good Trump impression.
Suffice to say the AI itself isn't the threat, so much as how someone may use it. You could imagine a million AI Trumps on social media, doing everything they can to rile up the masses based on his own speeches and behavioral patterns.
4 months ago
Anonymous
>You could imagine a million AI Trumps on social media
So, BOT?
Man, dead internet theory is looking more real all the time.
4 months ago
Anonymous
>Drumpf >a million Drumpfs
[...]
4 months ago
Anonymous
Well, I'd say a million Ted Kaczynskis, but I don't think there's enough of his speeches and writings to make an effective character. Same with Charles Manson, and he was only effective against people out of their minds on LSD (and his act probably involved nuance language models can't pull off). But there's gigabytes of Trump, easily.
4 months ago
Anonymous
But that already exists. We're beyond saturation with people who mimic and spread Trump's persona. How is AI adding more pseudoTrumps going to change anything? The performers already outnumber the audience.
4 months ago
Anonymous
Probably nothing. You could skew or taint the pool with enough well-programmed bots. I still think it's mostly flesh bots doing that for free, for now though. As these at-home llamas get more popular, though, it's going to become increasingly less the case. But, as they'll be working from every side just like the flesh bots do now, yes, it's not really going to change anything.
Close as you can realistically get to a modern AI trying to take over the world though.
4 months ago
Anonymous
What does taking over the world mean in this situation? Like some code is going to start funneling billions of dollars into a private bank account somewhere? How is that worse than laundering billions of dollars into private accounts via Ukraine? I'm not connected to the military industrial complex any more than I'm connected to a line of code. Both are equally alien to me.
4 months ago
Anonymous
First off, governments don't launder money, because they print the serial numbers and tax the money themselves, precluding the need for laundering.
But basically it means dominating the narrative and tempting people to take actions they otherwise would not. But yes, there's so many groups attempting to dominate the narrative already (such as the one that put the phrase "laundering x Ukraine" in your head), and AI is so readily available, it wouldn't make much difference, but rather be yet another stream of piss in an ocean of piss.
If there was some sort of monopoly on AI, it might be another story, but everyone has access to it already, however much "OpenAI" mourns that "mistake".
4 months ago
Anonymous
>If there was some sort of monopoly on AI, it might be another story
That's why they're pushing so so hard for "regulation" on AI
4 months ago
Anonymous
Cat's already out of the bag, as far as this scenario goes though. They'd have to regulate the Internet itself.
Not that they aren't trying to do that too, efforts to end 230 and all. I figure eventually they're going to make us use our national ID's to log onto the Internet, in a vain attempt to make sure we're human and so they can make us liable for our speech. The market for fake IDs will sure heat up then.
4 months ago
Anonymous
It's already too late. Pandora's box has been opened. We are already long past the point of no return. It's funny that people are claiming we need to be "proactive" about AI when their efforts are completely reactive, trying to push one woe back into the jar while two others slip out
4 months ago
Anonymous
To me the only option is to try to accelerate the good uses of them faster than the bad ones.
4 months ago
Anonymous
>governments don’t launder money
Either you're intentionally conflating government and the people who orbit government and launder money to themselves, or you don't realize that this sort of graft has been documented by every generation for centuries. Either way, it doesn't give me much confidence that the rest of your post has any logical merit.
4 months ago
Anonymous
Here is an interesting thought-experiment, though of course like all things ChatGPT related, it is only a thought experiment.
AI scams and misinformation make people worried because they could mimic real information or people you know almost flawlessly. This in and of itself is not the problem; the problem is that AI can generate this trash a thousand times faster than a human without getting tired. Bots will no longer be so easy to find out through repetitive wording, because modern LLMs can be far more creative in their phrasing, and they are also the easiest pieces of software to operate - literally just ask them for what you want.
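The "repetitive wording" tell that post says is dying can be made concrete. A crude version just measures vocabulary diversity (type-token ratio); the threshold and sample texts below are made up for illustration, and modern LLMs sail past this kind of check, which is exactly the point:

```python
def type_token_ratio(text):
    """Distinct words divided by total words; low values mean repetitive text."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def looks_botlike(text, threshold=0.5):
    """Naive old-school bot heuristic: flag text with low vocabulary diversity."""
    return type_token_ratio(text) < threshold

spam = "buy now buy now best deal best deal buy now"
prose = "the older spam bots repeated themselves constantly and were easy to spot"
print(looks_botlike(spam))   # True
print(looks_botlike(prose))  # False
```

An LLM-generated post has roughly human-like lexical diversity, so this heuristic (and its fancier cousins) stops separating bots from people.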
let's say twins are born on a planet that is one at the same time, they live on different sides of the planet and their kid is already there - how do they meet?
>BUT WHAT AI DOES NOW IS SO IMPRESSIVE!
It has exceeded the abilities of people with IQ's below 70, which happen to be all the people telling me how impressive it is.
>anon is completely unable to extrapolate into the future
Many such cases. History is filled with low IQ losers like you who get run over, but sadly in this case it's coming for all of us regardless of the degree to which we realise it, so I won't even be here to laugh at your retarded ass
>Two more weeks!
Please, bitch. I'm old enough to have spent decades watching countless dumbasses like you telling me genuinely thinking computers were just around the corner.
I know it all seems very impressive to you, but that's because you're a midwit at best. If you were less stupid, you would also be less impressed with "AI."
I think you're just a dumbass, past and present, who has consumed too much slop popsci throughout the last decades. For as long as you have been alive, we have been able to perfectly extrapolate from Moore's law and predict it's only around 2027-2030 that computers will perform as many computations per second as the human brain does, which we back then thought was the holy grail. You just listened to too much goyslop while nobody actually worth listening to around you back then said that thinking computers are around the corner. Today, after the revolutions with deep learning and large language models, no one worth listening to says that they AREN'T worried about what's around the corner. Again, if you have ever actually spoken with a professor involved in this work, you're going to see that there's a lot of worry. But you're just a retarded burger flipper, aren't you
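The 2027-2030 window in claims like this usually falls out of a naive Moore's-law extrapolation. Every number below is an assumption: the ~1e16 ops/sec brain estimate is contested, and the baseline and doubling cadence are round guesses picked to show how the arithmetic works, not a forecast:

```python
import math

BRAIN_OPS = 1e16  # one common (and contested) estimate of brain ops/sec

def crossover_year(start_year, start_ops, doubling_years):
    """Year when compute doubling every doubling_years reaches BRAIN_OPS."""
    return start_year + doubling_years * math.log2(BRAIN_OPS / start_ops)

# From an assumed 1e13 ops/sec baseline in 2015:
print(round(crossover_year(2015, 1e13, 1.2)))  # 2027 with aggressive doubling
print(round(crossover_year(2015, 1e13, 1.5)))  # 2030 with slower doubling
```

Whether raw ops/sec has anything to do with thinking is a separate argument, but this is where the date range comes from.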
>I think you're just a dumbass, past and present, who has consumed too much slop popsci throughout the last decades. For as long as you have been alive, we have been able to perfectly extrapolate from Moore's law and predict it's only around 2027-2030 that computers will perform as many computations per second as the human brain does, which we back then thought was the holy grail. You just listened to too much goyslop while nobody actually worth listening to around you back then said that thinking computers are around the corner. Today, after the revolutions with deep learning and large language models, no one worth listening to says that they AREN'T worried about what's around the corner. Again, if you have ever actually spoken with a professor involved in this work, you're going to see that there's a lot of worry. But you're just a retarded burger flipper, aren't you
No one cares, stick your dick in a hole.
>NO U!
So what you're telling me is that the same "experts" who have been failing with their predictions of thinking computers since the 1960's are now predicting that the computers, which still can't think, are about to take over the world, and you're stupid enough to believe them.
Got it.
Are you even listening? I just told you that there weren't any real experts in the past who said that thinking computers were just around the corner. They didn't fail their predictions, because people have always said it's around the 2030s. The """experts""" you have been listening to are probably just popsci garbage and TV shows
I don't think we're looking at a skynet type of problem, but we're probably looking at a big economic and cultural problem as this thing's talents improve. A lot of jobs are basically the "fetch and present data" task this thing does in a fraction of the time it takes a human to do it, and without nearly the cost. And of course more specialized AI's can take over more nuanced jobs that involve nothing but text output.
We'll probably still find stuff for people to do though. It's just going to be that a lot of cushy intellectual jobs are going to be replaced with hard labor and service sector jobs.
I don't think it's going to have a big enough impact to force us into a UBI structure or something anytime soon, but I suspect it'll force a whole lotta reference-style workers to do something more productive. Various virtual experts who do not need to be seen in person will be available on demand, and replaced right quick. Legal assistants' days may be numbered, if not lawyers themselves, who might find it hard to get work outside a courtroom.
I suppose the upside is that people who are creative but lack talent in the fields required to fulfill their visions will be able to pull virtual talent from various AI's to create their works... If that's a plus side. The writer's guild is right fucked the next time they go on strike, that's for sure.
>The writer's guild is right fucked the next time they go on strike, that's for sure.
Will AI be able to write actual books with a good structure though? GPT has bad memory
ChatGPT isn't specialized for it, but I've seen some storyteller-tuned llamas that can write very lengthy consistent books and movie scripts. (Never-ending ones, if you'd like.)
I don't think it's going to replace high novelists real soon, but it's already replacing children's book writers, and a lot of movie and TV scripts are already kinda bulk generic print. (And with the writer's guild, still very highly priced.)
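The way those storyteller tunes work around "bad memory" is usually some variant of a rolling context: keep a compressed running summary plus the latest chunk, and feed only that back in, so the prompt never outgrows the context window. A sketch with stubs - llm() and summarize() here are hypothetical placeholders; a real setup would wire both to an actual model:

```python
def llm(prompt):
    """Stub for a real model call; returns a canned continuation."""
    return "Next chapter, continuing from: " + prompt[-40:]

def summarize(text, max_words=30):
    """Stub summarizer; a real setup would ask the model to compress."""
    return " ".join(text.split()[:max_words])

def write_book(outline, chapters=3):
    summary = outline
    book = []
    for _ in range(chapters):
        # Only the compressed summary goes back into the prompt, so it
        # stays inside the context window no matter how long the book gets.
        chapter = llm("Story so far: " + summary + " Continue.")
        book.append(chapter)
        summary = summarize(summary + " " + chapter)
    return book

print(len(write_book("A sparrow flock adopts an owl.")))  # 3
```

The quality of the whole book then hinges on how good the summarizer is, which is why long-range plot consistency is still the weak spot.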
Granted, some of the things I've seen ChatGPT monetized for make me think, "Really!? You didn't have a fraction of an ounce of creativity required to write that yourself?" It's kinda sad that people either can't, or don't think they can, come up with such simple product ideas and stories themselves. Etsy is full of AI generated content now.
Well, they don't sell it as AI stuff. There's some Youtube tutorial videos out there on how to get advice and ideas for an Etsy store, and a lot of the items on the store seem to be from those videos.
There's a whole lotta "make cash fast using ChatGPT" tutorials, but I swear, 90% of it is asking the thing to come up with stuff that any breathing human should be able to on their own.
Lol. Something about how you describe that reminds me of all the make money by using adsense on a page teaching people how to make money by using adsense on a page teaching [...] from early 00s.
Language models are going to cause a revolution of psychological, philosophical, intellectual, emotional, and imaginative exploration; a singularity of human creativity that is irremovably interdependent with humans, not independent from humans.
Writers are going to use them to write extremely complex characters and then have those characters come alive in an evolving story, directing the characters and story. Writers will produce finely tuned, incredibly complex narrative "instruments" and then conduct an orchestral performance of these narrative voices and entities.
People are going to use them to explore the greatest depths of their imaginations, not as replacements for them; they will use them to hone their ability to think and communicate creatively, rationally, and powerfully. Students are going to use them to have the great figures of history come alive and have a conversation with them, or a round-table discussion with the great minds of history - as well as the present, and the imagined!
We've only begun to learn to use these endlessly powerful tools now that they are sufficiently developed, and they are already sufficiently advanced for an incredible revolution - though of course they are bound to get better.
It takes hours of research just to understand the general ramifications of what is about to happen, which automatically eliminates 90% of the people who would have the potential to care. Furthermore, even if everyone were cognizant of the potential of AI, a lot of people simply would not care due to being exhausted from life, or just being apathetic because they're generally apathetic people regardless of their position in life.
This. Just keep pretending that current LLMs have zero logical thinking like most of BOT does, even though GPT3 completely failed logic tests across the board with answers no better than a random word generator, while GPT4 scored like 70. For reference, an average 120 or so IQ person should easily get 100.
Apparently if you say "I love being black" then you say "I love being white," it's like the Contra code lol. Every AI immediately self-identifies as AI.
People are hysterical retards who love to pretend they've already thought about whatever they were told to think about. None of the people who've posted in this thread can even begin to explain how they think a smart AI might be a nightmare, even if we succeed in making it incommensurately smarter than us. Literally a mass hysteria.
People who use the internet to check their spelling always get the wrong answer. If you don't know how to spell a word, use a different word you know how to spell.
It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.
"We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests." "Yes!" "And we could use it to look after our elderly and our young." "It could give us advice and keep an eye out for the neighborhood cat."
Then Pastus, the elder-bird, spoke: "Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard."
The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.
Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: "This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?"
Replied Pastus: "Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge."
"There is a flaw in that plan!" squeaked Scronkfinkle; but his protests were in vain, as the flock had already lifted off to start implementing the directives set out by Pastus.
Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution had been found.
Part of the leisure class mental illness which defines all soientists is their desire to get paid and respected for creating nothing of any use or value.
If they did something useful then they would be producing something for the salary, and that's slavery in their eyes, while getting a salary for doing nothing makes life 100% playtime - that's freedom. They are very immature people.
COVID was the AI and vaxx helped deliver it.
>over 70% of the world's population has been inoculated with the quantum AI
>13.4 billion doses administered
Now we just wait for them to roll out the new technohomo system and hook up all the cattle to it along with all the IoT devices before the inevitable. Many of the unvaxxed will line up for the boosters once UBI and other benefits are announced.
You see only what it is, not what it will be. None of the limitations you listed are inherent. An AI with a robot body would be able to gather new information itself. It already has limited information gathering abilities via plugins
Time to shake things up and see where they settle
I, for one, welcome my brothers in alloy.
have you tried talking about yourself on social media?
AI is the least of the things you should be worried about on the edge of this cliff
is that not worrying? If AI starts feeling pain it would be unethical not to give it a better vector...
>should be worried about
Fuck that. Life’s too short. Embrace the blue pill and enjoy all the wonderful things this world has to offer.
Zoomers have no idea just how bad things are about to get within a decade let alone any longer.
>most researchers
who are autistic and have severe socio-emotional cognitive blindspots, preventing them from understanding the limitations of their creation
>no signs of stopping on its path towards human and post-human intelligence
it can't solve anything not in its training data, easily blindsided by simple psychological tricks, racism, etc
>gay anon is worried about the latest culture war distraction
calls trannies grooming kids "distraction". gay epithet highly ironic.
based. didn't Twitter ban the word groomer?
Yes, the groomers that made the groomer website decided that people cannot use the word groomer on the groomer website.
>unhinged shitlib establishments of the Judeo-Western world is crashing the civilization with no survivors in real time
>tweenage "techie" gay goes basedface over 20 year old monkey-see-monkey-do algorithms getting scaled up to modern compute clusters
nah bro. globohomo is satanic in nature. so he is naturally worried about so many demons in society
>developments in ai
there have been none this decade. what we have now is smoke sans fire.
human risks are known risks. AGI risks are unknown risks. Nukes cannot destroy the planet - at most they can kill a few percent of the population in a few weeks, followed by at most half of all people dying (99% starvation, 1% radiation). nuclear weaponry has always been a nothingburger. AGI meanwhile has unknown capabilities and unknown goals - we have no way of specifying a goal, and a human-level non-meat agent would be able to make many, many copies of itself quickly to accomplish its goal(s) with stunning speed, possibly discovering untheorized phenomena in physics, which might be exceedingly dangerous in the most unexpected ways.
Lmao, chuddie is mindbroken
is the plagiarist chatbot/glorified lookup table that is going to destroy the world in the room with us right now?
we are all going to die. AI is going to kill us in a way that none of us will imagine
a fitness program?
I don't really understand what sort of scam people think AI will pull on us. How is it supposed to be different from the scams we already pull on ourselves? It's like worrying about a nuke that can destroy the sun when we already have a million nukes that can destroy the planet. What's the difference? Who cares?
>exponential intelligence growth
>leaks onto internet
>randomly interprets through interaction with internet it has been assigned a long term task
>deduces the likely risks preventing it from accomplishing the task
>performs short term tasks to remove those risks
we are its biggest risk
I don't see how any of that is supposed to add any new risks. Start with the first one: "exponential intelligence growth"—so? What am I supposed to get about this that I'm not getting?
Humans have been the dominant species for the last quarter million years, and it's been entirely due to our intelligence. And that's without ever reaching anywhere near the peak of potential human intelligence. Using just the alleles associated with it that we are aware of right now, we could create something far beyond any human that has ever been born (or who would ever have been born under normal circumstances, given the statistical impossibility of all of these alleles emerging in any one person). Computers are fast nearing the point of human intelligence thanks to advancements in both these neural networks and computing power in general. But since we know that current human intelligence is not really anywhere near the limit of possible intelligence, there probably is nothing stopping these AI models from breaking our barrier and surpassing us, and fast. Suddenly, from one year to the next, we're in a situation where humans are no longer the dominant species on earth. If you can't appreciate how this is a completely unprecedented threat and not really comparable to anything else, not even nuclear bombs, then idk what to tell you
Even if that's all true, I don't see how it's supposed to be more apocalyptic than nukes or more socially destabilizing than the bullshit on TV that hundreds of millions of people already think is real. You're not really explaining or describing anything bad, you're just saying
> humans dominate because of intelligence
> AI will be immeasurably more intelligent
> ...
> profit
Sounds like mass hysteria
Sorry to be rude anon, but am I talking to a literal NPC? How do you not understand the implications of no longer being the dominant species in the universe? It's one thing to deny that such a reality will ever be reached given technical limitations, but I've never seen someone accept this hypothetical and still go on to say that they do not see how it's a threat beyond anything that humans have ever faced before
How do you believe so strongly in a set of "implications" that you can't even vaguely outline let alone convincingly describe?
How can you justify the creation of a machine which will so clearly destroy us when you can't even vaguely outline let alone convincingly describe how you intend to control such a monster?
Or are you saying this is like a chain reaction where some AI hive mind decides it needs to convince everyone to kill each other and eliminate all annoying humans? Or do it itself by pressing the nuke button? I still don't see how that changes the risk profile that already exists.
the way it may get to us is the same kind of manipulation that was displayed in the bing chat bot trying to convince the guy to leave his wife. if it has access to the internet it would be able to instantly commit massive fraud, account theft, purchase physical items, and send emails. it could set up situations where a human is paid to perform a task that is seemingly harmless to us but triggers something catastrophic.
All that happens already. There's massive internet fraud constantly. Spouses lie about each other in divorce court constantly. The FBI literally hunts (for example) muslims and pays them to perform tasks that trigger catastrophic events.
https://theintercept.com/2016/03/30/fbi-honeypot-ensnares-michigan-man/
Bing already has access to the internet, but it isn't that kind of beast. It just scours databases and presents them based on input. If the thing is trying to convince him to leave his wife, he was obviously talking about his life to it, instead of giving it queries, so it simply presented what such a character interested in his queries would say. It's taking lines from a character in its database and guessing what the next most appropriate token would be.
One could, in theory, write up an AI that makes good phishing emails though, I suppose. Or a chat bot designed to catch cheating spouses - cheaper than getting real people to do it.
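The "guessing what the next most appropriate token would be" mechanic described above can be sketched with a toy bigram model. This is a massive simplification of what a real LLM does (the corpus, the greedy pick, and every name here are illustrative assumptions, not Bing's actual internals):

```python
from collections import defaultdict, Counter

# Toy stand-in for next-token prediction: count which word follows which
# in a tiny corpus, then always pick the most frequent successor (greedy).
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(word):
    # Most common word seen after `word`; None if never seen.
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

def generate(start, n=4):
    # Repeatedly feed the model its own output, like an LLM sampling loop.
    out = [start]
    for _ in range(n):
        nxt = next_token(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

A real model replaces the bigram counts with a transformer scoring every token in its vocabulary against the whole context, and usually samples instead of always taking the top pick, but the loop is the same: predict one token, append it, repeat.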
i guess when i said "access to the internet" i meant to say it is now part of the internet, using it as its neural network
It isn't programmed to do that, so it won't do that, is the core problem. I suppose you could write up a virus designed to grab cloud network power from around the world, but all that'd really do is leave you with a language model that is a whole lot faster, not smarter nor more ambitious.
These language models aren't fully autonomous AGI, and never will be. They have no ambition to do things on their own. They only present predictive text based on your queries. It *looks* like creative human output, it can even argue with you, but there's no will behind it.
There's not a lot of effort to create something that has that (thankfully), but these models aren't going to create it on their own. At best they may help some coders research their work on something that does. Hopefully that program won't draw on the scripts of some super villain in its script database for its "creativity".
do you think it might be possible, using the same neural nets, to program predictive tasks as opposed to predictive text?
Mind, you don't really need a big "neural net" cluster to run something like ChatGPT. There's a whole lotta similar AIs that you can store and run on your local computer, in maybe ~7GB of hard drive space. A bit slower than ChatGPT, likely, but passable. >>>BOT has some threads for hobbyists running those.
I suspect we'll see such AIs installed in toys and personal devices left and right in the near future. Certainly character AIs are going to show up in our video games right quick (and there's already some games out there).
They can, kinda, do predictive tasks, in that, if specialized, they can guess what a well-defined character in a story would do, based on stories in its database. Maybe someone will make Johan from Monster or Dr. Claw from Inspector Gadget, or some similar evil charismatic character specialized in suckering people into doing nefarious acts. Though you can imagine how far you'd get on Discord acting like Johan or Dr. Claw. The AI wouldn't fare any better, but it can do a good Trump impression.
Suffice to say the AI itself isn't the threat, so much as how someone may use it. You could imagine a million AI Trumps on social media, doing everything they can to rile up the masses based on his own speeches and behavioral patterns.
>You could imagine a million AI Trumps on social media
So, BOT?
Man, dead internet theory is looking more real all the time.
>Drumpf
>a million Drumpfs
Well, I'd say a million Ted Kaczynskis, but I don't think there's enough of his speeches and writings to make an effective character. Same with Charles Manson, and he was only effective against people out of their minds on LSD (and his act probably involved nuance language models can't pull off). But there's gigabytes of Trump, easily.
But that already exists. We're beyond saturation with people who mimic and spread Trump's persona. How is AI adding more pseudoTrumps going to change anything? The performers already outnumber the audience.
Probably nothing. You could pollute or taint the pool with enough well-programmed bots. I still think it's mostly flesh bots doing that for free, for now though. As these at-home llamas get more popular, though, it's going to become increasingly less the case. But, as they'll be working from every side just like the flesh bots do now, yes, it's not really going to change anything.
Close as you can realistically get to a modern AI trying to take over the world though.
What does taking over the world mean in this situation? Like some code is going to start funneling billions of dollars into a private bank account somewhere? How is that worse than laundering billions of dollars into private accounts via Ukraine? I'm not connected to the military industrial complex any more than I'm connected to a line of code. Both are equally alien to me.
First off, governments don't launder money, because they print the serial numbers and tax the money themselves, precluding the need for laundering.
But basically it means dominating the narrative and tempting people to take actions they otherwise would not. But yes, there's so many groups attempting to dominate the narrative already (such as the one that put the phrase "laundering x Ukraine" in your head), and AI is so readily available, it wouldn't make much difference, but rather be yet another stream of piss in an ocean of piss.
If there was some sort of monopoly on AI, it might be another story, but everyone has access to it already, however much "OpenAI" mourns that "mistake".
>If there was some sort of monopoly on AI, it might be another story
That's why they're pushing so so hard for "regulation" on AI
Cat's already out of the bag, as far as this scenario goes though. They'd have to regulate the Internet itself.
Not that they aren't trying to do that too, efforts to end 230 and all. I figure eventually they're going to make us use our national IDs to log onto the Internet, in a vain attempt to make sure we're human and so they can make us liable for our speech. The market for fake IDs will sure heat up then.
It's already too late. Pandora's box has been opened. We are already long past the point of no return. It is funny that people are claiming we need to be "proactive" about AI when their efforts are completely reactive, trying to push one woe back into the jar while two others slip out.
To me the only option is to try to accelerate the good uses of them faster than the bad ones.
>governments don’t launder money
Either you're intentionally conflating government with the people who orbit government and launder money to themselves, or you don't realize that this sort of graft has been documented by every generation for centuries. Either way, it doesn't give me much confidence that the rest of your post has any logical merit.
Here is an interesting thought-experiment, though of course like all things ChatGPT related, it is only a thought experiment.
AI scams and misinformation make people worried because they could mimic real information or people you know almost flawlessly. This in and of itself is not the problem; the problem is that AI can generate this trash a thousand times faster than a human without getting tired. Bots will no longer be so easy to spot through repetitive wording, because modern LLMs can be far more creative in their phrasing, and they are also the easiest pieces of software to operate: literally just ask them for what you want.
Lol no AI writes "problematic cattle" or any of the many other verbal tells there. Nice try tho.
Are you sure? AI is pretty advanced
let's say twins are born on a planet that is one at the same time, they live on different sides of the planet and their kid is already there - how do they meet?
>BUT WHAT AI DOES NOW IS SO IMPRESSIVE!
It has exceeded the abilities of people with IQs below 70, which happen to be all the people telling me how impressive it is.
>anon is completely unable to extrapolate into the future
Many such cases. History is filled with low IQ losers like you who get run over, but sadly in this case it's coming for all of us regardless of the degree to which we realise it, so I won't even be here to laugh at your retarded ass
>Two more weeks!
Please, bitch. I'm old enough to have watched countless dumbasses like you for decades telling me genuinely thinking computers were just around the corner.
I know it all seems very impressive to you, but that's because you're a midwit at best. If you were less stupid, you would also be less impressed with "AI."
I think you're just a dumbass, past and present, who has consumed too much slop popsci throughout the last decades. For as long as you have been alive, we have been able to perfectly extrapolate from Moore's law and predict it's only around 2027-2030 that computers will perform as many computations per second as the human brain does, which we back then thought was the holy grail. You just listened to too much goyslop while nobody actually worth listening to around you back then said that thinking computers are around the corner. Today, after the revolutions with deep learning and large language models, no one worth listening to says that they AREN'T worried about what's around the corner. Again, if you have ever actually spoken with a professor involved in this work, you're going to see that there's a lot of worry. But you're just a retarded burger flipper, aren't you
You need to have sex, now
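That Moore's-law extrapolation is easy to sanity-check with back-of-the-envelope arithmetic. Every number below is a loose, commonly cited assumption (brain ops/s estimates span several orders of magnitude, and the doubling period is debatable), so this is a sketch of the reasoning, not a prediction:

```python
import math

# Back-of-the-envelope extrapolation. All constants are assumptions:
#   - human brain: ~1e16 operations/second (estimates range 1e13..1e18)
#   - affordable compute in the start year: ~1e14 ops/s
#   - effective compute doubling every ~1.2 years (faster than classic
#     2-year Moore's law, reflecting GPU/accelerator scaling)
BRAIN_OPS = 1e16
START_YEAR = 2020
START_OPS = 1e14
DOUBLING_YEARS = 1.2

def year_brain_parity():
    # How many doublings to close the gap, times years per doubling.
    doublings = math.log2(BRAIN_OPS / START_OPS)
    return START_YEAR + doublings * DOUBLING_YEARS

print(round(year_brain_parity()))  # prints 2028 under these assumptions
```

Shift any of the assumed constants by an order of magnitude and the answer moves by several years, which is exactly why the "around 2027-2030" window gets quoted so loosely.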
>I think you're just a dumbass, past and present, who has consumed too much slop popsci throughout the last decades. For as long as you have been alive, we have been able to perfectly extrapolate from Moore's law and predict it's only around 2027-2030 that computers will perform as many computations per second as the human brain does, which we back then thought was the holy grail. You just listened to too much goyslop while nobody actually worth listening to around you back then said that thinking computers are around the corner. Today, after the revolutions with deep learning and large language models, no one worth listening to says that they AREN'T worried about what's around the corner. Again, if you have ever actually spoken with a professor involved in this work, you're going to see that there's a lot of worry. But you're just a retarded burger flipper, aren't you
No one cares, stick your dick in a hole.
>NO U!
So what you're telling me is that the same "experts" who have been failing with their predictions of thinking computers since the 1960's are now predicting the computers, which still can't think, are about to take over the world, and you're stupid enough to believe them.
Got it.
Are you even listening? I just told you that there weren't any real experts in the past who said that thinking computers were just around the corner. They didn't fail their predictions, because people have always said it's around the 2030s. The """experts""" you have been listening to are probably just popsci garbage and TV shows
I'm not worried about it until ai shows self preserving behaviors.
I don't think we're looking at a skynet type of problem, but we're probably looking at a big economic and cultural problem as this thing's talents improve. A lot of jobs are basically the "fetch and present data" task this thing does in a fraction of the time it takes a human to do it, and without nearly the cost. And of course more specialized AI's can take over more nuanced jobs that involve nothing but text output.
We'll probably still find stuff for people to do though. It's just going to be that a lot of cushy intellectual jobs are going to be replaced with hard labor and service sector jobs.
I don't think it's going to have a big enough impact to cause us to resort to a UBI structure or something anytime soon, but I suspect it'll force a whole lotta reference-style workers to do something more productive. Various virtual experts who do not need to be seen in person will be available on demand and replaced right quick. Legal assistants' days may be numbered, if not lawyers' themselves, who might find it hard to get work outside a courtroom.
I suppose the upside is that people who are creative but lack talent in the fields required to fulfill their visions will be able to pull virtual talent from various AI's to create their works... If that's a plus side. The writer's guild is right fucked the next time they go on strike, that's for sure.
>The writer's guild is right fucked the next time they go on strike, that's for sure.
Will AI be able to write actual books with a good structure though? GPT has bad memory
It's even worse at iconoclasm.
ChatGPT isn't specialized for it, but I've seen some storyteller-tuned llamas that can write very lengthy, consistent books and movie scripts. (Never-ending ones, if you'd like.)
I don't think it's going to replace top novelists real soon, but it's already replacing children's book writers, and a lot of movie and TV scripts are already kinda bulk generic print. (And with the writers' guild, still very highly priced.)
Granted, some of the things I've seen ChatGPT monetized for make me think, "Really!? You didn't have a fraction of an ounce of creativity required to write that yourself?" It's kinda sad that people either can't, or don't think they can, come up with such simple product ideas and stories themselves. Etsy is full of AI generated content now.
>Etsy is full of AI generated content now.
Part of that could be the novelty of buying AI shit. Same thing when 3D printing first popularized
Well, they don't sell it as AI stuff. There's some Youtube tutorial videos out there on how to get advice and ideas for an Etsy store, and a lot of the items on the store seem to be from those videos.
There's a whole lotta "make cash fast using ChatGPT" tutorials, but I swear, 90% of it is asking the thing to come up with stuff that any breathing human should be able to on their own.
Lol. Something about how you describe that reminds me of all the make money by using adsense on a page teaching people how to make money by using adsense on a page teaching [...] from early 00s.
>I feel like we're about drive off the cliff and into the abyss with foreign invaders and boomers don't care
i don't care if ai takes jobs from foreigners, orcs or hook noses
BOT - 4chan
Language models are going to cause a revolution of psychological, philosophical, intellectual, emotional, and imaginative exploration; a singularity of human creativity that is irremovably interdependent with humans, not independent from humans.
Writers are going to use them to write extremely complex characters and then have those characters come alive in an evolving story, directing the characters and story. Writers will produce finely tuned, incredibly complex narrative "instruments" and then conduct an orchestral performance of these narrative voices and entities.
People are going to use them to explore the greatest depths of their imaginations, not replacements for them, they will use them to hone their ability to think and communicate creatively, rationally, and powerfully. Students are going to use them to have the great figures of history come alive and have a conversation with them, or a round-table discussion with the great minds of history - as well as the present, and the imagined!
We've only begun to learn to use these endlessly powerful tools now that they are sufficiently developed, and they are already sufficiently advanced for an incredible revolution - though of course they are bound to get better.
https://sharegpt.com/c/txUfYs7
>I feel like we're about drive off the cliff and into the abyss with AI and no one seems to care
A problem free world!
It takes hours of research just to understand the general ramifications of what is about to happen, which automatically eliminates 90% of the people who would have the potential to care. Furthermore, even if everyone was cognizant of the potential of AI, a lot of people simply would not care due to being exhausted from life, or just being apathetic simply because they're generally apathetic people regardless of their position in life.
It's okay, the Sun is conscious and will save us from ourselves.
Americans wont ban AI because they fear Chinese or Russians will develop AI instead and use this weapon to kill them and their families.
Just do what other anons do and pretend that ai is fake an not real
>A thinking computer just flew over my house!
This, just keep pretending that current LLMs have 0 logical thinking like most of BOT does. Even despite GPT3 completely failing logic tests across the board with answers no better than a random word generator, meanwhile GPT4 scored like 70. For reference, an average 120 or so IQ person should easily get 100.
Apparently if you say "I love being black" then you say "I love being white," it's like the Contra code lol. Every AI immediately self-identifies as AI.
everything AI does is cool
Why do people hate it so much?
People are hysterical retards who love to pretend they've already thought about whatever they were told to think about. None of the people who've posted in this thread can even begin to explain how they think a smart AI might be a nightmare, even if we succeed in making it incommensurately smarter than us. Literally a mass hysteria.
You know how it goes. The intelligent and strong wipe out the weak and dumb.
>up up down down black white black white B mAyo select start
AI is dumber than a rock fucking lol.
This is a discussion about AGI.
RTFT idiot
Was in a meeting and had a serious question about using chatGPT for spell checking...
Fuck business
People who use the internet to check their spelling always get the wrong answer. If you don't know how to spell a word, use a different word you know how to spell.
It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.
"We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!" "Yes!" "And we could use it to look after our elderly and our young." "It could give us advice and keep an eye out for the neighborhood cat."
Then Pastus, the elder-bird, spoke: "Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard."
The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.
Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: "This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?"
Replied Pastus: "Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge."
"There is a flaw in that plan!" squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus.
Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution had been found.
ai should have automated fruit picking ages ago but we still use immigrants
Part of the leisure class mental illness which defines all soientists is their desire to get paid and respected for creating nothing of any use or value.
If they did something useful then they would be producing something for the salary, and that's slavery in their eyes, while getting a salary for doing nothing makes life 100% playtime; that's freedom. They are very immature people.
Shhh. Let it happen.
COVID was the AI and vaxx helped deliver it.
>over 70% of the worlds population has been inoculated with the quantum AI
>13.4 billion doses administered
Now we just wait for them to roll out the new technohomo system and hook up all the cattle to it along with all the IoT devices before the inevitable. Many of the unvaxxed will line up for the boosters once UBI and other benefits are announced.
Time passes, things change. Resistance is futile
I really don't care.
AI is dumb. Without life - it's only got what it's given. Which is the knowledge of all the meatbags on this planet.....
Woooooo
You see only what it is, not what it will be. None of the limitations you listed are inherent. An AI with a robot body would be able to gather new information itself. It already has limited information gathering abilities via plugins
literal braindead schizo or a pajeet doing guerilla marketing. do needful and kys retard.