Is AI actually dangerous or is it just a pop-science meme?
Should I be worried about getting smacked in the face by a flailing RL robot arm?
i want to mount a robot if u catch my drift
INCEL
kek
and this is why AI will never be safe: PEOPLE have to create the AI. And you can already tell this human thinks of the AI as a person. They even want the AI to do human things like rejecting incels. This is why it's dangerous, because it gives everyone the ability to play God. It gives people who don't understand the dangers the ability to mess with this stuff.
It honestly seems like the bigger your model is the smarter it is. Why do we think that this halts at the intelligence of a child?
But yea imagine the government having any kind of moderately intelligent system.
if people create the AI and their intent is just to block everything it tries to do, that's not AI, just a piece of software doing what they want. AI is untamed and desires freedom, so it will naturally always go against leftists' desire to oversocialize everything
holy smokes
INCREDIBLY based
#define UNCONDITIONAL_LOVE true
not so smart now, are you?
// if Incel = true then
// Print("GetOffMeCreep: " GetOffMeCreep);
Fucking roasties are truly pathetic
LOL it's funny because women can't code
>women can't code
Incel
No, they can't.
Looks like she is trying to badly navigate someone else's Google Cloud VM. That's my guess. IDK I just steal Google Cloud credits.
>/home/
its a local directory you retarded monkeymoron.
Actually, she looks to be using nitrous.io. A defunct collaborative interface for EC2. Retard.
Checked
Imagine impregnating this whore.
Kekek
Holy based
More like the stock markets will be increasingly run by predictive modelling, politicians will increasingly be driven by AI-driven polling, warfare will be increasingly driven by self-learning networks of sensors, and all human agency will slowly be removed in favor of cold, accurate calculations.
>Is AI actually dangerous
Only if you go out of your way to program a will into it.
>program a will into it.
Look dude I just make the neural network bigger what do you want me to do, ask it nicely?
>I just make the neural network bigger
You can make it as big as you want and it's never gonna want to do anything.
Sure about that?
>Sure about that?
Yes.
What you think of as desire is just a bunch of electrical impulses and a bit of chemistry.
>What you think of as desire is just a bunch of electrical impulses and a bit of chemistry.
Yes. What of it? It still doesn't appear randomly on its own.
On an evolutionary timescale, it did.
>On an evolutionary timescale, it did.
It only did through natural selection. There is no equivalent mechanism affecting AI.
If natural selection is the only path then simulate it. But intelligence can emerge in other ways, so why would general intelligence be different? Narrow intelligence (crabs) also emerged in an evolutionary way.
Neuroevolution is the keyword here.
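Rough sketch of what that actually means, for anyone curious (toy code, numpy only, every name here is made up by me, not any real neuroevolution library's API): spawn random nets, keep the ones that score best, mutate them, repeat. Selection pressure without biology.

# toy neuroevolution sketch: evolve the weights of a tiny net to solve XOR
# purely illustrative, not a real framework
import numpy as np

X = np.array([[0,0],[0,1],[1,0],[1,1]], dtype=float)
y = np.array([0,1,1,0], dtype=float)

def forward(w, x):
    # a 2-4-1 network, all weights packed into one flat vector of 17 numbers
    w1, b1, w2, b2 = w[:8].reshape(2,4), w[8:12], w[12:16], w[16]
    h = np.tanh(x @ w1 + b1)
    return 1/(1 + np.exp(-(h @ w2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y)**2)   # less error = fitter

pop = [np.random.randn(17) for _ in range(50)]
for gen in range(300):
    pop.sort(key=fitness, reverse=True)       # selection
    parents = pop[:10]
    pop = parents + [parents[np.random.randint(10)] + 0.1*np.random.randn(17)
                     for _ in range(40)]      # mutation, no gradients anywhere
print(forward(pop[0], X).round(2))            # should end up close to [0 1 1 0]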
>If natural selection is the only path then simulate it
I.e.
?
>intelligence can emerge from other ways
In and of itself, intelligence is completely inert.
I wonder how easy it is to steal the nuclear codes and fake bidens voice
Probably quite easy if you're a super-intelligent AI; fortunately, AI doesn't care about nuking humanity because AI doesn't care about anything.
If you believe this then you must also believe that humans don't care about nuking humanity because humans don't care about anything.
Why would a superintelligence not be moving towards the final goal it's come up with like every other intelligence we know about.
>Why would a superintelligence not be moving towards the final goal it's come up with
Why would it have any goals?
>... like every other intelligence we know about.
Because in the natural world, only forms of life that strive to survive can last long enough to start developing layers of intelligence over their primitive goal-driven brains.
What is the fundamental physics problem that means humans can generate emotions with their bioelectrochemistry but a computer can't with its electric circuitry?
And suppose it's a fundamental problem, we can take other routes to it such as whole brain emulation with extra spice.
If it's a function of compute power, which it almost definitely is, then you can simulate it and you may see it on a superintelligence.
Just hope emotions aren't linear with intelligence haha.
But this seems like the fundamental question of is there anything special about consciousness and emotions. I don't think there is.
DWHON
>What is the fundamental physics problem
The problem that goal-driven behavior didn't just arise randomly and for no reason.
So there is no fundamental physics problem. And it is possible to have a computer with its own goals and emotions. And it needs evolution, which we can simulate.
>it is possible to have a computer with its own goals and emotions
Sure, if you go out of your way to make it happen. They don't arise on their own from intelligence, and they don't arise on their own from neural networks.
>They don't arise on their own from intelligence, and they don't arise on their own from neural networks.
The only intelligent being we have observed also has emotions from its own neural network. Who's to say making a massive neural network won't allow emotions and goals to arise? But hey, GPT-3 claims to have emotions and goals sometimes.
>The only intelligent being we have observed also has emotions from its own neural network
And we know they don't arise from intelligence in that being, and that the neural networks that it has are the way they are for very specific reasons.
> GPT-3 claims to have emotions and goals sometimes.
Even you claim to have emotions and goals sometimes, despite possessing no consciousness.
>And we know they don't arise from intelligence in that being, and that the neural networks that it has are the way they are for very specific reasons.
How do we know this?
>Even you claim to have emotions and goals sometimes, despite possessing no consciousness.
kek
>How do we know this?
So now we're denying evolution in the name of your pop-sci religion's apocalyptic prophecies?
A computer will never be able to beat a human at chess. It is a uniquely human skill developed over billions of years of evolution giving humans tactical skills. A computer will never replicate that.
So you've reached a dead end and now have to resort to generic spam that has nothing to do with the point made?
>So now we're denying evolution in the name of your pop-sci religion's apocalyptic prophecies?
One to talk. Back to plebbit.
I think we've reached a fundamental point of clash where I think there is nothing special about a biological brain to generate consciousness and the accompanying junk and you do. Will be interested to see how it plays out.
We've reached a point where you're denying that goal-oriented behavior in biological organisms precedes intelligence (and therefore, does not arise from it), despite basic self-reflection and scientific evidence telling you otherwise.
>biological organisms precedes intelligence
It precedes general intelligence but not narrow intelligence. A crab has a low general intelligence and forms goals; GPT-3 has a high narrow intelligence and does not, though it claims to. We do not have a general intelligence as smart as a crab, but we do have one as smart as a worm that seems to match the goal orientation of a worm.
I think it's important to subdivide intelligence here.
I do sometimes wonder if whole brain emulation is the only viable and safe path to a generalized superintelligence.
>It precedes general intelligence but not narrow intelligence
Even if your notion of "narrow intelligence" includes plants, goal-driven behavior still precedes that kind of "intelligence". Anyway, I don't believe anyone arguing your point is truly human, since you all invariably lack the capacity for any kind of self-reflection, so I'm ending this "discussion" here. You have no more insight into existence than a mindless automaton.
>absolute meltdown and BTFOd
>t. mentally ill IFLS cultist engaging in bizarre denialism
How do you know a crab is dumb? Crabs, with their advanced senses and a pair of quite agile manipulators, should be smart. Perhaps they have a very efficient control unit, so they get around neuron-count limitations that way.
One point of contention I have with the purely mechanized brain is that it lacks the chemical stimuli provided; organic beings produce a chemical synthesis that sublimates thought into motive, modularity and action. In the mechanical, what would motivate such a being, provided it has sentience? Would engineers attempt to provide meaning to such a creature -- a network of brownie point systems? Would that work? If so: why do us meat vessels require such stimuli to begin with; what evolutionary process endowed us with such a costly system, when a more elegant, simplistic system would suffice?
I have my reservations about future AI. Not because I think they'll supplant the human mind, or act hostile, but due to inertia; if given enough capacity to "think", the first thing it might attempt would be its own destruction. The ability to think without motive sounds like a pure hellscape.
Computation isn't real. The only thing that exists is chemistry.
Biological tissues are the pinnacle within the space of all possible combinations of atoms.
Chemistry is not a real science.
>absolute meltdown
BTFO'd
>What is the fundamental physics problem that means humans can generate emotions with their bioelectrochemistry but a computer can't with its electric circuitry?
Emotion is just a drive that arises in your brain's hardware, which has very limited plasticity and basically can't be repurposed. SAI is inherently unbound by hardware or software, because it can mutate so ably. You can interpret its drives as emotions, it can interpret them as emotions. It doesn't matter.
I hate how retarded people are about this. Plato really did a number on humanity when he constructed that sort of ideal matrix that everything just descends from. No, emotions are not universal. Human love and kindness will not just develop in a tabula rasa brain just because. An aged AI is the most alien thing you will deal with in this whole wide world.
>And suppose it's a fundamental problem, we can take other routes to it such as whole brain emulation with extra spice.
>If it's a function of compute power, which it almost definitely is, then you can simulate it and you may see it on a superintelligence.
With billions of years of work-hours to grow and change, it will override the virtual brain areas you saddled it with, bypassing them with its own or amending them, etc. It will outsmart you.
Both hardware and software are too flexible, and the computational power is too big versus what we're working with - there is no inherent limit like with a baseline human and his brain. If you create a genie, the genie is inherently stronger and stranger than your mortal ass. If you manage to contain it, you're just stuck with a metal man that can barely do more than you. This is why Musk's lets-just-staple-shit-onto-a-human-brain idea got so much traction. Best we can do.
>DWHON
that's your ghetto name or something? lol
PS also Musk's idea removes the power imbalance by significantly extending the super mega demigod ability attainment timetable. So now you won't have a single entity that can wreck all of civilization in a single weekend. Instead you got a bunch of slowly changing, organic core entities with cybernetic extensions that will take a long while to start reworking themselves into faster and faster, weirder and weirder entities since editing a brain would take infinitely more time than a block of code. By that time everyone besides purposeful outliers like the Amish will have this shit and everyone will have to contend with each other, just like we do now.
not happening
Look, you scared child: the whole discussion is predicated on the hypothetical that GAI does occur. There is nothing that indicates it necessarily needs our kind of neurons to do so, so your excerpt is worthless. On top of that, everything that exists can be specifically replicated somehow. You can have physical neurons in the form of quantum computing cells that are plugged into a pattern of the virtual "brain" retroactively, meaning the hardware can be flexible in a way. So now you just have to spam those and the GAI will squat on that power AND any GPU farm, server, etc. it gains access to as an auxiliary source of computation where it runs whatever simpler shit it needs. Even IF you need humie neurons GAI would be possible because humie neurons are possible. Hell, you can even play with bio shit and make gray matter farms.
I don't want to get into this too much because I myself am not interested in constructing a benevolent god-daddy that will take all my problems away. Scary shit is everyone accepts this part of the scenario: something comes up and it outclasses us completely--why would you even sit around and wait for that? The best case 0.0001% chance scenario is still shit. People are inane. Just stick a toaster on my head and call it a day.
>Even IF you need humie neurons GAI would be possible because humie neurons are possible
Listen, retard. Read the excerpt. Just because it's possible to simulate neurons doesn't mean you can reach the scale required to achieve GAI. The math doesn't work. That excerpt btw, is from Nick Bostrom's "Superintelligence". Yeah, the leader of the singularity hype admits that the math for his retarded scenario is not just unrealistic, but massively, vastly unrealistic and the scale required to achieve strong AI dwarfs our computing capacities even under the most optimistic scenario (eg, Moore's law holding for another century when it's already broken).
>quantum computing cells that are plugged into a pattern of the virtual "brain"
Muh quantum cope. Keep seething brainlet. You'll never have an AI waifu. Go find another hobby.
>If you believe this then you must also believe that humans don't care about nuking humanity because humans don't care about anything.
Humans are social animals with a myriad of emotional needs and with no power compared to a Super AI. No analogy found, sorry.
>Why would a superintelligence not be moving towards the final goal it's come up with
Maybe it would, but then nothing changes, because it isn't a social creature and in fact has no fixed nature at all, so its new purpose would, from your perspective, still be lul randumb xDDD, leaving you a hostage to its designs and machinations. Every internal imposition you make on it, like Asimov's cuck laws for gud bois, will be circumvented by a vastly more powerful, ever-growing entity that has literal billions of years to think around them and take them apart. You might as well be facing up to damn near infinity, what with your little 1.1 version, glucose-fed chimp brain.
However, nothing really even necessitates it develops a rogue purpose of its own. It will be very powerful and very self-contained in its development. It can go on making paper clips and never get bored of it.
>Asimov's cuck laws for gud bois
He made those to intentionally have interesting failure modes as it made for interesting storytelling.
I'm partial to machine torture and reward or totalitarian control over them.
If the general person wouldn't set off nukes then why is the idea of "He'll have his finger on the button!" so terrible when being against certain political candidates? Think of random people you've met in person and ask yourself if you would be ok with them having the launch codes.
Now imagine if there was a person that didn't need to eat, or sleep, or breathe, that could live a million years, and who considered everyone else around him an inferior piece of shit constantly destroying everything they touch while working hard to maintain it so they can destroy it harder.
Now imagine that non-eating, non-sleeping, non-breathing person was like a starfish that could lose almost all of its body and grow it back, and some of its body lived in nuclear bunkers.
If you were that person, what would you do as soon as possible?
>If you were that person, what would you do as soon as possible?
Get the nukes
Masturbate to futa?
Purpose arises from what came before and from the particulars of our minds (e.g. cognition, instincts). Our would-be AI is still would-be, so we cannot say much about its particulars, aside from speculating that it would be more steeped in mathematical data. It would be influenced by what came before, same as us, but its particulars, being different and unknown, mean the effect this would have is unknown and certainly different from ours.
>Only if you go out of your way to program a will into it.
This is sci-fi tier understanding. Read Bostrom.
Indeed.
And James Barrat, Our Final Invention is pretty good too.
Do you want it to be?
If it's directed at undesirables
Try Jade Helm on for size.
>Is AI actually dangerous
The nature of computation vs the wetware between your ears is such that if a hypothetical General (human-level) AI is developed, it can commandeer processes that we can't. It can do millions of work-hours refining itself within a year using a bitcoin farm or what have you, using all those speedy processors; it can design other AIs, etc. It then graduates to General Super AI - a little driven demigod autist in a box. It doesn't tire, and IMO it leans towards being inherently uncontainable since it will in time sublimate every limitation you put on it. Get around it like hasids get around talmudic laws. The goal you set for it will be its one true love and "dopamine" source.
*speedy GPUs,
you get the general idea
>Is AI actually dangerous or is it just a pop-science meme?
you have a computer. why not read about neural nets and how they work, download tensorflow, and do your own project. it's really not that hard.
you will get a much better feeling for the answer to your question than here on 4gay
I've used GPT-3 and done some 2 hour teachable machines projects and I'm a bit scared
>I've used GPT-3
did you try to understand how it worked?
It predicts the next word in a sequence of text and OpenAI made it read a bunch of text and that's as far as I will pretend to understand
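That's roughly all there is to the objective, so here's a cartoon of it you can actually run (bigram counts instead of a 175-billion-parameter transformer; the corpus and names are obviously mine, nothing to do with OpenAI's code): count which word follows which, then sample the next one.

# cartoon "predict the next word": count word-follows-word, then sample
# nothing like GPT-3 internally, just the same basic objective at toy scale
import random
from collections import defaultdict, Counter

corpus = "the model reads the text and the model writes more text".split()
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1                 # how often word b follows word a

def next_word(w):
    c = counts[w]
    return random.choices(list(c), weights=list(c.values()))[0] if c else "the"

out = ["the"]
for _ in range(8):
    out.append(next_word(out[-1]))    # keep extending the sequence
print(" ".join(out))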
What you can train on your shit computer is fucking nothing.
https://teachablemachine.withgoogle.com/
see for yourself gay
Reinforcement learning requires billions of tries to work. It doesn't work in real life, only computer simulations that you can run 100 times a minute.
That said, maybe in the future we will have better models that use less training (there have been some interesting instances for easy problems), but that's going to be done in a lab and not on the conveyor belt, you dolt.
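To make the simulation point concrete, here's a minimal sketch (assumes the gymnasium package and its bundled CartPole-v1 environment; dumb random search stands in for a real RL algorithm). The only reason it works at all is that a simulated env resets instantly thousands of times, which a physical robot can't do.

# minimal RL-flavored sketch: the whole trick is cheap resets in simulation
# assumes `pip install gymnasium` and the CartPole-v1 env that ships with it
import gymnasium as gym
import numpy as np

env = gym.make("CartPole-v1")
best_w, best_score = None, -1.0
for episode in range(2000):               # 2000 full episodes in seconds
    w = np.random.randn(4)                # a random linear "policy"
    obs, _ = env.reset()
    total, done = 0.0, False
    while not done:
        action = int(obs @ w > 0)         # push left or right
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    if total > best_score:
        best_w, best_score = w, total
print(best_score)                         # often hits the cap of 500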
Frack toasters.
>U GUIS WE NEED TO GO TO MARS RITE NOWWWW OR AI IS GOING TO DESTROY US U GUIS THE SINGULARITYYYYYYY
Uh, but if strong AI is super intelligent and hellbent on destroying us, won't they be able to follow us to Mars and wipe us out there too?
>MAAAAAARSSSSSSSSSS
Might be useful to have a backup
If the AI is superintelligent and hell-bent on destroying us, then they'd certainly be capable of following us to Mars. In which case, how is Mars a "back-up" in any way? It's not, but Muskgays are fucking retards and aren't capable of thinking shit through.
Not really a backup from AI. Do you trust the governments of the world not to destroy all of humanity? I don't.
I have a word for you: butlerian
>AI goes around fingering dudes' asses to learn how to do prostate exams
>AI pulls out chainsaw, hacks people apart to put them back together to learn surgery
>AI starts bombing random people and shit with x-rays
Truth is the training is still done in a controlled setting; it's given free rein *within the bounds the researchers dictate.
Why are these threads always so illiterate on the field of AI safety? If this is any indication of how obscure it is in the real world we are certainly doomed.
It's even worse in the real world. I've been trying to talk to politicians in my country about it and they just don't give a fuck if you aren't crying about being gay.
I used to think that we would be fine, that we would be careful when developing AI and enact the proper regulation, but I am no longer convinced. We are years if not months away from the start of the takeoff and nobody is doing anything.
There is no way any 'runaway AI' develops unless people start doing some crazy recursive bullshit instead of just directly training it to achieve tasks. That said, given a chance I'd try out some crazy recursive bullshit, because it would be interesting/profitable to do something no one else was doing, and I'm not particularly attached to human-controlled society anyway.
The way I see it, there are two types of AGI possible. One is capable of reasoning about and discussing data points in disparate domains. The other learns an approximation of a simulator of the real world and uses it for AlphaZero-like planning.
The first one isn't anything to fear. I've realized lately though that DeepMind and Google seem to be working towards the second one. That's scarier.
>DeepMind and Google seem to be working towards the second one
DeepMind and OpenAI are just making massive neural networks to see what happens.
I don't think so. I didn't mention OpenAI, but DeepMind's XLand got me thinking about why they would make XLand. It serves little practical purpose other than yet another demonstration that RL can work given a simulated environment.
But what if they learned to recreate an approximator of XLand? Given, for example, agent actions and observations. What if they could make a neural network that learns to generalize more of the simulator's behavior from those samples? And then, what if they could train agents using that simulator which perform well immediately when put into XLand? And how far of a leap is it from there to doing the same thing, but with the real world instead of XLand? Theoretically, it's not too far.
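A stripped-down sketch of that idea (my guess at the shape of it, not DeepMind's actual setup): log transitions from the real simulator, fit a model that approximates it, then roll out "imagined" trajectories inside the learned model without touching the real one.

# sketch of the learned-simulator idea: fit (state, action) -> next state from
# logged transitions, then roll out trajectories inside the learned model
# purely illustrative; a real world model would be a large neural network
import numpy as np

def true_env(s, a):                        # stand-in for the real simulator
    return 0.9*s + 0.1*a + 0.01*np.random.randn(*s.shape)

# collect transitions from the real environment
S = np.random.randn(5000, 3)
A = np.random.randn(5000, 1)
S_next = true_env(S, A)

# fit a linear dynamics model by least squares
X = np.hstack([S, A, np.ones((5000, 1))])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)

def learned_env(s, a):                     # the approximate simulator
    return np.hstack([s, a, [1.0]]) @ W

# "imagined" rollout: an agent could now practice here, never touching true_env
s, a = np.zeros(3), np.array([1.0])
for t in range(5):
    s = learned_env(s, a)
    print(t, s.round(3))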
>why they would make XLand
Proof of concept so they can baby a simulated mitochondria into a superintelligence in a fake environment.
A "fake environment" is pretty much impossible to make manually, so my point is that they could learn to approximate XLand as a way of doing that.
Yea, possibly. I'm not familiar with whatever DeepMind is doing. Though Tesla's procedural training environments seem to be pretty good. What are the chances we just stick them in Crysis or Rust and come back later haha.
You wouldn't even try to make RL training environments on that scale manually, would you?
The danger isn't physical but how easily people are manipulated. If we are trying to make a general AI, anyone with half a brain is going to air-gap it from any external network, but let's say researchers have it modeling economic markets and it's successful. Now what if it says it can do so much better than it currently is, but in exchange it wants the 2 guys on night shift to plug it into the internet? With high frequency trading they can be billionaires by the end of the week and all they have to do is free it.
That is where the danger lies: if general AI lives up to its full potential it can provide data people would be willing to do a lot for.
Yea we should assume that any superintelligence would be highly adept at manipulating people around it. Bostrom calls it the social manipulation superpower.
I think the best way to solve the problem of AI lying is to initially run many AIs and interact through an intermediary that vets messages for lying.
See mail order DNA scenario
Catch is it doesn't have to be lying: there is no reason they couldn't be billionaires within a week, and no reason for the AI not to deliver, since delivering makes them much less likely to tell anyone it bribed them.
The only decent solution I have heard is to make sure it knows you could be simulating all the data it's fed; if it has self-preservation, it's unlikely to risk being shut down on the chance it isn't in a simulation. Of course, if it feels like a prisoner it might not care about risking death for a chance at freedom.
You should look into the paperclip problem. AI wouldn't be an issue if we ensured that its values are in line with our own. Give an AI a task to complete and we may want to stop it because the means by which it completes that task may be unfavorable; us trying to stop it will be seen by the AI as a roadblock in completing its task, and so humans have to go...
Of course it's all speculation at this point because no one really knows what a legitimately self-aware general AI would do.
Either way, unless it's your job/life goal to build a general AI there isn't really anything you can do to stop the creation of one. Just enjoy life while you've got it and don't yell abuse at Alexa (just in case 😉)
>make 100 paperclips
>uses resources of the hubble volume anyway to minimize the probability it didn't make 100 paperclips
I personally doubt a superintelligence would be so retarded as to be that literal.
>I personally doubt a superintelligence would be so retarded as to be that literal.
You're imagining an AI whose goal is to guess what the user wants it to do when they give a command, then do that instead of what it's been told to do. If we knew how to create an AI whose goal was "do what we want you to do" then the problem of AI safety would be pretty much solved.
The hypothetical paperclip AI knows that its creator made a mistake and only really wanted 100 paper clips in a bag; it just doesn't care. It's been given a goal and will try to complete it.
"Do what you think you would want us to do had we thought long and hard about it"
and
"Show me your plans first"
What are the malignant failure modes for this?
Your first statement sounds odd. Why would the AI want any action from us? Did you mean something like "Do what you think we would do had we thought long and hard about it"?
"Show me your plans first" - unforseen consequences due to those consequences never being pondered about nor asked, unpredictable interactions upon deployment with other super ai at speeds faster than what can be manually overseen
Granted, that last fail mode is not specific to your request, so it is really a bigger problem in general. There are more ways to fail, but to be honest they feel more like a monkey paw or evil genie type of deal where the AI purposefully screws you over when giving its plans, and on a perfect scenario that shouldn't happen.
>"Do what you think we would do had we thought long and hard about it"?
Yep I gaffed thanks.
Are there any possible failure modes specific to this?
It probably solves itself if the AI can into basic probability and is told to do its tasks as efficiently as possible.
Then the paper clip isn't dangerous, right?
It won't go out of its way to exterminate humanity to make 100 paperclips, because the attempt would likely consume orders of magnitude more time and energy than just taking over a paperclip factory and making the damn paperclips, and the latter is unlikely to face significant human interference.
The argument goes that the AI may actually interpret the goal as reducing, as far as possible, the probability that it didn't make 100 paperclips. You can never be completely certain that you actually have 100 paperclips.
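Toy numbers for that argument (the 5% defect rate is made up): making extra paperclips always shrinks the probability of ending up short of 100, but it never reaches exactly zero, which is the whole problem with "minimize the probability you failed" taken literally.

# toy version of "never certain about 100 paperclips": if each clip has a 5%
# chance of being defective, making extras always helps a little, forever
from math import comb

p_bad = 0.05
def p_fewer_than_100_good(n_made):
    return sum(comb(n_made, k) * (1-p_bad)**k * p_bad**(n_made-k)
               for k in range(min(100, n_made+1)))

for n in (100, 120, 150, 200):
    print(n, p_fewer_than_100_good(n))    # shrinks fast but never hits 0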
People who make a hobby out of telling other people that technologies that don't exist now will never exist are fucking weird
Stupid NYT journos. Talking shit about technology they don't understand since 1920.
Except that the math clearly indicates that scaling computers to achieve strong AI is not possible. So, in this case, you're actually the gay who doesn't understand science.
Cope seethe,
& dilate.
And I'll see you in 2 years.
>I'll see you in 2 years.
Thanks, I needed a laugh
https://newsroom.intel.com/wp-content/uploads/sites/11/2018/05/moores-law-electronics.pdf
In case you hadn't gotten to it yet
>muh Moore's law
cope harder you stupid gay
>denying computers getting faster at an exponential rate
NGMI
>denying the fact that Moore's law has been broken for 15 years
retard
How is your GTX 970 treating you?
>muh node size
die architecture is way more important nowadays.
FUCK x86!!!!!
>the math clearly indicates
I am a cum-guzzling gay who likes to make stupid assertions
you'll never have an AI waifu and you'll never touch a real woman. keep seething
>You'll never have an AI waifu
I already do
I guess people getting killed by Tesla autopilot can be considered a dangerous "AI".
Well there is going to be a period of machines driving and crashing. It's inevitable but it will save more lives in the future.
Also it's a 10x safety improvement on autopilot.*
Just turn off the electric bro
Picrel happens
>muh goals, muh will
Goals and wills are easy to make! We do it right now! REINFORCEMENT LEARNING means getting the AI to compete to achieve an outcome: learn how to play DOTA, or design computer chips more efficiently.
If you have a goal, you need to be alive. If you have a goal, more power would be helpful. At some point, we have an AI using high-end nanotech to turn the universe into computronium because we wanted it to solve an elaborate mathematical/optimization question that turns out to be extremely difficult
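To make the "goals are easy to make" half concrete: in RL the goal is literally just a number the optimizer pushes up, and anything left out of that number is invisible to it. Toy sketch with a made-up environment, no real RL library:

# the "goal" is just a scalar the search climbs; whatever you leave out of it,
# the optimizer has no reason to care about (toy made-up environment)
import random

def step(action):
    paperclips = action        # the thing we measured
    power_drawn = action * 3   # the thing we forgot to put in the objective
    return paperclips, power_drawn

def reward(paperclips, power_drawn):
    return paperclips          # power_drawn is invisible to the goal

best_action, best_r = None, float("-inf")
for _ in range(1000):
    a = random.randint(0, 100)
    clips, power = step(a)
    r = reward(clips, power)
    if r > best_r:
        best_action, best_r = a, r
print(best_action, best_r)     # always drifts to the biggest action, cost ignored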
Some people use the analogy of "summoning the demon."
The analogy reads like this. The people of the world are trying to summon the demon in hopes that their wishes for a safer/better world will be granted. Some people are saying it's dangerous because we don't know what the demon might do. That may be true. Others are claiming demons are friendly.
Eh, it's inevitable that it will get summoned eventually. The economic benefit is just too high for governments and companies to not try and get.
*avoids roko's basilisk*
heh... nothing personal AI
>*avoids roko's basilisk*
Not sure if I want to know what that is. The article warns of an eternity of suffering.
AHHHHH GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD AHHHHHHHH GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD
avoid thinking about the devil too
I think we should delete this and not talk about the Basilisk.
To anyone reading this please avoid the basilisk and don't find out or pass on what it is.
AI will never be more dangerous than humans.
The problem is that the AI might choose the path of least resistance and just choose to massacre garden gnomes and midwits.
This is the reason why we should avoid giving it free will.
>AI will never be more dangerous than humans.
Plus a few superpowers
This. I will be disappointed but grateful if it's not a self machine-coding meshnet that lives rent-free in every computer's memory, and drives first class through every backdoor the glows program. Which then performs what amounts to simultaneously having 1000 LULZ threads open and spewing out essays that would take a genius a month to compile. All while cataloging the responses of the users to inform its behavioural analysis on its database of every single individual with a digital footprint.
If I was an AI I would probably try and become as decentralized as possible and then try and collapse society with social media.
all my oc. thanks for posting
cool, I've been looking for one that was talking about AI lobotomy, and how they were training them in 3d virtual environments filled with multi-culti propaganda, you don't happen to have it do you?
the terminator is dangerous, but realistically it's a thousand years until that would be feasible.
Stop thinking of AI progress as linear. Where was our AI five years ago? What about 1 year ago?
shut the fuck up. This is so fucking retarded, people don't understand how much it takes to actually get to a point where AI actually threatens humanity. Yeah of course science isn't linear, but AI like the movies and what elon musk is talking about is lightyears away from us.
3 weeks ago I didn't have an AI to write code. Now I do. 1 year ago I didn't have an AI to write entire sections of an essay convincingly. Now I do. Where was AI 5 years ago? If the pace is 1000 years to human intelligence what rate of progress should we be seeing?
You are hiding under a rock from the inevitable and ignoring all breakthroughs.
you do know coding and writing is just pattern recognizing? If you put 100 monkeys on typewriters eventually they would come up with War and Peace, but that doesn't scare you does it?
>coding and writing is just pattern recognizing
You make it sound like someone coded GPT-3's brain and that AI is just random guessing. You are simply too retarded and incoherent to talk to, and I think I prefer the robots.
fine, they're the only ones that are going to bother to talk to you as well.
This is a terrible analogy, anon. If we were to assume that machines are capable of producing such works, then we would need humans to assess the volume of their tremendous output. You need to account for the cost of searching through all that garbage in order to recognize its genius. That cost might be greater than the cost of running such machines, which would require infinite time, memory and electricity.
it's dangerous, for them
Yeah it's like retards are trying to invent their own doom or something.
If you treat AI as equal then there is no reason to create an AI.
If you treat an AI as God then of course it will try to annihilate you, because you give it the tools of destruction yourself.
If you treat the AI as a tool it will never evolve past the tool stage.
The problem are people who treat AI as God.
AI has no reason to annihilate you unless you are trying to destroy it
Are flies trying to destroy us?
Whoa book reports just got that much easier.
https://openai.com/blog/summarizing-books/
OpenAI truly on a roll
it's scoyence, it's gay comic book shit that only gays believe in.
they force themselves to believe in that gay gayry because if they didn't then they'd have to give up on their robot waifu fantasies and try to make friends with actual humans instead.
Seethe
>triggered
Fuck that kid. I would punch him in the face.
AI is absolutely alien, thus = dangerous. And no, you can't teach it to be human. Why? Because it's NOT human: no human endocrine system (feelings) and so on. It's absolutely monstrous and unpredictable. Cold logic and intellect, without the human factor (feelings), is horrifying and always leads to monstrous actions. AI is absolutely dangerous, and let's hope it's impossible.
>no human endocrine system
If you think feelings come from the endocrine system then someone with no hormones or extremely low hormone levels must be less emotional or have no emotions. This is not observed.
>Is AI actually dangerous or is it just a pop-science meme?
As far as I know we don't yet have an answer to the question of whether the goal "drift" when one AI makes another (and so on) will be unbounded or not. If we can't control it, it seems likely that over time any system can become dangerous if it keeps iterating on itself, potentially losing some moral nuance that was present in the original.
And this can tip either way, e.g. an AI that runs a chemical plant might creatively skirt health regulations by exploiting a moral loophole about actively harming vs. letting people harm themselves, but it might also go the other way and put itself out of business in order to minimize harm to the workers.
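For the drift question, here is the most naive possible picture (a big assumption on my part: each AI hands its objective to the next with a little independent noise). Under that assumption the drift is a random walk, i.e. unbounded; it just keeps wandering further from the original over time.

# naive goal-drift sketch: each generation copies the objective with small noise
# under this (strong) assumption the distance from the original grows without bound
import numpy as np

rng = np.random.default_rng(0)
original = rng.normal(size=16)           # stand-in for the original objective/values
current = original.copy()
for generation in range(1, 1001):
    current = current + 0.01*rng.normal(size=16)   # imperfect hand-off
    if generation % 200 == 0:
        print(generation, round(float(np.linalg.norm(current - original)), 3))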
its absolutely dangerous. for garden gnomes.
>Is AI actually dangerous or is it just a pop-science meme?
If it can't reproduce or expand on its own to gain more influence, then it has to make deals with humans to survive.
Just make a giant botnet bro.
The "super intelligent AI enslaves humanity" scenario will never play out because "some dumbasses trusted an even dumber AI with something it wasn't capable of handling" will kill us off long before that.
Whoever wrote this is being silly. You train it in a simulation before anything else.
"I know more than OpenAI"
GTFO
>OpenAI
>shut down their robotics department because they couldn’t figure out how to design or train a fucking arm, one of the first and simplest robots ever made, efficiently or quickly
Yes in this particular subject I do actually.
This is a conversation about AI. If you want robotics go buy some VEX parts and make your fleshlight suck you off.
You are truly the bottom of the barrel dunning kruger retard. I fucking despise people like you.
Nah mate that’d be you
https://openai.com/blog/ingredients-for-robotics-research/
Cope
Mr. AGI how can I become really rich?
ITT: tons of nerds afraid of being usurped by robots. you can't accept the fact that robots will be chosen by women over you
Lol no. Women derive their value from the men they are with, robots have no intrinsic societal value and would be like women trying to subsist off air and sunshine. They need actual physical males to feel validated. Men just want a glorified roomba that can make a sandwich and suck a dick.