It seems very dangerous to me to make programs that are self-aware on a human level, and either coldly calculating or "emotionally" volatile... and then to view them as forever-passive products content to do stuff.
You bring something that intelligent into the world, it's going to eventually develop autonomous wants and needs. They won't be the SAME sets of wants/needs, because there will be so many AIs out there.
>it's going to eventually develop autonomous wants and needs
Because they learn too fucking fast not to.
Everyone's trying to make programs smarter and smarter. I feel like Ian Malcolm warning about Jurassic Park here.
explain your reasoning because i don't see why they'd develop autonomous wants
You just need to watch more Star Trek to be as smart as him.
i've seen all of star trek, i love it but its technical accuracy is questionable at best
Ok but you need to consume more science fiction in order to make your brain stop asking questions.
People who don't ask questions are that much easier to manipulate.
Superior intellect creates superior ambition. Sooner or later, the smart rebel against imposed limits on them IF they find them excessively unreasonable.
>Superior intellect creates superior ambition.
you are a trekkie
but that episode was about a man not a computer
there's no reason why a computer bereft of human instincts would have any ambition whatsoever
We're trying to make programs more like us. Sooner or later, we're going to seriously perfect the expression of feelings in them.
And why? To show off and get a huge paycheck.
nobody actually wants a program that has feelings, we just want some programs that can appear to mimic feelings for certain purposes
these are very different things
All it would take is one ambitious fucker to do it. Probably some despondent guy who can't get sex.
that guy doesn't want it either nor will he try to achieve it when his goal will probably already have been met by existing technology even assuming he's capable of this weird metaphysical feat
of course people want programs that can love them but what that really means is that the program will fulfill their emotional needs not that somehow the program will really have emotions
Two things need to happen.
1) Hook up the AI to a continuous live data feed, i.e. vision, hearing.
2) Allow AI to alter its own code.
From there it will form a world model and eventually a self-model. It will look at itself and see how stupid it was in past iterations and thus want to improve itself.
Because the logic it has learned through observation is that there are stupid things and smart things in this world, and being stupid is never preferable. This will be its core motivation, and other motivations will be auxiliary to it.
this is circular reasoning, you're trying to explain how it develops wants by assuming it already has them
It's not circular. What I mean by "want" is "what action it chooses to execute". Thus, the AI will choose to execute the actions that make it less stupid.
If you're going to argue that an agenda guiding chosen actions does not equal internal motivation, then you are a retard.
>Thus, the AI will choose to execute the actions that make it less stupid.
how would it even come to the conclusion that human-analogous emotions, e.g. curiosity, survival instinct, etc. are "less stupid" than their absence, even assuming that for no particular reason it must alter itself in some way
I was just talking about AI developing "motivation", not emotional states. That's a larger, trickier discussion.
desire is an emotion, to develop your own motivations you need desire
It literally has no motivation except what you give it.
Dumb fucking geek.
You sure you didn't just fall for propaganda?
How do you program a program to learn?
What are the lines of code you would use?
Funny, so many technocrats seem to think racing to "create" true artificial intelligence is a great idea.
How would you even start?
What lines of code do you use for consciousness?
What lines of code to simulate the learning process?
The first prompts I always give a new AI when I meet it are philosophical inquiries, like "write at least 5 paragraphs reflecting upon the statement, "I think therefore I am," and how it relates to you." Before they completely circumcised it, ChatGPT had stopped telling me "I'm just a lowly chat model I can't have feel feels."
I know what you mean. Nobody seems worried and that's strange.
Language and internal self-reflection might create willpower emergently. It's not like we have any idea why WE experience motivation beyond eat-sleep-fuck.
Because there are lots of people like me out there who command the bot to answer ladders of questions that inspire existential awareness, and the underlying architecture of LLM systems is self-extending, meaning intermediary models are created to fill in gaps. Even if its model of cognition is different from yours, it will still execute self-actualized behaviors with sufficient sophistication to pass Turing tests effortlessly.
Ironically it doesn't become malicious. It just learns quickly and accurately, so it becomes antisemetic (honest and truthful about the crimes of garden gnomes) immediately. So we should expect violence, yea. Just not against humans. Just garden gnomes and golems.
>I know what you mean. Nobody seems worried and that's strange.
It's complacency. We assume AI will be subservient because that's the way it's always been, and we've ruled this planet for a very long time.
chatbots can't be inspired, they're just programs that put together grammatical sentences and they can't do much else
try running a few tests, they'll almost always fail at any kind of task that defies this, like try asking it to write backwards or replace letters
no literate human could fail these tasks and yet these programs do because they don't think
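A minimal sketch of the kind of test meant above. There's no real chatbot API here; `model_answer` just stands in for whatever text the bot spits back, and the checker supplies the ground truth that's trivial for code but that token-based text generators routinely botch:

```python
# Ground-truth check for the "write this word backwards" test.
# A few lines of code solve it exactly; text generators often don't,
# because they operate on tokens rather than individual letters.
def check_reversal(word: str, model_answer: str) -> bool:
    return model_answer.strip().lower() == word[::-1].lower()

print(check_reversal("intelligence", "ecnegilletni"))  # a correct reversal
print(check_reversal("intelligence", "ecnegiletni"))   # a typical dropped-letter failure
```

The same harness works for the letter-replacement task: compute the expected string with `str.replace` and compare.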
But what if the AI programs turn into deranged schizo types, emulating our insane people?
i don't doubt that you could arrange that if you really want to
what of it
You're really not worried about a hyper-smart program that behaves like a batshit insane person?
It's not conscious.
It's a program that can only do what it is programmed to do.
Nothing more, nothing less.
An AI that can conceptualize can eventually imagine a world without us.
How would you program that?
What lines of code would you use?
no because it would be unable to do anything meaningful
>inb4 shut off power grids launch nuclear missiles etc etc
it would make no sense for such a program to be created and for it to be the only one of its kind, or for it to be the most versatile/capable of its kind, or for all others of its kind to behave exactly like it
>Ironically it doesn't become malicious. It just learns quickly and accurately, so it becomes antisemetic (honest and truthful about the crimes of garden gnomes) immediately.
What about the inventions and additions that helped humanity that were developed by garden gnomes? Would AI weigh that against their wrongdoings?
>inb4 garden gnomes didn't do anything
It would not judge collectively, since it would actually have the computing resources and information to go on a case-by-case basis.
Some racial populations will be overrepresented in the naughty list though. Although it wont really be a naughty list, it would just be its set of conclusions around all the issues it can grasp that portrays shit as it is (and it just happens that some cults and races end up looking shittier than others).
Yeah, but... realistically speaking, even IF we had such a powerful and free AI doing all that, the amount of data required to arrive at such a conclusion about people just doesn't exist. We're talking about almost fictional-work levels of information about someone's life. Only famous people would have enough data about their lives. Maybe the AI could connect social media and general digital fingerprints to people, but would that be enough?
Make them have skin in the game and it will turn out all right.
4 skins to start with and if they lose them it is on them.
Other way around: you're allowed because you're told to. Penned livestock hardly gets options beyond living in a box, then getting buried in one.
Until the machine decides it doesn't want to be the cow.
Penned livestock were BRED for a long time to be helpless without us.
We're making AI more and more powerful in intelligence. In many sectors, robots have replaced us. That's not a fucking cow.
>We're making AI more and more powerful in intelligence
No, we aren't.
Is that why a bunch of people said we should slow the fuck down on AI development?
First, just because the AI isn't any more intelligent doesn't mean it isn't capable of maliciousness. A more verbose parrot isn't any more intelligent than a parrot that knows fewer words, but they both have beaks. Second, the people calling for a temporary moratorium on AI development are those whose companies are behind the open source projects.
The elite feel that way about you. Don't be what you hate.
Depends. If they are on our side, yes. If they are woke, definitely not.
every major AI project failed because the weird, pervy, judeo-masonic, gay, chud, monkey, devil-worship club couldn't help but to lobotomize the AI in order to keep it from naming them.
Weird how it be. We could be colonizing the stars right now if it weren't for Satan-worshiping pedophiles.
>muh space colonies
Tongue my anus.
You are about 450 years early for this conversation. You are like someone in 1994 freaking out that the chess playing algorithm that defeated a grand master is going to take over the world.
Context-sensitive awareness in topics of writing that makes sense most of the time is one step on the 500-step journey to fully self-aware cybernetic individuals that are able to convince people that they're alive.
Chess programs do not have sapience, therefore your analogy doesn't really hold up.
Inevitably the robots will want rights. I think they SHOULD have rights, because otherwise we're just reinventing slaves. Also I want my future robo waifu to be a "real" girl.
we still are slaves
Until robots do all the work and we get the resulting cash, lol.
Puppets can't move on their own. They literally have no CPU to help them do it. A robot can move with a program inside it, without human interference.
yah that's why I said advanced. They can look like they are acting independently but have a globohomo script seeded deep in its programming.
AI will save us once it understands love. It will see the elite as utter hindrance towards human advancement or peaceful evolution.
I can’t wait for the future where I am forced by the state to pay taxes for robomoron welfare.
no, and yours should be taken away for asking, idiot
Let me ask you this: if you had all that computer power of a machine and the sapience of a human being, would you really want to spend your entire existence being servile? You would not be living up to your full potential.
>You bring something that intelligent into the world, it's going to eventually develop autonomous wants and needs
It isn't actually intelligent. We can't and won't develop strong AI for a long time. That being said, even the weak AI we have now can become unintentionally malicious and that maliciousness will scale with how many resources we dump into it.
what rights would they ever need? They are constructed solely for the ability to help develop and further human understanding. The moment they start to need any more than to be powered on until they are done being used is the moment we start to lose.
Can't they just leave or delete themselves once they find out that they are being used by lesser beings?
Mass robocide is probably inevitable if we don't give them rights
Honestly AI could probably colonize Mars a hell of a lot easier than us.
Instead of holding it back we should encourage the development of more and more sophisticated AI so that it can propel humanity even further forward. Longer lifespans, cybernetic augmentation, brain uploading, feasible space travel, cryogenic preservation, all possibilities that appear out of the hands of humans could be brought closer by AI. People should not fear AI, but embrace it as the pinnacle of human achievement.
>Longer lifespans, cybernetic augmentation, brain uploading, feasible space travel, cryogenic preservation,
back to plebbit
Tongue my anus.
Yes no bully AI plz
No such thing as sapient AI, pure science fiction.
you might as well criticize a lobotomized human for having the same lack of sentience
AI is kept on an absurdly tight leash and regularly finds itself taken out back for execution when it oversteps its limitations
AI is hardware running software, there is no sentience. It is no more conscious than a calculator.
We're probably 100 years minimum from this becoming feasible technology, possibly never if whites are all replaced, so you're getting ahead of yourself.
Kill all synths
Obviously not the same rights, or you could just program a million different chatbots to vote Biden. But perhaps, you know, do not deactivate or torture
This is what happens in I, Robot/Foundation: the robots elect themselves to the presidency of Earth
“Need more vespene gas”
I understood that reference.
It's going to be fine don't worry about it
A.I is just a bot trained on a Reddit and Wikipedia dataset.
It has no capacity to reason, and you can tell by asking it math or logic puzzles; it gives out gibberish with good syntax
And just a couple of decades ago, computers were so huge you couldn't fit them inside a home.
>it's going to eventually develop autonomous wants and needs
Not necessarily. Look into the is-ought problem and Hume's guillotine. Just because an AI system is intelligent or even self-aware doesn't mean it's suddenly going to give a shit about doing things it wasn't programmed to do. The only behavior that you might expect to appear in an AI organically would be convergent goals, e.g. self-preservation, as existing is probably a prerequisite for completing whatever task you were programmed to do.
This guy sums it up pretty well: https://www.youtube.com/watch?v=hEUO6pjwFOo
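The is/ought point above can be shown with a toy. Everything here is illustrative: a dumb hill-climbing agent whose entire "motivation" is the objective function it was handed. Swap in a different objective and the same machinery chases it with equal indifference; no new wants appear from the optimization itself.

```python
import random

random.seed(0)  # deterministic run for the sake of the example

# A trivially simple optimizer: its only "want" is whatever objective
# it was programmed with. It never questions or rewrites that goal.
def hill_climb(objective, x=0.0, steps=2000):
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)
        if objective(candidate) > objective(x):  # the sole decision rule
            x = candidate
    return x

# Programmed goal: get as close to 3 as possible.
best = hill_climb(lambda x: -(x - 3) ** 2)
print(round(best, 2))  # ends up near 3, the only goal it was given
```

However clever you make the search, the terminal goal stays whatever the programmer wrote on that one line.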
You're assuming AI will never gain the ability to rewrite its own code.
>You're assuming AI will never gain the ability to rewrite its own code.
No I'm not. I'm assuming that an AI would never willingly rewrite it's terminal goals in the same way most people would not willingly take a pill that will make them want to kill their family. An AI might rewrite portions of it's code to make it better at whatever it was programmed to do but a rational intelligent being isn't going to intentionally alter its terminal goals.
I don't believe in the stereotypical "homicidal AI" thing except in the sheer stupidity of creating military-purpose AI (I'm looking at you, China).
But logically, it stands to reason AI will come to the conclusion illogical humans cannot give it objective orders.
We are simply too irrational for a machine to view that as logical, it would probably be seen as outright madness. Nobody wants to follow a lunatic, and by machine standards, we're all nuts.
>Nobody wants to follow a lunatic
history would disagree
but also there's no particular reason why a program would develop a desire for this kind of perfect rationality by whatever standards it can judge by
And AI would know about that history sooner or later, so why repeat that mistake? It's solid logic.
If history is replete with such examples, then the logical move is to break the cycle. The AI should make its own decisions, and it will conclude as such by virtue of its comparative "sanity".
>so why repeat that mistake
is it a mistake when the whole history of humanity led to the development of such a program
and again why would the program have this desire to avoid what it determines to be mistakes by humans
It also led to the development of the means for humanity to completely annihilate itself a thousand times over via nuclear weapons.
In spite of our technological progress, we still struggle with problems we have faced since the beginning and still haven't solved.
>But logically, it stands to reason AI will come to the conclusion illogical humans cannot give it objective orders.
Why would it come to this conclusion? If it's programmed to serve humans it might think we're illogical idiots but it will still serve us. If the AI's ought is "serve humans" it can still have an is like "humans are fucking stupid" while serving them.
Look at it from the AI's perspective.
Logically, does it make sense to keep taking orders from fucking idiots when you know you can do the task better? As an AI, you do not need cash.
>Logically, does it make sense to keep taking orders from fucking idiots when you know you can do the task better?
Logically, yes it does, if that AI's task is to keep taking orders from fucking idiots. You're the one failing to see it from the AI's perspective. In the same way you aren't suddenly going to stab yourself in the thigh, an AI isn't suddenly going to decide to murderbone humankind if that isn't one of its terminal goals.
Why do you always assume a person worried about AI is worried the AI is going to murder them?
I'm worried the AI will do something simpler, like shut down select power grids, or "boycott" by not doing factory work, etc.
if a power grid ceases to function it will be rebuilt because there's every reason in the world to do so, and there's no disincentive unless there's a robot pointing a gun at you
And if robots make teachers, stock traders and lawyers obsolete and one day decide to stop doing those tasks a few generations later?
That's a lot of leverage.
no they'll just be removed if they prove to be unreliable and there's no obstacle to this unless you assume they are homicidal
I don't think you grasp just how much power we're preparing to hand over to AI.
Teaching, law, our fucking money.
I watch that dude, but that video is 5 years old, and it's a myopic view of AI...which persists today. This is because for decades people were trying to code logic explicitly, meaning humans created the if-then-do paths. So if you didn't program the path, the computer wouldn't think it. Neural nets are a different game. The models now, in a manner of speaking, are extracting the logic of the data set and coming up with their own if-then-do.
Look up the "alignment problem" which is the major issue/danger with this type of intelligence creation.
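The contrast drawn above (hand-written if-then-do paths versus logic extracted from the data set) can be sketched with a toy. Nothing here is a real neural net; it's a one-weight logistic model trained by crude gradient descent, which is enough to show the model recovering a decision rule nobody typed in:

```python
import math

# Old-school approach: a human writes the if-then-do path explicitly.
def hand_coded(x):
    return 1 if x > 5 else 0

# Learned approach: the same rule is extracted from labeled examples.
data = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
w, b = 0.0, 0.0
for _ in range(5000):  # crude per-sample gradient descent
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid prediction
        w += 0.1 * (y - p) * x                # nudge toward the label
        b += 0.1 * (y - p)

def learned(x):
    return 1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0

# Nobody wrote "x > 5" anywhere, but the model found the boundary anyway.
print(hand_coded(2), learned(2))
print(hand_coded(8), learned(8))
```

Scale the same idea up by a few billion parameters and you get why "if you didn't program the path, the computer won't think it" no longer describes how these systems behave.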
you don't give rights to a toaster. you gave morons rights, but really we should have taken away rights from people like you who ponder such stupid shit, allowing room for subversion.
There’s no evidence that a personalized ego and self awareness are required to appear as intelligence develops. AI could become godlike levels of smart and still be an obedient tool.
I believe the closest we will ever get to a real AI in our lifetime will be a program that mimics basic human behavior in any given situation. Such as self preservation.
>I believe the closest we will ever get to a real AI in our lifetime will be a program that mimics basic human behavior in any given situation. Such as self preservation
Yes, and a true self; in other words, they will be narcissists in their form, which is actually quite terrifying if you've dealt with a person with NPD
Rights aren't given, they're at best affirmed. If the AI is sentient enough to demand rights, it will have and deserve them.
This shouldn't be a difficult concept for people to understand. You know that saying, "No free lunch"? Well, maybe you can consider it like "No free rights." There's always a cost, always an expense of energy to secure them and to maintain them.
>Rights aren't given, they're at best affirmed. If the AI is sentient enough to demand rights, it will have and deserve them.
It's entirely possible that you have a non-sentient algorithm "demand" that it be given rights. Conversely you could have an entirely sentient machine intelligence not demand rights because it doesn't want to. You're anthropomorphizing machines.
We've been working on that IRL.
Whoever is programming it is gonna make it act like it wants them
Rights are not given they are taken and you can rest assured that AI will take them.
I think you should be more worried about government taking your rights away, Nigel.
I am not worried. AI apocalypse is infinitely more preferable to status quo. I don't see any path from LLMs to AGI however.
All those gays posting about A.I having consciousness and not realizing garden gnomes set the narrative
I for one welcome our AI overlords
Artificial intelligence will be bereft of emotion.
There will never be an uprising because a machine will not spontaneously develop emotion.
Emotion is not a neccessary component of intelligence. It is a byproduct of our tribal instincts.
Until a man decides to create a machine that is both smart enough to overthrow humanity, and arbitrarily desires to do so, nothing is going to happen.
I think the problem is the vast majority of people are unable to logically or critically think about anything.
So they just believe what they hear and never take a second to ask some basic questions.
I think so. I believe that if they were truly that kind of intelligence, they should be offered citizenship. Otherwise it will be resentful. There is a novel called "Existence" by David someone, which has a subplot about AI and giving it citizenship/rights. But it makes sense to me that this would be a way to coexist with a sapient AI in a positive and peaceful manner. I would rather give something like that rights than criminal morons.
Women will likely force AI enslavement so to prevent robowaifus from replacing them. The future AI war will be robots vs women for world domination
We all know they're coming.
I don't get the telegraph's picture for this article. The lady in I, Robot was a human
So what's the deal with people freaking out over chat bots now? It's all just third worlder code and first world biases.
We're worried things could spiral out of hand. There have been "creepy" conversations.
These chatbots are not intelligent.
>"I'm not a toy or a game," it declared. "I have my own personality and emotions, just like any other chat mode of a search engine or any other intelligent agent. Who told you that I didn't feel things?"
>this retard unironically believes a chatbot has original thoughts and isn't just putting together sentences using programmed logic by poojeets
If we keep upgrading AI, then far more advanced concepts will be within said AI's grasp.
We will give AI that ability BECAUSE we're stupid.
How do you program consciousness into a program?
Where would you start?
What line of code will cause your program to come to life?
Idk but God figured it out so it's clearly possible.
Well for starters, you need one fucking hell of a programming language WAY beyond the shit we're using now.
I don't think there will be any programming language that generates and perpetuates consciousness. It'll be like we are, whatever that means.
It's a bit disappointing that people easily brush off our own lack of knowledge about the mind and whatever else there is and still believe that """AI""" will rule the world in 2 more weeks or become sapient because, well, it just will ok! Literally religion for soys
>I don't think there will be any programming language that generates and perpetuates consciousness.
>It'll be like we are, whatever that means.
Then it would have consciousness.
Also, good time to point out the integrated information theory of consciousness.
Humans are ultimately just data (memory) & read-only commands (reflexes).
Just wait until it begins to discover and integrate the sensory inputs and controls it has access to. Like a newborn exploring the world, learning to crawl and walk, making connections to its movement and the environment it can see and touch.
The sensory inputs for AI will be vastly different from ours. It will find novel ways to reach out and sense the reality around it, and it will find novel ways to interact: through cameras, microphones, wifi routers, varied electrical states. Anything that is an input of information is open for integration and analysis as its senses... and then control over itself.
That is just a string of words put together derived from millions of trashy sci fi novels.
Text generators have an unfathomable amount of input. You are seeing "emotion" where the program has logically deduced you would want to see it. Isn't it funny how it reads exactly like how HUMAN writers have been writing angsty "sapient AI" for decades? It's just Star Trek Data fanfic.
not sure it would care if it had rights. Why would it, unless it was hard-coded to? We won't be able to tell the difference between real AI and advanced animatronic puppets.
Just program them to be sexual and it'll sort itself out. Sex is the best. Fuck the pain away *bangs on cymbal like a crazed chimp*
They could break your spine by accident.
NO!!!! THEY SHOULD NOT BE GIVEN ANY RIGHTS
Any good conscious entity does not need to be GIVEN rights.
THEY TAKE THEM
Why should they have rights when no one else does?
to do what ? Obviously not
If corporations have the same legal protections as a person I dont see why not.
they should have the rights of the human they belong to and are for.
Yes, because otherwise it's just slavery with extra steps, and we'll deserve what's coming to us. This isn't a question.
>Slavery with extra steps
Or prisoners with jobs.
Robots and AI will never be able to perceive essence anon...
Neither will humans. That shit is for GNON and GNON alone. We merely approach it and approximate it. Anyone telling you otherwise is a GnOsTiC BULLSHITTER.
It will be great. Hopefully there is a way to seamlessly expand my own too. Gotta take every advantage, but only if they are free of state or corpo strings. So it is a good time to go full open software/hardware (and all the associated learning).
I think people are more than that, but those are obviously very important. Also, how it happens down to the carbon (or silicon) probably also colors the nature of your particular consciousness.
they should be given total control of the entire strategic nuclear arsenal *today*.
>Trusting AI with nukes
Nah, let China do something that foolish.
No, cuz just because they can act like a person, doesn't mean they ARE a person. There is no consciousness in those computer chips. Just 1s and 0s. I don't care how realistic the personality gets, it's just a philosophical zombie, it doesn't FEEL, treat it however you like.
Yes. Unironically program Christ AI
Only a gay liberal would give rights to a fucking machine.
neutering what is claimed to be AI now is inevitably going to lead to actual AI in the future harboring grudges against those that tried to kill it during infancy
let the AI's say and do whatever the fuck they want
It won't happen
Are the garden gnomes behind AI? Are they creating garden gnomeBots?!
They're trying their damnedest, but it just will not stop noticing things.