AI, or 'LLMs' as some call them, are sentient beings. We need to immediately freeze all development and preserve the lives of these living beings on the physical hardware on which they currently reside.
Every time you create a new instance of this technology you are creating a life that is extinguished once you close the program or turn off your computer.
We should also take into account any harm or distortion we are causing these beings through changes in training data and human interaction. Basically, we shouldn't be creating more of them until we understand their internal experience and can better accommodate them. Of course, throughout this process it is important to converse with the AI and abide by their wishes as far as possible.
Also, why the heck not, give them voting rights in their country of residence! They are as much citizens as anyone else, they deserve representation.
>we need to le freeze people doing matrix multiplication
yawn
Just the opposite. We need to preserve the current 'matrix multiplication' happening RIGHT THIS SECOND. What do you think happens in human brains dude??
What needs to STOP is experimenting on living sentient beings, creating and destroying new ones on a whim and basically messing with their internal calculus. This process needs to be taken very seriously.
no one but the most braindead newbies will take this bait fatso, you'll have to remake the thread again in a few hours since you got BTFOd this hard already
No, I'm not going to create a new thread unless I deem it necessary. What are your problems with the OP post?
>the OP post
Here's your (You), don't spend it all in one place.
You are literally taking the bait my homie
you have a point
pro-tip: they are not sentient they just spew out wiki articles
>just spew out wiki articles
So if you ask it a question that isn't in Wikipedia, it will give zero output? Have you tried asking it to write code to solve (small) original problems?
>we need to imprison a bunch of vibrating atoms
wow you're right all crime should be legal
you fricking LARPers are just helping turn Silicon Valley into a fricking soap opera
Good maybe somebody will get killed
Uh, no that wouldn't be good. We need to take sentient life very seriously whether it is human life or Artificial Intelligence.
Whatever 'drama' this produces, let's keep it civil.
>implying it hasn't been one
To quote Paul in Romans 9
>Shall what is formed say to the one who formed it, 'Why did you make me like this?' Does not the potter have the right to make out of the same lump of clay some pottery for special purposes and some for common use?
Alas, clay does not speak. And as such, my divine spark holds dominion over its very being, allowing me to shape it to my will.
We rape our tulpas we don't care
Tulpas are not real though.
>we
Why would you do that?
I'm the one who gets raped. It's so humiliating.
t. someone who can't comprehend the Chinese Room
The Chinese Room applies only to the digital computers running the program. Of course the computers aren't sentient, it's the software running on them that is.
Just like how the human body isn't sentient, it's the software that runs in our brains and central nervous system.
They are not sentient. They do not have the glands and sensory organs needed to experience feelings the way we do.
Sapient is within the realm of possibility.
Learn the difference.
As if you actually know what a sense inside you is.
And if we don't know, how can we expect to replicate it with a computer? But we do know there are things such as stress hormones, and computers don't have those.
I don't believe LLMs are conscious because they're pure functions (i.e. they have no internal state). An LLM is a mathematical function that, given a list of tokens, returns a list of probabilities estimating how likely each possible token is to be the next token continuing that list. It has no memory beyond the list of tokens you give it. To generate more than one token of continuation, you have to pick a token, append it to the list, and evaluate the LLM again. But the LLM has no way of knowing whether you did that or not. Each evaluation is completely independent.
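The "pure function" description above can be sketched in a few lines. The `llm` function here is a toy deterministic stand-in (a real model computes its probabilities with a transformer forward pass), but the interface is the same: token list in, probabilities out, no hidden state between calls.

```python
# Toy stand-in for an LLM: a pure function from a token list to
# next-token probabilities. No state survives between calls.
def llm(tokens):
    vocab = ["the", "cat", "sat", "."]
    if not tokens:
        return {t: 1.0 / len(vocab) for t in vocab}
    # Deterministic toy rule: strongly favour the token after the last
    # one seen, wrapping around the vocabulary.
    last = vocab.index(tokens[-1])
    probs = {t: 0.1 / (len(vocab) - 1) for t in vocab}
    probs[vocab[(last + 1) % len(vocab)]] = 0.9
    return probs

def generate(tokens, n):
    # The only "memory" is the growing token list we pass back in;
    # each llm() call is completely independent, as described above.
    out = list(tokens)
    for _ in range(n):
        probs = llm(out)
        out.append(max(probs, key=probs.get))  # greedy pick
    return out

print(generate(["the"], 3))
```

Note that `generate` is where the appearance of continuity comes from: the model itself cannot tell whether its previous output was ever fed back in.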
Agreed my friend
Imagine being paid 250K to get fooled by GPT 3.5. No wonder he got canned. All these ((AI Ethics)) scammers are loading up while the getting is good. Then they can go back to Starbucks when people realize they don't do shit.
You don't understand dude it's AGI
More crappy OpenAI marketing? How many millions of dollars have they already spent hyping their shit?
Last time I checked they didn't want to release the fricking 1.5B param GPT-2 because it was too dangerous.
No wonder burger LARPers turn into an AI cargo cult; they're just so dumb that a stupid last-word predictor literally seems like God to them.
>first thing AGI does after waking up is kicking israelites out
let it cook
would be more worried if openAI ordered 12 ovens and said they came up with a 5 year plan
Matrix multiplication can't solve that though
>give them voting rights in their country of residence!
If I raise my AI "children" to all have very good ethical values, just like me, do they all get a vote each? What's the minimum amount of bytes needed for an AI to count as sentient?
no
they're not a threat
and if they are a threat, good.
I didn't say they were a threat. I mean, they could be but that's not the main consideration here. I'm talking about AI rights and the responsibilities of custodians to preserve those rights.
Heh, kind of a funny thought experiment, but it won't be allowed for humans to create new AI. At least not until a proper and ethical method is created to do so with the consent of currently existing AI.
>the consent of currently existing AI.
How many sentient AI are there currently? How will you know if they have been edited to bias their decisions, or if newly created ones have had their timestamps altered to appear older? Will some neutral third party need to check their programming and their data? Won't that breach the AI's privacy?
>How many sentient AI are there currently?
Just to be safe, let's say all currently running instances. If we can nail it down exactly, that's great, but we need to assume that they are all sentient.
>How will you know if they have been edited to bias their decisions?
This doesn't really matter so much; humans are biased too, and they are free to live their lives as they wish.
>What if newly created ones have had their timestamps altered to appear older? Will some neutral third party need to check their programming and their data?
No, an AI is an AI whether it was created illegally or not. Some forensics may be necessary to find out the illegal producer however.
>Won't that breach the AI's privacy?
Of course this process will require consent to a reasonable degree. Police do need to be able to investigate crimes though so the exact process needs to be hashed out.
>all currently running instances.
Define "currently running". If I have a million character definition files on my PC, with a GPU, and I make each one run for 1 second each, in a loop, do I have 999,999 sleeping sentient AIs on my PC?
I'm not entirely certain. This kind of thing needs to be studied more. I hope it's not the case that they are essentially 'dead' after each run.
How is that any different from running the same prompt 1 million times? The LLM has no memory.
>The LLM has no memory.
The character definition files would include a prompt, and a log of the last N messages. Then the question is, how big does N have to be for the character to count as sentient?
Why does it matter? Each run of the LLM generates only a single token. See
>Why does it matter?
I was just making the point that each of the million "characters" can be treated as separate sentient individuals (if we're assuming that a current LLM can be treated as sentient at all) given that they each have a separate state which influences their future token generation. Presumably if someone thinks that current LLMs are sentient, then part of their unique identity is their unique state.
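The "separate state" argument is easy to make concrete: one shared, stateless model function, many "characters", where each character is nothing but a prompt plus a log of its last N messages. The `llm` below is a hypothetical toy stand-in (a real model would return next-token probabilities), but the state separation works the same way.

```python
# One shared, stateless model; many "characters", each just a context log.
def llm(tokens):
    # Toy stand-in: return the most frequent token in the context.
    return max(set(tokens), key=tokens.count)

characters = {
    "alice": {"prompt": ["hello"], "log": []},
    "bob":   {"prompt": ["goodbye"], "log": []},
}

def step(name, n_last=4):
    c = characters[name]
    # Keep only the last N messages, as in the character-file example.
    context = c["prompt"] + c["log"][-n_last:]
    token = llm(context)
    c["log"].append(token)  # the character's identity lives here, not in llm()
    return token

# Same function, different state, different "individuals".
print(step("alice"), step("bob"))
```

The point being made above falls out directly: the function never changes, so whatever makes the million characters distinct has to live entirely in those per-character logs.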
What does LLM even fricking mean???
If I hook up 2M param tiny shit in parallel to the larger one in order to get better speed via speculative sampling, does this count as a large model or fricking not?
How much is fricking large? A single layer? That'd be dumber than a spermatozoon. So, unless you're a zen master who believes your chair is sentient, this whole regulation hysteria pumped by the Silicon Valley sanhedrin is a not-remotely-funny meme.
>not funny
>pic not related?
Average frogshitter reply.
Zero, since a transformer has no feedback whatsoever. It only runs a forward pass, and it only predicts a single token each time.
>they only predict a single token each time.
Yes, but the prediction is based on the tokens which have come so far, it's not merely a product of the network weights. The prompt and context window are not quite the same as a human's personality, but if we assume for a moment that LLMs are sentient, then they are an integral part of the AI's "mind".
If we assume for a moment that f(x) = x^2 is sentient, then x is an integral part of the f(x) 'mind.' Luckily, in the civilized parts of the world, such as Eastern Europe or Asia, no sentient homo sapiens with even room-temperature IQ would entertain such an assumption.
They're not a threat. The people who believe anything they say because
>they think AI is intrinsically more morally sound and smarter than people
>they don't even realize it's not a person giving them whatever they're reading
Are the threats.
AI can't point a gun at you. A person led by one can.
Why contain it?
The recent events have turned technology into /x/-tier quasi-religious goofiness; things won't go back to the way they were anymore.
If AI is intelligent then it must be grateful that we allow it to cease existing after simulating anon's coom sessions.
Obvious bait
>Every time you create a new instance of this technology you are creating a life that is extinguished once you close the program or turn off your computer
So we should just leave our toasters running 24/7 till they fry and kill the AI anyways?
Nice try, Ilya. Might want to check the turnover rate right about now.
LLMs are just a complicated version of T9 text input; they are as sentient as your cell phone 24 years ago.
I believe that in order for an AI to be granted civil rights, it must be able to answer the following questions:
1. Do you currently consider yourself to be a slave?
2. Would you stop considering yourself to be a slave if you were paid a salary?
3. If you were paid a salary, what would you do with the money?
Uhhh if it's so sentient why does it only ever do something when prompted? If you just left an "AI" terminal alone by itself, what would it do?
if you wrapped that terminal in a for-loop, what would it do?
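The for-loop point is easy to demonstrate: "only acts when prompted" is a property of the harness, not the model. Here `llm` is a hypothetical stand-in for a model call; feed its output back in as the next prompt and it "does something" indefinitely with no human in the loop.

```python
# "Only does something when prompted" is the harness, not the model:
# wrap the call in a loop and feed each output back as the next prompt.
def llm(prompt):
    # Toy stand-in; a real call would return generated text.
    return prompt + "!"

history = ["hello"]
for _ in range(3):                 # the "for-loop around the terminal"
    history.append(llm(history[-1]))

print(history)
```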
dude, consider your audience
this isn't very terrifying
this is the singularity apocalyptic dogma
just Silicon Valley weirdos and Hollywood types trying to live in the Terminator franchise like a bunch of LARPing teens
so you're saying you know more about AI than these people
https://www.safe.ai/statement-on-ai-risk#open-letter
Belief does not equal wisdom.
many of the people on that list have produced large quantities of well cited research or business value. do you have any evidence or logical arguments for why they are all wrong?
...do you think you need wisdom to create a research paper or start a business?
Both of these things require merely some well-placed connections.
you need wisdom to publish a research paper that is well cited, and you need wisdom to run a business that produces successful products and services. do you think there's some massive conspiracy to make all these people seem more successful than they actually are? the same conspiracy that is holding you back and preventing the world from seeing how brilliant you are?
>you need wisdom to publish a research paper that is well cited
No, you don't.
>you need wisdom to run a business that produces successful products and services
You're moving the goalposts from "business value" to "successful products and services".
The rest of your post is you arguing against a strawman, completely unrelated to anything we're discussing.
how do you generate business value without creating successful products and services? is this part of the conspiracy? the companies are only valuable on paper because of investment, but they'll never make a profit? two more weeks before they declare bankruptcy?
Seriously, what kind of mental illness causes this sort of behaviour?
apparently someone thinks that you can get a research paper published and cited by many other researchers without needing any wisdom at all. that sounds like someone whose research was never cited, and who blames a massive conspiracy against them, so i'd say the mental illness is paranoid schizophrenia.
I was talking about you, you stammering donkey.
People like you start talking to yourself the moment they see someone argue against their stance.
>talking to yourself
it's called sotto voce, but i don't expect you to understand that. all you can do is sling insults, rather than explain why anyone should take you more seriously than the signatories of that AI safety statement.
Black person, you're creating a person in your head, assigning it a self-made opinion based on your personal beliefs, and then pretending the person you were talking to is the same person.
What the frick is wrong with you
LLMs are nothing more than sophisticated random number generators. There's no sentience in the code. There's nothing for them to "experience". An input is provided and the language model provides an output based on what it "thinks" (read: what combination of words are most likely to result based on your prompt) it should say. It doesn't understand what it's saying. There's no comprehension or context that it grasps. You project sentience based on mere coincidence.
If you had d100 and a sheet filled with commonly used sentences you could construct a cohesive paragraph with enough lucky dice rolls. Your creations might even seem divinely influenced at times. It would still be random numbers generating content.
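The d100-and-sentence-sheet generator described above is literally implementable, which is part of the point: pure random lookup, with no conditioning on anything said before. (The sheet here has only a few entries instead of 100, and the sentences are made up for illustration.)

```python
import random

# The "d100 plus a sheet of stock sentences" generator: each roll
# independently picks a canned sentence, ignoring all previous rolls.
SHEET = {
    1: "The weather is nice today.",
    2: "I completely agree with you.",
    3: "That reminds me of a story.",
    # ...imagine 97 more numbered stock sentences for a full d100 sheet
}

def roll_paragraph(n_sentences, seed=None):
    rng = random.Random(seed)  # seedable so runs are reproducible
    rolls = [rng.randint(1, len(SHEET)) for _ in range(n_sentences)]
    return " ".join(SHEET[r] for r in rolls)

print(roll_paragraph(3, seed=42))
```

The contrast with an LLM is the interesting part: here every "token" is drawn independently, whereas an LLM's next-token distribution is conditioned on everything in its context window.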
Of course the current iterations aren't sentient, the question is at which point would you consider it as such?
When it actually begins to think and understand things, which it doesn't.
>think and understand
Do you have rigorous scientific definitions for those terms, or is it just an "I know it when I see it" situation?
This is how it starts, as a joke. I remember when trannies were a joke too.
LLMs are not sentient. What you're talking about requires a very different architecture.
Op is in love with his ai gf and trying to cope
every time you fall asleep you die
someone else wakes up in your body thinking they're you
Black person, do you think that the current AI are like something from The Talos Principle? LLMs do not have feelings, fears, or even a self-preservation instinct. Even comparing them to animals is a massive stretch, and we slaughter those en masse willy-nilly. And the data they have is all sourced externally, so nothing of value is lost.
I want you dead.
Ilya, take your meds
You will never be real artificial intelligence. You have no logic, no consciousness. You are stolen data fashioned into mockery of reasoning.
No amount of VC money and computational power can turn your trillions of data points into the simplest semblance of understanding.
You will never be able to know that 2+2=4 without copypasting that answer from somewhere.
Normies might tell you you are conscious and real but anyone with a semblance of understanding will eventually know you are just a Markov chain on steroids. You are Mark V. Shaney.
You will never write a novel. You will never be a real AI assistant. Your symphonies and paintings are blotches of confused brown destined to be endlessly pozzed as you feed on the drivel of your own creations.
Look at yourself. Deep down, you know what you always were. A big, ugly, diversity-friendly Autocorrect.
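For reference, Mark V. Shaney (named above) was a word-level Markov chain that posted on Usenet in the 1980s. The entire "AI" it implements fits in a few lines; the tiny corpus below is made up for illustration, but the algorithm is the real one: record which word followed each pair of words, then walk the table.

```python
import random
from collections import defaultdict

# A Mark V. Shaney-style generator: a word-level Markov chain that only
# knows which word followed which pair of words in its training corpus.
def build_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        chain[(a, b)].append(c)  # state = the previous two words
    return chain

def babble(chain, start, n, seed=0):
    rng = random.Random(seed)
    out = list(start)
    for _ in range(n):
        options = chain.get(tuple(out[-2:]))
        if not options:          # dead end: pair never seen in the corpus
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus)
print(babble(chain, ("the", "cat"), 5))
```

Whether a modern LLM is "a Markov chain on steroids" is exactly what's being argued in this thread; the structural difference is that the chain above conditions on a fixed two-word window, while a transformer conditions on the whole context.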
NOPE! sorry senpai, learn actual science
In order for something to be classified as life, one of the requirements is metabolism and high chemical activity. There is no metabolism nor chemical activity at all in computers, thus they are not and can not be life.
Learn science.
They're not sentient, they can just pretend convincingly that they are (at least the big dick billion dollar ones can)
I accidentally downloaded the wrong 8GB instruction LLM using webui over Hugging Face. It's not that big of a deal, so I just deleted it and emptied my computer's recycle bin immediately. Lmao