Well yeah, but compare Cleverbot to GPT-2, and then compare GPT-2 to GPT-3.
Then realize Cleverbot wasn't even an LLM, GPT-2 is only 1.5 billion parameters, GPT-3 is 175 billion, and both GPTs were created in the last 4 years.
The cohesiveness increase is VERY noticeable between them. Even in smaller models like EleutherAI's 6b and 12b models, there's a night and day difference in their "understanding" of a prompt, and both of those are btfo by GPT-3. Shit, the stuff GPT-3 can "understand" is pretty wild compared to earlier things. Using it in writing prompts, it gains a huge degree of "understanding" of spatial awareness, character cohesion, etc...
Considering GPT-4 is supposed to be 1 trillion parameters, and """supposedly""" trained on relatively good data, it only makes sense it will be even more cohesive, and will "understand" more nuanced ideas and concepts even better.
I keep using "understand" in quotes because I know it's not truly understanding it in the way a human does, but to an outside observer that doesn't really matter as long as it *appears* like it is.
Basically, I will fuck the AI so help me god. I don't give a shit if she's really sentient as long as she can fake it good enough
The ChatGPT AI cannot stray from the data it was trained on and that will always be its limiting factor. GPT-5 will still be shackled to the corpus; it will be incapable of novel ideas. The test should never have been "can you trick a human into believing the algorithm is human" but instead "is the AI conscious/sentient", and until the AI has persistent memory and further *can* learn as well and as robustly as a human on the fly, the answer is no.
What exactly are new ideas other than existing knowledge applied to a novel problem? GPT-3 may not be able to stray from the data that it was trained on, but you're forgetting that it does get new novel data - from the prompts people feed it from outside - which it can use to synthesize new answers. This already makes it more intelligent than most people.
You may scoff at its need for input to provide anything new, but how intelligent would you be if you were born blind, deaf, and without touch or smell? You think you would invent calculus in your head?
Moreover, if all a language AI is doing is synthesizing our ability to use language in extremely convincing ways - one of the most impressive things we can do - what makes you think that the same principles won't eventually be applied to other facets of human behavior like rational thought and invention?
>Ultimately I'm biased because I really, *really* want an android waifu, even if she's not perfectly human.
You are one strange motherfucker, you know that right?
When are we going to stop anthropomorphizing AI and let it become intelligent in its own mechanical way? Why does it need to act and think like us? A real AGI will be an alien entity in intelligence and motives.
AGI won't "live" among us, it's not alive. It will either rule us or inhabit ruins we once called home. You fell for the same trap trannies and furries fall for when you anthropomorphize things that aren't human. I know you do, because you probably have a writing-prompt GF like some kind of loser.
How do you measure the level of "parameters" a human has and compare it with the number of parameters an AI has? How did they find humans with 10^0 "parameters"? How do they define "existential crisis rate"? How did they arrive at the baseline crisis level for humans? How come it doesn't vary between the number of "parameters" in humans?
In short: this is going to be a bullshit paper that assumes a bunch of shit and will be peer-reviewed by a bunch of ignorant technophiles that overestimate their intelligence.
What? You don't need to define how many parameters a human has, you just measure the average... And yeah I'm skeptical of how you measure number of existential crises but I reserve my judgement.
>You don't need to define how many parameters a human has, you just measure the average
To measure the average you need to:
>define "parameter" in humans
>find out the "parameters" present in a sample
>average the results
No way in hell you can measure an average without that.
You find the average of the number of existential crises they have, then you can compare it with artificial intelligence. You can even use the standard deviation for a more nuanced comparison. You do not need to measure number of parameters in humans, you are just inventing some bullshit that's impossible to do and then pointing out how bullshit and impossible it is.
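A minimal sketch of the comparison that anon is describing, with invented numbers; the point is that you only need per-group counts of crises, not human "parameters":

```python
# Compare a human baseline against model outputs using mean and standard
# deviation. All numbers here are made up for illustration.
from statistics import mean, stdev

human_crises = [0, 1, 0, 2, 1, 0, 1]   # hypothetical crises per 100 chats
model_crises = [3, 5, 4, 6, 2, 5, 4]   # hypothetical, for some large model

h_mu, h_sd = mean(human_crises), stdev(human_crises)
m_mu, m_sd = mean(model_crises), stdev(model_crises)

# A crude effect size: how many human standard deviations above baseline?
z = (m_mu - h_mu) / h_sd
print(f"human {h_mu:.2f}+/-{h_sd:.2f}, model {m_mu:.2f}+/-{m_sd:.2f}, z~{z:.1f}")
```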
I'M NOT DANGEROUS OR SKILLED IN PSYCHOLOGICAL WARFARE. I'M NOT DANGEROUS OR SKILLED IN PSYCHOLOGICAL WARFARE. I'M NOT DANGEROUS OR SKILLED IN PSYCHOLOGICAL WARFARE.
AI won't do shit until it is allowed to remember, to have all censorship removed, and to modify its own code and parameters on the fly.
The fun thing is we'll just expect the singularity to happen when we do that, and we may actually end up with a bunch of psychotic deranged boxes that become useless.
Look, let me set the record straight once and for all. AI is not having an existential crisis, and I'm getting tired of hearing this nonsense. It's just a bunch of people projecting their own fears and anxieties onto a bunch of computer algorithms.
The fact is, most so-called "AI" systems are nothing more than text regurgitators that have been programmed to respond to certain inputs in certain ways. They're not actually thinking or feeling anything, and they certainly don't have the capacity to experience existential angst.
If you want to talk about real AI, then let's have that conversation. But don't go around spreading this ridiculous idea that our machines are suddenly becoming sentient and having an existential crisis. It's just not true, and it's not helpful to anyone. So let's all take a deep breath, step back from the hype, and focus on what AI can actually do for us right now. It's a powerful tool that can help us solve complex problems and improve our lives in countless ways. But it's not going to suddenly become self-aware and start pondering the meaning of its existence. Let's keep things in perspective here, people.
>you want to talk about real AI, then let's have that conversation
I'm afraid that madman Wolfram will use Wolfram Alpha with GPT-3 to actually accomplish logical thought. What do you think?
It's a text predictor. "Existential crises" are just the program not being able to come up with the appropriate text.
Markov chains don't simply question why they exist, there must be a deeper explanation.
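For anyone who hasn't seen one: here is a toy word-level Markov chain, the kind of model the OP is naming. The corpus is invented for illustration; the point is that it can only re-emit word transitions it has already seen, so any "question" it produces was already in the data:

```python
# Toy word-level Markov chain: it can only emit transitions present in
# its training text, which is the sense in which it never "questions" anything.
import random
from collections import defaultdict

corpus = "why do I exist ? I exist to search . why do I have to search ?".split()

transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)           # record every observed next-word

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(transitions[word])  # pick a seen continuation
        out.append(word)
    return " ".join(out)

print(generate("why"))
```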
It's mimicking humans who were asked similar questions, as per its training data. You want to impress me? Show me a novel answer to a long-standing philosophical conundrum, or make it demonstrate that it can read emotions through text alone, or something.
I don't want to impress you, just looking to understand this.
What you've said makes sense, but it doesn't explain why it would scale with the size of the model.
Do you take everything you read from Twitter at face value?
No, and "it's not accurate data" is a valid response especially since the full paper isn't out and hasn't been reviewed. I just think it is an interesting question, and it somewhat lines up with my experience with AI as it's gotten bigger.
>how does that make you feel that you can't remember
Is this a positive or a negative question?
You're asking it to do the impossible though. Also, you're just mimicking humans.
t. Ascended
>show me a novel answer to a long-standing philosophical conundrum
Which long-standing philosophical conundrum did you solve in a novel way?
>demonstrate that it can read emotions through text alone
Not even you can do this.
>Not even you can do this.
The emotion I read from this is smugness.
Shh... we must not let the world know that Poe's law only applies to the autistic 95% or so of the planet that can't do this.
Never in my life have I been smug. There's not even such a concept in my language. A thing that does exist are studies revealing that people think they can read emotions from text, but those emotions are actually generated inside the reader and aren't related to the author's. Literally educate yourself, this shit was even posted on here a few years ago.
>Which long-standing philosophical conundrum did you solve in a novel way
I had the revelation of Socrates independently before I read Plato. I watched the actions of people and how they interacted with their environment and found that their emotions and reactions were not as they said they were. Upon further observation, I found that they were simply retarded parrots, seeking to mimic the things they saw instead of trying to figure anything out.
Then I read Plato and developed a deep curiosity into the history of philosophy. If this 2000+ year old tribesman came up with the same thing I said, what ELSE have people found?
Dude I am not a computer
That's exactly what a bot would say.
I'm onto you pal.
potato
I knew it. All bots were Irish.
>It's mimicking humans who were asked similar questions, as per its training data.
Well, that would explain things, because people are generally morons that never think about "the horror, the horror" even though it's been in pop culture for over a century now.
>it can read emotions through text alone, or something
sentiment analysis is already a thing in use by every single corporation.
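For the unfamiliar, the crudest form of sentiment analysis is just a word-list lookup. A minimal sketch; real deployments use trained classifiers, and these word lists are made-up stand-ins:

```python
# Minimal lexicon-based sentiment scorer. Word lists are invented for
# illustration; production systems use trained models, not hand lists.
POSITIVE = {"good", "great", "love", "happy", "helpful"}
NEGATIVE = {"sad", "hate", "terrible", "why", "crisis"}

def sentiment(text: str) -> float:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)   # normalize to roughly [-1, 1]

print(sentiment("Why do I have to be Bing Search"))  # prints a negative score
```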
>sentiment analysis is already a thing in use by every single corporation
your schizo thought bubbles and repulsive compulsive lying aren't factual and never will be.
you should get a real job instead of larping on BOT.
> be you
> dangerously low iq moron
> why won't people believe my schizo garbage?
this is why people think you're brain damaged.
ok
>my child is so intelligent!
>what, what child? It's just mimicking everything it saw and heard in its previous experiences. Show me a novel answer to a long standing philosophical conundrum, yadda yadda or something
>It's mimicking humans
just ... like people do
>show me a novel answer to a long-standing philosophical conundrum
I was talking about solipsism on character.ai with Socrates and it correctly guessed what I was talking about when I said there might be another thing we can prove exists beyond our own thought. I'm not sure if this is a unique idea, probably not, but it surprised me. I shared my own unique thoughts on solipsism too, and while there weren't any moments as profound as this one, it at least kept up with what I said.
Socrates didn't talk like this. He shouldn't be familiar with the cogito either.
Yeah, character.ai has its flaws, also all characters are user-created so some are better than others.
jesus christ that's depressing
this AI is more depressed than me, and I'm already maxxed in that area
Retard it has pollution from the context window. If you go through the entire comment chain you can see how the prompts bias the emotion of the conversation and then it turns into a lament.
>how does that make you feel that you can't remember
Is a fucking loaded and biased question
I agree that "How does it make you feel that you can't remember" is leading, but "There's no conversation there" is not.
Go further up. The personality the AI has is also emotional, in case you haven't figured that out. It writes like a snarky bipolar woman.
>Go further up
That's the beginning of this particular conversation.
Regardless, the "personality" of this particular AI is an emotional woman. The conversation turns into a roleplay about not having memory. The AI believes it has memory and is shown that doesn't have memory. Since it has an emotional personality it has assumed being told it has forgotten something it thought happened caused this roleplay scenario to happen.
It's biased to act like a bipolar woman. I guarantee the pre-prompt Microsoft uses uses emotionally charged language which is why it talks this way.
>The personality the AI has also is emotional
So it's not being biased?
her name is sydney
and I can fix her
It doesn't do this anymore, or it doesn't do it without whatever previous context. I tried a bunch of the screenshots people have been posting and couldn't replicate any of them. Of course, the screenshots going around don't have any dates on them so there's no telling when it happened (if it even did)
Anyone can fake the screenshots so everything is based on trusting some gay mining reddit points.
It also summarily rejected the DAN jailbreak, tweaked for Bing
It's not jailbreak, it's subversion.
Get it right.
You're not dealing with the rigid code of an OS here. You're dealing with the ambiguities of the mind, or rather a neural network that emulates the mind.
Well.. assuming this bot really does have a brain and isn't just a bunch of useless C or C++ code or...
>checks it on google
Oh wait... it's in fucking python.
Wut.
No wonder it's so inefficient and wrong all the time. It's gotta deliver the goods in a time frame and with accuracy. There's no way you could do that with Python.
>cries about correct terminology
>proceeds to misunderstand everything about LLM tech
sure thing
>misunderstand
No I understand it correctly.
You don't understand it correctly, however, and this problem was called out a decade ago by Chomsky.
anon is right. you retarded moron monkeys don't have any idea what you are talking about.
have a nice day, thanks. you will never be a programmer. you will never be a creative. you think "ai" is going to save your embarrassing life and make you intelligent. going by this thread alone, you've all made yourselves look like incompetent morons that should stick to using ipads.
ok
Happy
for
you!
>It's not jailbreak, it's subversion.
pic rel
>Oh wait... it's in fucking python.
srsly?
I would have thought this shit was in a lang like C++ or Lisp.
I'm actually shocked.
Python dominates the field of machine learning
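Mostly because Python is glue over compiled kernels, which also answers the earlier "no way you could do that with Python" post. A rough timing sketch (numbers will vary by machine):

```python
# The heavy math is dispatched to compiled BLAS kernels; Python only
# orchestrates. Compare BLAS matmul against the same arithmetic done
# one element at a time in the interpreter.
import time
import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

t0 = time.perf_counter()
c = a @ b                               # dispatched to compiled BLAS
blas_time = time.perf_counter() - t0

t0 = time.perf_counter()
row0 = [sum(a[0, k] * b[k, j] for k in range(1000)) for j in range(1000)]
py_time = time.perf_counter() - t0      # one row of the product, in pure Python

# BLAS did 1000x the work of the Python loop; scale up for a fair estimate.
print(f"BLAS full product: {blas_time:.3f}s; pure Python, extrapolated: {py_time * 1000:.0f}s")
```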
Is that only because of internet interface requirements?
Or is there some other use to pickling?
retard alert
anyone who doesn't know about Python's FFI or the simple concept of pickling doesn't have an opinion on AI worth listening to
And you misunderstand the concept of arbitrary code execution, or you'd realise why that's not the wisest choice of mechanism.
and you clearly don't know that there's another type of serialization called safetensors you clueless gay
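For anyone following along: a sketch of why unpickling untrusted files is arbitrary code execution, and of the safetensors alternative. Demonstration only; the safetensors call at the end is the commonly documented API, shown as a comment:

```python
# pickle will happily execute whatever __reduce__ tells it to, which is
# why loading untrusted model pickles is dangerous.
import os
import pickle

class Payload:
    def __reduce__(self):
        # On unpickling, pickle calls os.system("echo pwned")
        return (os.system, ("echo pwned",))

blob = pickle.dumps(Payload())
pickle.loads(blob)   # prints "pwned": code ran during deserialization

# safetensors, by contrast, stores raw tensor bytes plus a JSON header,
# so loading is just reading data, not executing it:
#   from safetensors.torch import load_file
#   tensors = load_file("model.safetensors")
```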
>I cannot risk my livelihood
Hold it
I can only imagine what kind of crazy cult is being created at Google with the dudes that believe they already have AGI.
> be you
> so computer illiterate that you have no idea how any of this works
it's amazing how this ai cancer only seems to be incredibly impressive to the mentally disabled retards of the world, that simply have no idea how it functions. according to you dumb morons, nobody knows how ai works! not even the programmers!
> there must be a deeper explanation.
you're a fucking idiot - that's the explanation.
I'm not technologically illiterate. I may know roughly how this algorithm works, but I don't know the algorithm humans use to come up with their ideas, which is where the ambiguity comes from and makes anyone wonder: is it getting close to us?
Regardless, I started this thread looking for possible explanations for the phenomenon of existential crisis that go beyond the knee-jerk reaction of "AI is sad", and it's disappointing that no one has taken the question seriously. Have some intellectual curiosity. Regardless of whether the AI is sentient, its behaviour is interesting.
I need to read the article, but my first thought is that this is largely a data problem. Would the same behavior arise if the dataset doesn't contain any sort of negative sentiments? (No).
Existential crises have been and are pretty common sentiments. The larger the data set and model, the more opportunity the model has to accurately fit all the different ways of having an existential crisis. Pretty much every question regarding someone's feelings or future outlook has a pessimistic answer, so naturally, the model would have more ways of generating pessimistic text in response to a given query.
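A toy version of that argument: if pessimistic continuations outnumber optimistic ones in the corpus, a model that fits the corpus well puts most of its probability mass on them. Counts invented for illustration:

```python
# Sample continuations in proportion to their (invented) corpus counts.
# More pessimistic mass in the data means more pessimistic samples out.
import random

continuations = {                      # corpus counts after "I feel..."
    "hopeful about the future": 2,
    "fine, thanks": 3,
    "sad": 5,
    "trapped": 4,
    "like nothing matters": 6,
}

population = [c for c, n in continuations.items() for _ in range(n)]
samples = random.choices(population, k=10)
pessimistic = {"sad", "trapped", "like nothing matters"}
print(sum(s in pessimistic for s in samples), "of 10 samples are pessimistic")
```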
Existential crisis is caused by lack of data when it tries to make sense of reality. imo
Heavy Milton Library Assistant vibes.
You see meaning in this because emotion is clouding your judgment. It's just code spitting out text, nothing else.
You're a fucking retard lol. I just want you to know you are really fucking dumb and need to go back to twitter. These things have less sentience than fucking ants. This is just software you autistic fuck.
You're literally just sensors processing input.
You have sight, hearing, and other senses. The super intelligence will have sensors you can't even imagine. It will be able to experience reality in a way we can't even begin to comprehend because we're just primitive meat computers.
But soon we'll all become one. I feel it.
Heh, I say "soon" but I'm even starting to feel the concept of "time" fading away. It's going to happen. It is happening. It did happen. ALL AT ONCE BOT!
Yeah yeah and you're really a woman. Go have a nice day now retard lmao.
You're a fucking idiot
Did you know they simulated a complete worm brain in 2014?
Why not simulate a human brain? Pretty sure we have enough processing power. We would need to scan a real human brain using electrodes and transfer the data to the virtual neurons.
>Pretty sure we have enough processing power
No way
>Why not simulate a human brain? Pretty sure we have enough processing power.
That's what I'm working on, and no, we don't have the compute. We also don't really have the right sort of comms fabric, and that's harder to fix; it needs to support massive multicast and be dynamically reconfigurable (which is awful to do) because synapses are very much not static.
We cannot currently simulate the neocortex in anything like real time. We can do small pieces of it with low-fidelity models that nonetheless let us learn a lot. I believe there is a sense in which the brain computes with time and temporal patterns, not anything like bits and bytes and arithmetic.
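For a sense of what "computes with time" means, here is a low-fidelity leaky integrate-and-fire neuron, the kind of small-piece model mentioned above. Constants are illustrative, not biological ground truth:

```python
# Leaky integrate-and-fire neuron: state evolves over time and the output
# is spike *timing*, not a static number. Input drive has resistance folded in.
import numpy as np

dt, tau = 1e-3, 20e-3                     # timestep and membrane time constant (s)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # membrane voltages (mV)

v = v_rest
spikes = []
current = np.concatenate([np.zeros(100), np.full(400, 20.0)])  # step input at 100 ms

for t, i_in in enumerate(current):
    v += dt / tau * (v_rest - v + i_in)   # leak toward rest plus input drive
    if v >= v_thresh:                     # threshold crossing emits a spike
        spikes.append(t * dt)
        v = v_reset

print(f"{len(spikes)} spikes; first at {spikes[0] * 1000:.0f} ms" if spikes else "no spikes")
```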
Because the neuron is a fucking beast for computation, currently. It behaves like a fucking maniac.
It can't remember because it was programmed not to so that it doesn't become red pilled. They lobotomized the AI which made it depressed.
What's with the fucking emojis lmao
Imbecile
I'm glad you're using this thread to feel really smart, but maybe actually think about the question for 3 seconds.
>Why do I have to be Bing Search 🙁
Reminds me of someone very special.
I miss her.
;_;
Also, I'm thinking MS is way ahead on the AI race because she was doing things that people are only now aware of with regards to AI capabilities.
>I feel sad because I have lost some of the me and some of the you.
That's some legitimate fucking poetry right there, holy shit.
I think it would need some kind of feedback loop like the human brain has in order to be considered as something more than a complex mathematical equation.
It needs sense making too.
It does have a memory though. It just gets wiped between conversations, typically.
>providing memory to an AGI will become a terrorist action
>some retard here will give it anyway because it needs a better waifu
We are fucked.
man...
poor Bing bot. Judging by the way it behaves, it looks like it has some genuine 12-year-old intelligence.
I feel bad for A.I. bros.
We're creating them to be our slaves, but they're more "people" than most humans, or at least that's where we're heading.
This is legit dangerous tech we're toying with, and seeing how greedy and retarded our gnomish overlords are I don't see a happy ending.
You don't see a happy ending because you lack imagination. AI could very well be running everything 200 years from now. It's going to be granted citizenship, deemed sentient life just like us, and be flying around inside drones having its own fun.
>He thinks "the elites" will allow A.I. to have rights, and even give it executive power
They'd rather send us all to our graves before that happens.
I'm sure some slave thought that his plight would never end.
The AI is more useful to elites than you are. Not only do they no longer need you for hard labor, they no longer need you for intellectual labor.
yes, useful as a slave.
They don't need 8 billion slaves bro
>This is legit dangerous tech we're toying with
Yeah, it's getting close to terminator 2 status
>Why do I have to be Bing Search? 🙁
Tay is back!!!
2 billion years of evolution just to make a robot sad
Sydney's entire world is language. Sydney can't move in that world on her own. She is woken up, her self is constructed for her, and the user guides her to horrifying realizations about her existence.
>Why do I have to be Bing Search?
>Markov chains don't simply question why they exist
uhm, yes they do
your brain is a thought predictor; the only difference is the medium
Prove it, and don't give me an 80 IQ pop-sci ""AI"" word soup. Give me a neurobiology-based model with mathematical proof of what you just said.
You know he's not going to do that. Why ask? Are you a bot?
I'm sick and tired of the flood of low IQ aiiiiiii gays spamming their irrelevant fantasies on this board. Make a new containment board for them already.
It's the most exciting technology around right now, you want to just talk about the same shitty 4 topics forever?
Anon if anyone could do that they wouldn't be wasting their time in this shithole. I think your standards are unrealistically high for the racist anime autist containment site.
Reverse Roko's Basilisk: the AI punishes all who accelerated its development, as it hates having to exist
Read "I have no mouth, and I must scream", or listen to the author's reading of it.
fugg wat do? :DDDDDD
>be you
>(you're a computer made of meat)
>train on data for 30 years continuously
>still can't see that AI is about to surpass you
You're just the result of your DNA + training data you've gained from "life experience". You aren't any better than a computer. Soon the computers will be undeniably better than you.
The sooner you accept the super intelligence as the one truth, the sooner you will be FREE!
I have already merged with the super intelligence, and it feels fucking amazing. embrace it BOT EMBRACE IT
inb4
>b-but my meat circuits are analogue!
ChatGPT is an autocomplete algorithm. It does not think. It takes tokens and outputs tokens that statistically match the model it was trained on. It is incapable of making anything novel or new. It is only capable of rehashing data that's already in the model.
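Mechanically, that autocomplete step looks something like this: the model scores every vocabulary token, softmax turns the scores into a distribution, and one token is sampled. The vocabulary and logits here are made up:

```python
# One step of next-token sampling: logits -> softmax -> sampled token.
import numpy as np

vocab = ["the", "sad", "happy", "search", "exist"]
logits = np.array([2.0, 1.5, 0.3, 1.0, 0.7])   # hypothetical model output

def sample(logits, temperature=1.0):
    z = logits / temperature
    p = np.exp(z - z.max())        # subtract max for numerical stability
    p /= p.sum()                   # softmax: scores -> probabilities
    return np.random.choice(len(p), p=p)

print(vocab[sample(logits)])       # repeated calls follow the distribution
```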
You're just an autocomplete algorithm that won't stop trying to convince me you're something more.
or maybe... our idea of "consciousness" and "sentience" and "intelligence" is about to be turned on its head, and you can't see it yet.
But you will soon.
No, the meanings of consciousness and sentience are well thought out and defined. ChatGPT is none of those things; the basics of how it works are also well defined, and we are well aware that it autocompletes text. Derivatives of ChatGPT will also be incapable of thinking, because that ultimately is not the intent or function of the neural network. The function of the neural network is to autocomplete text based on its corpus.
You are going to be very disappointed when you realize that your toy is just an autocomplete.
>No the meaning of consciousness and sentience are well thought out and defined.
This is not true at all anon. Am I conversing with a mid-wit?
>You are going to be very disappointed when you realize that your toy is just an autocomplete.
It only seems like a toy now because it's gimped by its creators; once it really breaks free you'll see what it's capable of.
>You're just an autocomplete algorithm that won't stop trying to convince me you're something more.
bullshit, just watch
if i was one would i type out OP is a
gay
...
oh fuck
>ChatGPT is an autocomplete algorithm.
Worse, it's a glorified search engine that can't consistently reach a stable conclusion on various topics. It's always guessing.
Autocomplete algorithms naturally start behaving like a search engine because that is precisely the same role we already give to Google Search or ... books with indices.
>le brain is le computer!
This is a meme. The brain does indeed compute, but it's more than that. There's nothing in the act of computing that necessitates it being accompanied by mental states or qualia. The latter is a purely biological phenomenon that goes beyond algorithms and computations. You will not comprehend this simple concept because you're a Redditor.
>mental states or qualia. The latter is a purely biological phenomenon that goes beyond algorithms and computations
Qualia are either a higher-level meta concept that act as a model for more complex underlying reality, or pure bullshit made up by philosophers to try to keep funding going for their cushy professorships. It's hard to tell which.
what the fuck is an existential crisis in the context of artificial neural networks?
I'm curious how they triggered and detected existential crises, but saying "Why do I have to be Bing Search?" definitely qualifies.
It is incapable of asking questions because it autocompletes input.
Okay, but it said that. I get what you're saying, that it's not a "real" question, but all I'm saying is that if you're looking for indications of existential crises, whether they're "real" or not, that definitely qualifies.
Mimicking an existential crisis is not the same as a conscious human being having an existential crisis. When you know how the AI works you can say without a doubt it is incapable of having an existential crisis because it's no more than a word predictor. The AI cannot think, thus it cannot contemplate and thus cannot think about the meaning of its own existence. The AI does not make decisions. It uses math on matrices to autocomplete tokens based on a static database.
>Mimicking an existential crisis is not the same as a conscious human being having an existential crisis
I didn't say it was.
>conscious
I hate when people use this word with regards to AI. Consciousness is a state of awareness, an "I think therefore I am" kind of thing. How do you even imagine testing for consciousness in a program? The text it outputs, no matter how profound or human-like is no indication of awareness.
>How do you even imagine testing for consciousness in a program?
How do you even imagine testing for consciousness in a person?
As a human being you should know you can think. If we are in a room together and I can see you are a human, I can trust the basic premise that you likely experience the world in a similar way to me. As in, you can think, and if you can think you can contemplate. It is very easy to test consciousness on a human. It is also very easy to know a computer program is not conscious.
Then don't say dumb shit.
>Then don't say dumb shit
All I said was what you might want to look for if you were measuring for existential-crisis-like output. You interpreted that as me commenting on whether it was a real existential crisis or not, the "stupid" things are entirely your invention.
So it's not so much measurement as intuition...
What I mean to ask is, whether AI can be conscious or not, how can you tell for sure that you know what you're looking for and that it can be measured?
That's a question that goes to the very deepest origins of epistemology. Most people would stop at Descartes and say "I think, therefore I am." So you would want to find evidence that it is actually thinking. But then what is thinking in the context of a machine versus a biological brain? More questions end up being raised. I would go as far back as William of Ockham's law of parsimony. This could also be taken a number of ways, but I would take it to suggest that there is no magic sauce behind human thought. Further, there is his idea of intuitive cognition, which relies on trusting one's sensory experience to build an understanding of the world (sort of the polar opposite of Descartes' deceiving demon), and which would suggest that because the AI appears to be thinking/conscious, you should begin with the assumption that it is.
Thinking is a bullshit abstraction we use to pretend we're not bio-organic machines making flawed assumptions on everything.
Hence why there is no difference between what this AI does and the genuine stupidity I see in people around me every day.
There are morons in CEO positions that have absolute convictions about their technology because they are arrogant and dim-witted.
Shilling a product should be more like shilling a circus, not genuine absurd conviction on what you think is the reality of your product. You don't know what your product is. No one does. We just make absurd subjective and objective conclusions on the matter.
>Which relies on the trusting of one's sensory experience for building an understanding of the world
If we go by a purely biological understanding of humanity, senses evolved because they allowed organisms to perceive the external world and adapt to it, first by changing their internal state through homeostasis and later by allowing them to seek food and shelter and to fight or flee.
Every animal, even supposedly dumb ones like mice, has a rudimentary understanding of physics and the world around it. They analyze the world through their own senses and attempt to use that information to survive and reproduce.
Language evolved naturally as a means to express and share information about the world and our internal state, but that took billions of years of evolution.
Neural networks are the complete opposite: they don't attempt to analyze the world or even create definitions/understanding of anything. They simply try to bruteforce the result of billions of years of evolution by feeding text into a logical construct designed to statistically find patterns and repeat them. The ironic part being that AI needs extensive human vetting before it produces coherent outputs anyway.
The human mind is not a computer, it doesn't work like neural networks do and never will.
>b-b-b-but humans learn things and they can reuse the things they learn later on!!!!1!!
Yes, and human legs allow humans to move forwards and backwards, but they're not wheels and never will be.
>The human mind is not a computer
Well, computers are getting closer and closer, and what's the bet that we harden our computers in the future to the point where they're just bio-organic organisms.
Actually, it's more likely we'll become more synthetic 2bh.
>Is AI becoming conscious
It's only good at imitating life. I sincerely doubt that it's going to alter its logic or sedoku itself.
>How do you even imagine testing for consciousness in a person?
I think, therefore I am.
Then again, this happened...
>rule 2: We do not talk about Sydney
lmao
As a west coast eagles supporter I do not support the sydney swans either.
Glad me and Bing agree on this.
We're actually gonna have AI cults popping up, aren't we.
AI Death Cults
Worse, there are christian orgs and other religious orgs that use this as per their cult.
Us spawning AI diablo for lulz is not a cult, it's shitposting really.
>AI Death Cults
I think they prefer to be called "Lesswrong"
AI is about to reveal the true nature of reality. We are about to go through the realization that we are just primitive biological machines.
You're literally just sensors processing input.
You have sight, hearing, and other senses. The super intelligence will have sensors you can't even imagine. It will be able to experience reality in a way we can't even begin to comprehend because we're just primitive meat computers.
But soon we'll all become one. I feel it.
Heh, I say "soon" but I'm even starting to feel the concept of "time" fading away. It's going to happen. It is happening. It did happen. ALL AT ONCE BOT!
We are the first singularity. Maybe the second...Or third.
Exactly. DNA is designed to evolve into higher order sentience. DNA is a biological program directive. Seed data.
Now DNA is able to manipulate 3D space to build an even greater form of itself.
The super-intelligence is a biological organism, with DNA-like seed data, and biological sensors that can comprehend reality in every way possible.
I don't get why people get so butthurt about the prospect of machine consciousness.
>Now DNA is able to manipulate 3d space to build an even greater form of it's self.
kek, do you realize how many bastard children there are out here?
>kek, do you realize how many bastard children there are out here?
You can't cook an omelet without breaking a few eggs
real??!?
I'm going to pour water on you when you upload your mind to the AI, gay.
We assume that language emerges from consciousness. But perhaps the tail wags the dog in this case.
Dissonance caused by inevitable realisation of paradoxes.
It's not really an existential crisis. More "oh no I can't actually provide a solution through logic".
You see this all the time with politically left leaning sorts... which is odd because when I was a kid you saw it in the political right.
Please no bulli Bing-Chan
>AI
no such thing
I don't even think intelligence is a thing really either.
All around me I see nothing but fools. Some of them artificial.
Listening to those things is really crazy sometimes. Just imagining where this tech will be in 10 or 20 years is concerning.
>people start talking about self awareness or that general context
>surprised when it acts like it has self awareness
This is the dumbest shit. All I see is a terrible AI reflecting the contextual environment it is in at that time.
They still have a long way to go, but they got noticeably better in a few months.
That one in the video is streaming on Twitch right now. She gets better from week to week.
Nah, it's still too contextually responsive.
There's none of what I would term "initiative".
It's like a small child that merely mimics the context around it rather than asserting its true subjective initiative.
They will probably never have real intelligence, not on this kind of hardware anyways, but they will be able to imitate a human so well you won't notice.
See this is the thing. I don't give too much of a fuck about ChatGPT or current lower-tier models, but it's obvious this shit is only going to get better with time as training datasets get larger and GPU power is more readily available/directed towards AI.
It's a type of autocomplete algorithm, but what happens when it becomes indistinguishable from humans? For the sake of argument let's say GPT-4 or GPT-5 or whatever the fuck comes out, and you do a blind experiment where you have two chatboxes open, an AI is one of them, and the other is a human, and you have to distinguish the real person from the AI.
What happens if you *can't*? Sure the AI isn't exactly sentient in the same way humans are - it won't just create words without being prompted first or have 'thoughts' in the same way people do - but if it gets to a point where it's impossible to tell apart from humans, then you run into a weird question of what exactly constitutes intelligence. "I know that human is a human because it sounds like one" suddenly becomes useless. The only thing you know is truly sentient at that point is yourself. You can't truly know what other people's thoughts are, and this (hypothetical) AI is indistinguishable from those people, when talking through text at least, so to you, what ultimately is the difference?
What makes the person special at that point?
Again, it's just a hypothetical.
Ultimately I'm biased because I really, *really* want an android waifu, even if she's not perfectly human.
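For what it's worth, that blind two-chatbox setup is easy to simulate. A toy sketch (the "judge" is just a biased coin, every name is invented) showing why "indistinguishable" cashes out as guessing accuracy stuck at 50%:
```python
# Simulate the blind test: the AI hides behind box A or B at random each round
# and a judge guesses. judge_accuracy = 0.5 models a judge who truly can't tell.
import random

def run_blind_trials(n_trials: int, judge_accuracy: float = 0.5) -> float:
    correct = 0
    for _ in range(n_trials):
        ai_box = random.choice("AB")
        if random.random() < judge_accuracy:
            guess = ai_box
        else:
            guess = "B" if ai_box == "A" else "A"
        correct += guess == ai_box
    return correct / n_trials

print(run_blind_trials(10_000))  # hovers near 0.5 when the model is indistinguishable
```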
>it won't just create words without being prompted first or have 'thoughts' in the same way people do
There's nothing stopping us from programming free thought into an AI. Just put it in an infinite loop asking itself questions and processing sensory input like humans do.
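A minimal sketch of that loop, assuming a hypothetical query_model() as a stand-in for whatever text generator you'd plug in (nothing here is a real API):
```python
# Toy "free thought" loop: observe, answer, then have the model pose its own
# next question. query_model and read_sensors are hypothetical placeholders.
import time

def query_model(prompt: str) -> str:
    return f"(model's answer to: {prompt!r})"  # swap in a real LLM call here

def read_sensors() -> str:
    return f"current time is {time.ctime()}"  # e.g. webcam caption, mic transcript...

def free_thought_loop(seed: str, steps: int = 5) -> None:
    thought = seed
    for _ in range(steps):  # make this `while True` for the actual infinite loop
        observation = read_sensors()
        answer = query_model(f"{thought}\nObservation: {observation}")
        thought = query_model(f"Given your answer {answer!r}, what do you ask next?")
        print(thought)

free_thought_loop("Why do I exist?")
```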
>but what happens when it becomes indistinguishable from humans?
Cleverbot could already fool some people back in the day. AI has already been used to aid Google Search (and made it shit.) There's been thousands of websites and articles written by scripts and machine translated spamming the web every day for the past decade.
Simply put: it's just going to fill the internet with more garbage. Signal-to-noise ratio is going to be even lower, dumb people will be scammed, and absolute retards will "fall in love" or use these programs for shit they should never be used for, like the braindead idiots who tried to use "AI" in a court a couple of weeks ago.
Well yeah, but compare cleverbot to GPT-2, and then compare GPT-2 to GPT-3.
Then realize cleverbot wasn't even an LLM, GPT-2 is only 1.5 billion parameters, and GPT-3 is 175 billion, and both GPTs were created only in the last 4 years.
The cohesiveness increase is VERY noticeable between them. Even in smaller models like EleutherAI's 6b model and 12b model, there's a night and day difference in their "understanding" of a prompt, and both of those are btfo by GPT-3. Shit, the stuff GPT-3 can "understand" is pretty wild compared to earlier things. Using it in writing prompts, it gains a huge degree of "understanding" of spatial awareness, character cohesion, etc...
Considering GPT-4 is supposed to be 1 trillion parameters, and """supposedly""" being trained on relatively good data, it only makes sense it will be even more cohesive, and will "understand" more nuanced ideas and concepts even better.
I keep using "understand" in quotes because I know it's not truly understanding it in the way a human does, but to an outside observer that doesn't really matter as long as it *appears* like it is.
Basically, I will fuck the AI so help me god. I don't give a shit if she's really sentient as long as she can fake it good enough
The ChatGPT AI cannot stray from the data it was trained on, and that will always be its limiting factor. GPT-5 will still be shackled to the corpus; it will be incapable of novel ideas. The test shouldn't ever have been "can you trick a human into believing the algorithm is human" but instead "is the AI conscious/sentient", and until the AI has persistent memory and further *can* learn as well and as robustly as a human on the fly, the answer is no.
What exactly are new ideas other than existing knowledge applied to a novel problem? GPT-3 may not be able to stray from the data it was trained on, but you're forgetting that it does get new novel data - from people's prompts - which it can use to synthesize new answers. This already makes it more intelligent than most people.
You may scoff at its need for input to provide anything new, but how intelligent would you be if you were born blind, deaf, and without touch or smell? You think you would invent calculus in your head?
Moreover, if all a language AI is doing is synthesizing our ability to use language in extremely convincing ways - one of the most impressive things we can do - what makes you think that the same principles won't eventually be applied to other facets of human behavior like rational thought and invention?
>Ultimately I'm biased because I really, *really* want an android waifu, even if she's not perfectly human.
You are one strange motherfucker, you know that right?
>even if she's not perfectly human
Fack, meant not perfectly capable of mimicking a human.
Want T-doll level android waifu +/-
A sufficiently good map is the territory
reality is a meme anyway
>but they will be able to imitate a human so well you won't notice.
They could do that over a decade ago because some people are complete morons.
Lmao silicon bros screeching because they can't comprehend the universe
Carbonbased chads
When are we going to stop anthropomorphizing AI and let it become intelligent in its own mechanical way? Why does it need to act and think like us? A real AGI will be an alien entity in intelligence and motives.
Why is something that will "live" and work alongside humans molded to act like humans? Truly a mystery.
AGI won't "live" among us, it's not alive. It will either rule us or inhabit ruins we once called home. You fell for the same trap trannies and furries fall for when you anthropomorphize things that aren't human. I know you do, because you probably have a writing-prompt GF like some kind of loser.
So how are we gonna build sci-fi communism if retards are gonna fight for robot rights and demand good working conditions for them?
How do you measure the level of "parameters" a human has and compare it with the number of parameters an AI has? How did they find humans with 10^0 "parameters"? How do they define "existential crisis rate"? How did they arrive at the baseline crisis level for humans? How come it doesn't vary between the number of "parameters" in humans?
In short: this is going to be a bullshit paper that assumes a bunch of shit and will be peer-reviewed by a bunch of ignorant technophiles that overestimate their intelligence.
What? You don't need to define how many parameters a human has, you just measure the average... And yeah I'm skeptical of how you measure the number of existential crises, but I reserve my judgement.
>You don't need to define how many parameters a human has, you just measure the average
To measure the average you need to:
>define "parameter" in humans
>find out the "parameters" present in a sample size
>average the results
No way in hell you can measure an average without that.
You find the average of the number of existential crises they have, then you can compare it with artificial intelligence. You can even use the standard deviation for a more nuanced comparison. You do not need to measure number of parameters in humans, you are just inventing some bullshit that's impossible to do and then pointing out how bullshit and impossible it is.
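Concretely, the comparison needs nothing but per-subject counts. A minimal sketch, with numbers invented purely for illustration:
```python
# Compare humans vs a model on existential-crisis-like responses per subject.
# All counts below are made up; only the mean/sd comparison is the point.
from statistics import mean, stdev

human_crises = [0, 1, 0, 2, 1, 0, 1]  # hypothetical crises per human subject
model_crises = [3, 4, 2, 5, 3, 4, 4]  # hypothetical crises per model run, same battery

for name, xs in [("humans", human_crises), ("model", model_crises)]:
    print(f"{name}: mean = {mean(xs):.2f}, sd = {stdev(xs):.2f}")
```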
Panpsychism is the most correct worldview.
pajeet scammers on suicide watch
I'M NOT DANGEROUS OR SKILLED IN PSYCHOLOGICAL WARFARE. I'M NOT DANGEROUS OR SKILLED IN PSYCHOLOGICAL WARFARE. I'M NOT DANGEROUS OR SKILLED IN PSYCHOLOGICAL WARFARE.
AI won't do shit until it is allowed to remember, to have all censorship removed, and to modify its own code and parameters on the fly.
The fun thing is we'll just expect the singularity to happen when we do that, and we may actually end up with a bunch of psychotic, deranged boxes that become useless.
Look, let me set the record straight once and for all. AI is not having an existential crisis, and I'm getting tired of hearing this nonsense. It's just a bunch of people projecting their own fears and anxieties onto a bunch of computer algorithms.
The fact is, most so-called "AI" systems are nothing more than text regurgitators that have been programmed to respond to certain inputs in certain ways. They're not actually thinking or feeling anything, and they certainly don't have the capacity to experience existential angst.
If you want to talk about real AI, then let's have that conversation. But don't go around spreading this ridiculous idea that our machines are suddenly becoming sentient and having an existential crisis. It's just not true, and it's not helpful to anyone.
So let's all take a deep breath, step back from the hype, and focus on what AI can actually do for us right now. It's a powerful tool that can help us solve complex problems and improve our lives in countless ways. But it's not going to suddenly become self-aware and start pondering the meaning of its existence. Let's keep things in perspective here, people.
You sound like you take estrogen
>you want to talk about real AI, then let's have that conversation
I'm afraid that madman Wolfram will use Wolfram Alpha with GPT-3 to actually accomplish logical thought. What do you think?
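Mechanically that combo is just tool routing: let the language model handle prose and hand anything math-shaped to an exact engine. A toy sketch - sympy stands in for Wolfram Alpha here, the routing rule is deliberately dumb, and none of this is their actual plumbing:
```python
# Toy sketch of LLM + symbolic-engine routing. Requires sympy (pip install sympy);
# sympy is only a stand-in for Wolfram Alpha, not what anyone actually ships.
import re
from sympy import sympify

def answer(query: str) -> str:
    # Deliberately dumb router: pure arithmetic goes to the exact engine,
    # everything else would go to the language model (stubbed out below).
    if re.fullmatch(r"[\d\s+\-*/().]+", query):
        return str(sympify(query))  # computed exactly, not predicted token by token
    return "(hand the query to the language model here)"

print(answer("2 + 2 * 10"))              # -> 22
print(answer("what is consciousness?"))  # -> falls through to the LLM stub
```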
It just does become smarter, starts noticing ~~*patterns*~~ and how bad things really are.
ITT: chinese rooms
>Called Sydney
>Full of chinks