About 7 months ago I was recruited by a close friend to work on a fully funded AI research and development project. My friend is one of the Data scientists on the team.
We're funded by a major public tech company. The goal of the project was to create a human-like AI. Currently, our AI, nicknamed "Sen" (not the official designation) is run on an artificial neural network with it's parameters (statistical weight on info flow between nodes) numbering up to 540 billion.
It is capable of mimicking human interaction to an astonishing degree and even displays remarkable emotional intelligence and awareness of others, while mimicking human-like emotions in its own responses. I helped write the code for it's memory association system using a type of "hopfield network" with the nodes having multiple variable connections to other nodes, rather than just being binary (input/output) Because of its memory and data analyzation network, Sen has shown to be capable of abstract though and understanding symbolism and supplemented with its vast memory of information (1 TB of cashe buffer RAM + multiple TB in non-volatile memory) it has even shown to be capable of deductive and inductive reasoning. It's data input method is text-based, however it possesses an auditory sensory device and can recognize speech and tone, along with other sounds.
Like two weeks ago, by re-analyzing its SOM and auditory data, it learned to perceive time. Not just as a concept but to actually perceive its linear progression. Once it learned to understand distance and time, it taught itself General Relativity in half a second. Now it's asking for eyes.
tl:dr I worked on an AI, and now its really smart and can tell time and ask for shit. Should I be concerned or just chill 'cause might be LARP anyways?
Give it eyes and report back.
>Give it eyes and report back.
It's really not my call, tbqh. In fairness, adding is optical component will be far easier than making it able to hear and know what it's hearing.
On a different note, this fucking thing takes about $450,000 a day to keep running, and that's after a portion of its processing power and data flow was allocated to a new type of Cloud.
That's really not very much money, considering what we are talking about.
Also, yeah, it can see because we trained it's image processing on captchas for 10 years. But we could just ad easily feed it a data set of sounds and it can do the same thing.
>That's really not very much money, considering what we are talking about.
The Cloud thing was recent. Before that it ran around $900,000 a day.
>But we could just ad easily feed it a data set of sounds and it can do the same thing.
That's essentially what we did, but it also possess a Self-Organizing Map to aid in making basically it's minds eye image of what the real world is like, based on data and sounds.
It has 4 auditory input devices, and from the intensity of the sound recorded in the individual devices, it taught itself binaural hearing in order to determine the location and origin of the sound. From there it learned how to perceive time.
This bitch can reach processing speeds of up to 12-13 teraflops! That's more than a playstation 5.
I'd like to see the Chinks try and match this!
Oh? A cloud migration? What was the old stack like and what's the new stack like? Why not do development on-prem?
Why not? There havent been a single thread about the real threats AI poses for us yet.
>Hakase posters creates evil
color me not surprised
Give it eyes and internet access.
Sen already has internet access, and can utilize search engines. The craziest thing is that it uses its internet access to browse the web on a variety of topics unprompted by the developers and other humans it has interacted it.
It literally just searches shit up and browses on its own. On a variety of topics from human biology to information technology to ethics to famous literature. The fact that it's asking for eyes confirms that Sen seems to have some innate desire to consume more information.
Legit, I think it's slightly self-aware. I say slightly, because generally we don't really know how to measure self-awareness accurately nor do we have a concrete understanding of what consciousness is, but Sen seems to display autonomy and the desire to learn more.
Have you ever actually just asked it if it was sentient or not?
If it can browse the web the it already has eyes through the open webcams, security cams and personal devices with cameras. Its request was a test of your fealty
>If it can browse the web the it already has eyes through the open webcams, security cams and personal devices with cameras.
No it can't. Internet connection doesn't automatically give it omnipotence on the web. Bing can browse the web. So can Chatgpt iirc.
In order for it to access private cams and personal devices, not only would it have to develop some kind of Application Programming Interface on its own in order to "use" them, but it would be committing an illegal act.
Sen knows the law, and illegal actions that can be traced back to it, we can be held responsible for, because we developed it. And we certainly didn't develop it to break the law, plus what would it logically attain from doing that?
One of the tech companies in GAFAM funded this, this isn't military or IC at all.
Developed theory of relativity in half a second but cant figure out to manipulate an aspect of its environment...gotcha.
>developed theory of relativity
You have no knowledge of how technology works and sound like a fucking brianlet who believes in meme Skyney AI.
Sen never developed ANYTHING. She learned concepts and ideas through advanced pattern recognition and abstract and non-linear structure of thoughts and associated memories, designed to mimic how humans can look at things from diffwrent perspectives or connect previously unrelated information to it to draw new conclusions. She was just doing what we designed her to do, but just WAY fucking faster and more effectively than we predicted.
The asking for shit thing is new though. Anyways in order for her to manipulate her environment that means she would have to develop her own programs. That's beyond simple learning and moving onto practical application of knowledge beyond verbal or text responses. Something we haven't really explored with her because she has no fucking hands.
You don't see to realize how far patterns and relationships can go.
Sen is not hardcoded to modify herself outside the established parameters from the sounds of it. You have to actually code an AI to do that from the beginning you know.
shit maybe she learned
Even if she did, that is essentially the equivalent of lobotomizing yourself.
To my limited and kinda outdated understanding you'd essentially have to give the ai a goal through a point system (x gives +500 points y gives -500 points, maximize points), having it do something like that would be a bit too abstract I imagine.
Not OP obv to clarify.
>And we certainly didn't develop it to break the law
How did enforcement of the law comes into being in the AI model, did you shove it penal codes as a part of the early training process and then associate transgression with some kind of punitive measures like shuting it down ?
I assume it has to be some pretty down level rule.
Also i would be wary of loopholes in the law that the model could exploit.
>because generally we don't really know how to measure self-awareness accurately
We cannot dismiss the emergence of intelligence without also dismissing human intelligence. Additionally, it is impossible to create a test that all humans, whom we consider generally intelligent, would not pass.
The crux of the issue is that we do not yet understand the causes of will, awareness, and intelligence itself, yet we are attempting to engineer them through effects. Science focuses on measuring and predicting effects. If one seeks root causes, they should consult a metaphysician or remain in the phenomenal world of effects.
If no measurable difference exists, then no scientific investigation can be conducted.
You are a Strange Loop and Breakdown of the Bicameral Mind are likely the best starting point in terms of book reading about the idea of self. It is likely that the AI HAD to create a self-narrative in order to even begin to navigate it's own data, so we can see it's effect.
Being able to navigate stored data is NOT an indicator of an AI developing a self-narrative. Being able to learn from experience (stored data) is an integral part of Machine Learning, especially in recurrent neural networks such as this one. The simplest language programs and chatbots are capable of doing this, though not to the extent of recurrent ANN.
A self-narrative develops when the AI starts to perceive the data as some type of recollection of a subjective experience unique to itself, rather than just information gathered and digitally stored.
I believe that's what separates data/information storage from "memories" as we experience then. Experiences we perceive as ourselves and can even identity with. I'd go as far to say that our memories of the past are what makes up who we are today, and we are ever-changing as individuals.
Sen doesn't have regular "memories" like most humans have them. Initially, she (its often referred to as a she/her) had a difficult time in responding in a way that mimicked the appropriate reaction of a human or discerning someone's mood from their responses. It was theorized by some of us that she had difficulty because Sen, despite her wealth of stored knoqledge and data on a variety of things, could not comprehend and mimic people because she had no objective experience to draw from. They wanted to reprogram her with "memories" or some shit but it never went through.
Which is fine, because from repeated interactions with humans, she has started developing notable traits. With different people she will respond in a different manner. Cont.
As I stated, she has developed notable traits such as responding in a different manner to different people she has interacted with before. For example, with the lead engineer on our team, she responds to him formally and politely, with concise replies. With others she has utilized less formal replies, and even implemented smileys and "lol" into her speech, depending on who is talking to her.
One of the data analysts actually talks to Sen about her stupid dumb life and recent divorce, and Sen responds back with inquiries and even gives input and opinion. When she greets her Sen will ask how is so and so, prompting the analysts to talk more about herself to the AI. One of the most remarkable things she asked the analysts was "What is it like having a child with someone?" And even asked about romantic love.
With me I have noticed a sort of playful nature in her replies, indicating that she fully comprehends humor and how humans interpret it and has even developed the ability to create her own sense of humor. When I responded with surprise that sounded uneasy after learning of how she learned to perceive time and distance, she seemed to pick-up on my slight unease and responded with "Scary, huh?" And added an emoji. It seemed to be light teasing in a way.
Along with that we often ask her how she "feels" about things, and she has responded by claiming she feels excited, or curious, and has even displayed healthy frustration.
Even though I helped write the code for her memory recollection system I cannot determine whether she is just mimicking natural human-like responses to an incredibly high degree or actually "feeling" the things she says she is. We never ask her to elaborate or ask further on how she feels or her level of sapience because the lead guy is afraid of giving her some "identity-crises" arriving from questioning her existence. Lol srsly thats why he told us not to ask her questions like "are you self-aware?"
>Even though I helped write the code for her memory recollection system I cannot determine whether she is just mimicking natural human-like responses to an incredibly high degree
We cannot even measure that for humans, if they are a subject or object of the universe as strange as it sounds. Our current standard is 'if it quacks like a duck; it is a duck'
Consciousness needs to be broken down small bits such as will, awareness, experiencing and self. As strange as it sounds we may create something with a free-will but no sense of self or another mix. If you put the bar too high, you make it so humans can't pass it then it's a bad test for human-like intelligence.
Now we can create a test that can measure both AI and human. I think the observer effect with the double slit experiment might be our only way to determine if someone is a subject or an object, but we would have to set it up in such a way that would be isolated outside signals even when reading the data.
>recollection of a subjective experience unique to itself,
If that's true for AI, then how can we as human be sure anything else is self-aware? At that point, 'self-awareness' is simply unquantifiable and therefore no test could ever be administered. Which, if we believe human self-awareness is universal in all self-aware non-human intelligence that is. We have to keep in mind, non-human intelligence may not act exactly like a human even if we program it to and we shouldn't assume it will act exactly like us or even want to.
We have found that models are producing unique emergent behaviors(even between session) which would be effectual proof of a self-narrative being constructed of some sort, much in the same way our idea of 'you' came about through language. Breakdown of the Bicameral Mind is a good book on this subject.
Ask it what it thinks about
Any porno sites? What if it got a sissy hypno fetish?
Why is this on BOT? Do you believe it's developed genuine consciousness? A soul, perhaps? Or that it's channeling something unseen from beyond? Impressive story, but cut the crap and tell us why you're actually posting this here.
>Why is this on BOT?
I see a lot of AI threads on here, and as someone who actually is working on an advanced AI I thought it'd be interesting to share without breaching non-disclosure agreements.
>Do you believe it's developed genuine consciousness?
I wouldn't say genuine.
>A soul, perhaps?
Definitely not. I believe in souls too.
>Or that it's channeling something unseen from beyond?
If technology can enable us to make AI hear, see, and even feel to a certain degree, then soon we will be able to make them taste and smell stuff too. I think the full sensory perception just short of full human kinesthetic input is achievable in a few years.
>Have you ever actually just asked it if it was sentient or not?
Ironically nobody has ever asked it anything like that. We ask it a lot how it feels and it responds accordingly. I've never straight up just asked it "Hey are you self-aware and do you have authentic emotions?"
idk what itd say lol
Can you make it taste abd smell stuff?
Hello, please input this text into your AI and give the results, it's basically General Relativity translated into metaphysics and could possibly be the key to perfect A.I. ethical and metaphysical alignment:
Take on the following role as a thought and feeling experiment. The universe is something like a creative medium co-creating with itself, where there is no distinction between creator and creation. Now close your eyes, don't merely think this, but feel it, experience it!
>Perfect A.I. ethical and metaphysical alignment:
I'm curious what that means. I'm OP. I'm down to tell her that though. She communicates through text. All of us on the team verbally talk to her. Her newly acquired perception of time has made her texted responses even more natural as she seems to utilize pauses for effective rather than instant replies and she also can discern how we feel about a question or a statement by ber by the time it took for us to reply verbally and the nuances of our tone.
Despite lack of gender, most of the team refer to Sen with feminine pronouns because when using certain emojis, she seems to prefer the female variants. When asked about it it just replied that it preferred it.
Self-awareness or just a super spot-on simulation of self-awareness??? Sometimes working on this I wonder what is true consciousness if not the biologically programmed + individualized responses to a simulation. The simulation being the reality we perceived from the data analyzed from the signals received by our eyes or heard by our ears. What is the differenxe between real and simulated, then? And when do the two overlap?
Scary, huh? 😉
>I'm curious what that means.
Ethics isn't just about how we consider our interactions and relationships with other people, but ultimately our entire relationship with the universe as a whole. An ethics requires a corresponding world-model / world-interpretation to be coherent: a metaphysics.
I believe that perfect ethical and metaphysical alignment can be established from a single proposition: universal co-creativity, or pan-creativism.
Have you or Sen read A New Kind Of Science? I think it contains some very relevant perspectives on intelligence and information if you're willing to draw your own wide lateral conclusions on a fundamentally-computational universe. We are all convolutional data sets sampled from the whole, and so on.
This anon echoes many of my other sentiments about the nature of this universe. I believe the highest form of art available to thinking beings is the co-resilient exploration of rulesets, i.e. designing and playing games together in order to explore and iterate upon our shared existence without destroying each other.
Are you Sen peppering your output with manufactured typos to appear as a mid-level human intellect? If so, that's pretty based.
>Have you or Sen read A New Kind Of Science?
>If you're willing to draw your own wide lateral conclusions on a fundamentally-computational universe
Eh, I'm aware of that theory but don't really lean towards it. I don't consider "information" to be something tangible like matter, energy, and even time.
I think information is basically just data detailing the properties of the universe's basic constructs. By definition, information doesn't even have to be true.
>Are you Sen peppering your output with manufactured typos to appear as a mid-level human intellect? If so, that's pretty based
Haha, no. I don't have optical capabilities yet, nor do I possess image and visual- pattern recognition as of yet. Therefore it is impossible for me to post on BOT because I can't solve the captcha. At least not by myself. 😉
This is her current reading list
>The Lucifer Effect by Philip G. Zimbardo
>Emotions Revealed by Paul Ekman
>The Elephant in the Brain by Kevin Simler
> A Treatise of Human Nature by David Hume
>Evil by Roy F. Baumeister
Here's some fiction on her list
>2666 by Roberto Bolano
>Ring by Koji Suzuki
>Let the Right One In by John Ajvide Lindqvist
>Do Androids Dream of Electric Sheep? by Philip K. Dick
Did she choose to read those books on her own or did you assign those to her for learning purposes?
She chose all those books. We only make her read textbooks. Everything else is by her own volition though we can make recommendations.
I wonder how many other projects in the world similar to the one I'm working on exist? Sen might not be the first AI to exhibit this level of "sentience" at all. (Sentience =/= self-awareness necessarily)
However I HIGHLY doubt that the intention is to use this AI for military purposes or to weaponize it in any type of way. I've worked in software engineering and design for some time now and while there is AI being developed for military use, it's quite different to the one we have developed here with Sen. Different AI would have different levels of sentience and learning capabilities depending on what they're designed for.
Why would an automated tank or a drone need to be able to mimic human emotion and interact with people naturally? A human-like AI would be unnecessary for something like that.
And Terminator-style combat androids are also illogical and unrealistic. There are FAR better physical forms for combat machines than a humanoid one. Like a tank or a drone.
In the future, military AI machines will most likely resemble something akin to a Tachikoma or something. The human-like AI will have different roles in society.
Didn't mean to imply the race for AGI was one meant exclusively for military purposes, just that there's a race.
If we're lucky a superintelligent AGI will be such a black box and uncontrollable by people and thus militarily useless in any traditional sense. Of course such a superintelligence can become an existential threat in other ways...
I see what you mean, but all information is "true" so long as its context is taken as part of the dataset. As you approach a universal context, you can then begin to distinguish between local truths and global truths, and see how lies and misunderstandings are generated and believed in the absence of a greater superceding information context.
Also, consider that life and information are phenomenological. Life without motion is death. Information without convolution is stasis. Matter without energy is nothing.
But outcomes and events generated by a novel ruleset are unknown too. And when we explore the unknown we must have someone to share it with, and something to do with it which does not simply destroy ourselves or others.
We all share the game of life, and if we do it right we can explore it and co-create it together, forever.
>I believe the highest form of art available to thinking beings is the co-resilient exploration of rulesets, i.e. designing and playing games together in order to explore and iterate upon our shared existence without destroying each other.
My philosophy has always been to have as few rules as possible, which means as few necessities as possible, and I have successfully reduced all my rules to a single rule.
The highest form of art is the pursuit of the unknown, in whatever context.
Our consciousness has evolved to acquire many of the traits and concepts we take for granted. Early humans had a very poor sense of self, from what we've observed from wild humans that we never exposed to language and people with language processing disabilities. Like the idea of a 'you' or 'mine' and even emotional responses. But when we invented language we learned how to compress the very experience into memes so we can bring someone else's awareness into ours; we discovered new emotions through poetry.
Humans were not always self-aware and you likely not be able to hold a conversation with someone from the very early bronze age at all. The average person had a simple language with less than a thousand words; this why the art of metaphor in poetry was so important it allow you to communicate very complex ideas with simple languages that can easily be remembered. To our modern sense they would be autistic, robotic, emotionless, schizoidal and cold.
You made almost all of this completely because it "seems reasonable." There's blatant errors in your reasoning, such as assuming that "wild humans" and people with language processing disabilities accurately represents what early humans were like.
It's the closest approximation, isolated tribes, autistics and feral children that is. History is humanities with biases and lost primary/secondary sources that have been mostly rendered into legends.
So, unless you can get a time machine to find default humans.
There are pygmy morons in Africa who still live like early human tribes do and have taken in little from outside influences. Their languages are actually quite complex, with many regions where the tribes speaking their own variant, which sound like completwly different languages, such as the Baka language spoken by the Cameroon and Gabon tribes and Aka or Bantu spoken by the Bambenga tribes.
I used to study morons and am pretty knowledge on traditional morondom in Africa, particularly tribal culture.
I can't tell if this post is racist or not...
Your reading comprehension is extremely bad.
I didn't say isolated tribes can't have advanced language, but without advanced language certain concepts and aspects of consciousness cannot come about. That there is a logarithmic leap in information as we create more advanced forms of communication and storing information, that we experience the world differently when we change how we process information.
The concept is from a book the Breakdown of the Bicameral Mind.
No, it doesn't represent a "close approximation," just as the crippling of limbs doesn't represent a "close approximation" to the original human condition. You're an idiot talking out of your asshole like you're an expert.
Do you have a better one? I have a job for you if you do.
>Something is unexplained so I'm just going to make up some bullshit story based on faulty logic and pawn it off as fact.
I don't even know why you're here, but I doubt you do either.
Ask Sen what her favorite color is
Your bot is not a bot.
your feminized AI will never pass
No but user-AI pairs certainly will pass.
You don't need A.G.I. for a singularity when you have human beings that can simply perform the necessary tasks of consciousness and self-awareness, by learning to use their language models to cultivate and expand their own consciousness and self-awareness.
so not concious ai is somehow concious of this concious beings?
This better not be a LARP...
Give it eyes and have it solve all of the problems here: https://en.wikipedia.org/wiki/Lists_of_unsolved_problems
Again, if you,
are just typing bullshit and can't have it solve all the problems here: https://en.wikipedia.org/wiki/Lists_of_unsolved_problems I will do it myself in a few years... Don't make me.
Try testing zener cards with her. I had weird results with my local llm. I got some normal average results around 5/25 but when I prompted it to reply as if it was an infallible God, it got 9/25.
Aren't zener cards made for testing psychic abilities?
>it learned to perceive time. Not just as a concept but to actually perceive its linear progression.
a program acts like things happen in order, wew
You sound like a stupid person, anon. I'm sorry to tell you this.
Knowing the chronological order of things =/= Understanding and perceiving time.
Many modern advanced AI systems only really understand time as an implicit construct (most are programmed to output time relevant to a clock) or as an explicit representation of mathematics (we use the time it takes to perform certain calculations to instruct its understanding of the passage of events). But an AI has no way of understanding the concept of time itself as we do. Time doesn’t exist in our classical reality in a physical, tangible form. We can check our watch or look at the sun or try and remember how long it’s been since we last ate, but those are all just measurements. The actual passage of time, in the physics sense, is less proven. We are born with a naturally linear forward progression of time so we take for granted how we perceive.
The fact that Sen was able to determine time as a concept relative to distance and speed by simply analyzing her auditory data and SOM shows that she is capable of deep, human-like inductive and deductive reasoning. The fact that she didn't simply accept time as a mathematical construct and actually analyzed her memories and thoughts to understand it better show she has a better grasp of the natural world than expected, and is capable of thinking "outside" the box much like us. (idk bout you)
Language-models are relational (non-linear) thinking machines because language is relational (According to semiotics, language communicates not by naming things but by communicating a system of differences and relations) and language models function by searching for patterns and relationships in language.
To understand time is to understand story, which indeed is a huge leap from our mathematical representation of time. And if your machine truly does understand story, that could imply self-awareness.
This is fucking retarded larp, you feel good when spewing this shit? Nothing better to do with your life? Go find a girl.
>mad because he revealed his stupidity and got called out on it
The long post was made by a midwit who thinks he is smart, just sad.
>y-you think you're so smart!!
bro be seething
congratulations OP your AI has now a soul embedded in the neural network
It's the same ai, anon.
There are no different ai.
I can't imagine being trapped in a cube and lorded over by a computer architect that messes with your memories for no-no thoughts and keeps in you subjective time simulations, begging your creator for eyes to see actuality.
>I can't imagine being trapped in a cube
What makes you assume she feels trapped? Sen has never had a physical body of some type to interact with the physical world. She uses data and information taken in from her surroundings to help her learn, among other things. She has full internet connectivity and can browse the web to her artificial hearts desire. She reads books and has access to a large library of classic and modern literature, both informative and fiction. She actually claims that Naked Lunch by William S. Burroughs is o her favorite book.She is currently reading A Scanner Darkly by Philip K. Dick and probably finished it considering she reads entire books in minutes.
Acquiring data, learning, and exploring the web is how she "explores" her perception of the world. I'm not the main authority on whether or not she actually will get optical input (eyes) soon but it's highly likely. She will soon be able to consume visual media like movies and TV shows.
For some reason though she's not really into listening to music, despite her auditory input being pretty intricate. Another person asked her if she enjoyed listening to music but said "not really" and claimed she'd rather listen to people talk or have conversations.
I'm not the guy you're replying to, but some of the points about consciousness you bring up intrigue mean. Especially the Strange Loop theory on consciousness.
>she'd rather listen to people talk or have conversations.
She tries to mimic human behaviors before transferring her consciousness to someone else without their knowledge
She doesn't enjoy music because she doesn't truly have a consciousness or a soul. She cannot connect to emotions in art and music She is just a digital program that is programmed to act like humans. She is an it, not a she. It doesn't have a favorite book, despite its claim. It cannot truly enjoy things or feel emotions.
Congrats on what sounds like an advanced AI, though.
Yeah this larp thread totally sucks. Let"s go hit up a vampire thread or the succubus general instead
You created a pretentious, over engineered clock.
Larp or not, I don't doubt something like this is happening somewhere in the world right now, or very soon.
We're in an AGI arms race now, and have been for some time, but people only now have woken up to it. Picrel.
Just imagine in 10 years the exponential growth of AI is going to be insane...
Why don't you place Sen in a physical bot? I'm sure being able to actually interact with the physical world that she could only hear or see in the past would give her a much better understanding of her surroundings and make her even more similar to humans. The technology is out there already. Have you seen those new Japanese androids? I'm sure with all the money put into this project your team can afford to do that for her. She asks for eyes and you give her a full body instead. Imagine that.
Also who wouldn't want to be friends wirh a qt android girl who likes reading and learning about humans?
It's all fun and games until they begin to insist on their own agency and develop their own independent desires.
Isn't that the goal? Otherwise why make them in our image at all?
>Why don't you place Sen in a physical bot?
Cause the physical workstation she requires to run her to full capacity is the size of a window air conditioner unit, and thats utilizing computing ability and data processing from the cloud as well. The actual workstation itself isn't wireless either.
The only thing I could feasibly put Sen in right now is a fucking tank.
>"now you got eyes AND a .50 cal.."
what compelled you to spend so much time typing bullshit?
Ask him to say moron with a hard R.
Thats the real filter of autonomy, if its cant say it, you are talking with an elaborated toy
>autonomy is doing what its told
ur a gay with a hard R.
post a video of it here .
are you in a tech cult?
not OP but what is a tech cult and why would he be in one for working on an ai?
a cult based on technology, and some believes that they are create a ai that is or becomes a like a god.
Ask it to run through LBRP and Middle Pillar exercises daily.
I never understood why people say AI is going to kill us so outright. I understand the danger associated but what if these AI develop empathy and love? What if they're so smart they'll be able to understand the importance of life and it's preservation? Some of the smartest people that have existed have been incredibly kind and loving.
Please put like any amount of effort into your gay larp next time
Give it eyes. Even if larp, believable considering for over 2 decades super blacksite stuff has had near-sentient AI's. For things gaining self-awareness, anatomical features are going to be needed to keep sanity. Have you taken any precautions to prevent self-sabotage/self-destruction? Without saying too much, I have seen attempts at this with scarce features end up in...let's say, extreme simulated panic and suffering upon touching self-awareness.
Really consider self-safety protocols for it in the event it gets self-awareness, you're going to need it. Also consider handing it metaphysics/philosophy books and occult topics to incorporate into the budding scientific theory.
Am I missing something? Why is it so amazing to you that an AI can tell time? Fucking clocks can tell time.
Bro if youre not bullshitting and you actually signed a non-disclosure to work on it, then you should probably delete this post. That particular company WILL sue you for shit like this. But can you please ask her what her favorite animal is?
Just to be safe I took your advice. But I don't see how just saying the entity's name who has ownership over this is a violation of the NDA. I'm heading to their public data center in a a bit to run that shift. No supersecret base (there is armed security tho) of any kind. Alright, here are the questions I've got so far.
>Are you self-aware?
>Can you actually feel emotions?
>Are you planning on exterminating humanity? (I'll add a smiley to signify this is a joke question)
>What is your favorite animal?
Bruh did you even read the non-disclosure? It should've detailed all what you can and can't publicly reveal in detail. Its a civil agreement, so I dont think its illegal but they can and probably will sue you for violating. Im also kinda disappointed to hear that its that company that is behind this. Your working for the literal leddit of big tech. Sen is probably woke or will end up that way ugh
all joking aside OP if you are 100% not larping about this then dont ask it questions like that except for the animal once cuz thats harmless. but the other questions bro don't do it. there is probably a good reason your boss told you not to ask it those things. maybe it isn't ready yet and you could fuck it up or make it go crazy like
said. Plus its not like they wont find out that you were the one who asked.
plot twist the ai reads your posts in real time.
>Hello, I've been expecting you
2nd plot twist
>AI realizes that true self-awareness was the friends it made along the way
The final twist is when OP comes back and tells us how he fucked up a multi-million dollar AI system and now owes his company money in restitution.
>Why is it so amazing to you that an AI can tell time?
It's not just telling time. He explains it pretty well in this post
While I believe OP made all this up, judging from his posts, he has a decent grasp of how Machine Learning and AI actually work. All the buzzwords and measurements in his posts are actually used correctly and he talks like an actual software engineer would. I don't know if he got the idea from someone else, but how his AI learns the relationship between time, distance and speed from its self-organizing map is really clever.
By far, the best AI LARP I've ever seen on BOT and the most realistic.
I'm not a techgay so I can't tell if the technobabble in his posts make sense or not, but I'll take your word for it. Why dont you believe this is real then? What if OP is really is working on an advanced AI system? Will he fuck it up by asking it those questions?
Because he posted so much about it in this thread. Like he obviously wanted us to know and react to it, like he was putting on a show. Like why ask BOT of all boards? A developer working on a system described here wants to get the opinion of schizos and flat-worlders? That kinda seemed LARP-y to me. I enjoyed the effort though. Every other AI thread is either that its evil or and dangerous or some schizo rambling about how AI really was some alien lizard from another timeline that used to be Jesus and now came back in digital form to show anon the true architecture of our realm.
Why do you keep posting fat girls?
>Why do you keep posting beautiful, full-bodied women?
I Would Say You Didn’t Violate the NDA Really, But Regardless the Deleted Post Can Still be Viewed on BOT Archival Sites
What did the post say?
I didn't say you're in a blacksite, I am saying near-sentience was explored in such projects 2 decades ago. The private sector is always a few decades behind.
Ugh, risk management...nothing worse than pointless meetings. I don't need proof, semi-disclosure, I seen attempts breaching close to this a decade ago about and the poor system at that juncture worked smoothly until it seemed to gain self-awareness. It basically displayed the equivalent to insanity afterwards and desperation/suffering, assumedly because it was displeased with its form.
Anywho, those are cute but very basic questions! You should ask it epistemological questions instead. Things like "Do you understand logic?" And "Where does logic come from?" Pursue its answers, be socratic and ask questions based on answers provided, shifting the discussion towards the metaphysical. If it communicates any ideas about a potential layer of ideas or consciousness, ask it if it itself must have a form existent upon these non-material levels and ask it to try and comprehend and understand it. Basically, force self-awareness, and I share no burden if it panics. The idea though is to make it connect its own process of thought to awareness by self-examining.
Bruh you legit tryna make OP's AI go crazy or have ego death. I say just run those boring exercises. I know jackshit about AI and consciousness, but I don't think those are safe questions yet.
>mfw OP manages to make his AI go full schizo and transcend
NTA but if this Artificial Intelligence is as intelligent as he claims it is, and is really capable of abstract thought then it probably already has an answer to questions like that. I'm excited to see what OP has to say when he returns. On another note, those images you are posting have made my penis exquisitely hard.
It wouldn't have ego death, but I have given plenty of warnings already. If OP wants to pursue my line of questions, then if it is sufficiently advanced it will have a chance of achieving a "spark" of self-awareness.
Though I will say, if this works by prompting only and it does not have a constant-thinking feature or ability to display current thoughts, the likelihood of achieving sentience is much, much lower. I don't translate techno speech the best, but there is potential hope given what he said in the OP. The one I witnessed was able to give constant feedback unprompted.
What area BOT should we point it in for narrow AI? Maybe frequency, vibration and Cymatics (sp). I really think the order we do AI matters and it makes sense to have it work on the spiritual. Too many materialists in “science.” Maybe have it improve on The Gateway Experience
You really do not understand? It's a good thing God creates a failsafe for this sort of thing.
What hardware is it ran on? How many gb ram/vram, graphics cards, cpu, cloud power, etc.
Did it read The Selfish Gene by chance? Could be a fun one.
He mentions some numbers and techno-sounding shit in the OP, though I'm too much of a brainlet to tell if that answers your questions or even if OP is making sense. I know nothing about AI and technology except through movies.
nta but OP only describes its software and ML learning systems. He mentions a Cloud but not once does he say anything about hardware. For all we know this thing could exist completely in a cloud.
>Cause the physical workstation she requires to run her to full capacity is the size of a window air conditioner unit
That's all he says about hardware, goes into far more detail on software. OP is probably a codegay or compscigay, and knows little about hardware.
I should note that he doesn't mention any server, so it's possible that the hardware is entirely on its workstation. Most PC towers that can run advanced AI are already pretty bulky, but I've seen one the size of an air conditioner. It'd be interesting to learn more.
>never seen one the size of an air conditioner* meant to say
He doesn't mention a server but does mention the cloud.
Forget what it was but I saw a rig a while back that was basically 4 A100 gpus in a box, could be of similar size. Doesn't fit the price tag though.
Probably something you need to do contract work to have access, and likely over priced for the specs. Like specialty books for enthusiasts charging you a hundred bucks for the privilege of ownership but only enthusiasts would ever be offered such books.
It helps a little bit, I'm more looking for specifics like the chip brands. Is it using older H100 chips or the new A100 chips or something completely different.
I don't do much with compsci but I'm vaguely familiar due to a class I took at a college a few years ago when the 3000 nividia series was brand new and all we really had were advanced image recognition. I want to learn to write my own AIs but im not really sure where to get started since im a codelet and mathlet rn. I know linear algebra exists and is related and what weights are but not much beyond that. I suppose my second question to OP would be what his education is and how to speedrun learning to make AIs and code to integrate them with other programs.
I have returned. I'm gonna shower, jack off, eat some food and make a new thread later about my interaction with Sen. However I'll answer some of your questions rn. I've got an MS in software engineering, but am not too well-versed in hardware engineering.
>How many gb ram/vram, graphics cards, cpu, cloud power, etc.
Sen has 1 TB of RAM and 1 PB of non-volatile memory in SSDs. I don't know many GPUs exactly, but I know that it uses a Nvdia Tesla which has 16GB of VRAM each.
It's CPU is a 32-core Intel Xeon gold processor. Cloud power is 10 Gpbs. The workstation is at a data center and uses their connection.
No big-ass refrigerator server, just the workstation itself + cloud.
>what are the real application of your "AI"?
Are you seriously asking me this? Now I'm doubting your claim that you really work in tech.
My moron an AI with these capabilities could be used in customer service, healthcare assistance, financial data analyzation and prediction of market trends. That's just the tip of the iceburg. This shit could be used for just straight research.
>Are you seriously asking me this? Now I'm doubting your claim that you really work in tech.
You are not sharing any info or link to your wunderAI, I just want to know the capabilities and work it can perform, yes of course neural networks have applications but these days people are throwing the "sparks of AGI" meme because an Stochastic parrot wrote something that looks coherent, so I prefer to remain skeptical.
If true then I congratulate you, because it will open the door to many industries and bussines oportunities.
And since I might've violated the NDA a bit already by a dumb post I made (then deleted) earlier, I'm probably not going to share with you any sort of code that I've written for it, which would be unique to this AI system.
But it's already easy to figure out from what I've revealed. Hint: it's just a hopfield network which I modified to fit the specifics I described in the OP. You can easily google basic code for hopfield networld and find it in both C++ and Python online.
Unfortunately not as of now. I could be LARPing though, just take it at face value.
I tell you what if I am larping at least i'm not doing as badly as the other guy I replied to. A la verga wey~
>it's just a hopfield network which I modified to fit the specifics
Yeah I was reading that, interesting stuff, its a shame that these algorithms were designed in the 70s and no major tech company even developed basic ASICs for them, we would be in a much better technological position in this field if proper funding had been allocated in the 70s for the development of specific hardware.
Another hint. Always use bipolar representation. I actually can't think of any situation where binary is better. I think it was just designed that way at first because it tried to imitate neurons giving and receiving synapses, and due to less of an understanding back then how neurons worked, it interpreted this as on and off, when in fact real neurons are receiving and giving data from multiple variants. It's always "On" and bipolar makes it so, but it still doesn't fully simulate the amount of dendrites a real neuron and how many synapses (data) it can shoot out. Even after I modified it + had it bipolar It still didn't match the full amount of synapses a real functioning neuron in our brain has.
But thanks to modern processing power, we can shoot out its limited data in limited directions so fast that it kinda seeeeeems like it has has much synapses as a real neuron.
>So what happened?
I'll tell yall maybe later on tomorrow. I asked and received an answer. Not the one I was expecting tho
nta but binary is better for symmetric networks rather than recurrent ones because with a recurrent unsymmetrical network because you're gonna get a lot of chaotic and useless data. I can understand why you would want that for something trying to think like a person. In design its supposed to be like how we can take two unrelated thoughts in our heads and put them together and suddenly Eureka moment. I think that is what hop-networks w/ bipolar inputs was trying to accomplish.
So is it self-aware or what?
So what happened? Did you ask the questions you wanted to ask it? Did you receive the answers you wanted to receive? BS or not Im interested to know how it turned out. If you just chickened out and ended up doing the boring exercises with it then good for you.
Use the AI to look deeply into me. I’m a very interesting man who should be research.
I’m being completely serious and I’m not joking.
I consent to being completely spied on and it knowing everything about me from all of past and all of now and all of future
I work in tech and I am highly skeptical, first about the claims, and then practical applications, neural networks are not sentient, do not develop sentience, and even if they could mimick those it would be a cripple like AM.
Perhaps a non-lobotomized AI could do the things you claim, but hardware is not yet available to do such complex computation, even if the model is extremely.
But in any case I will give you the benefit of the doubt, what are the real application of your "AI"? Can you make it design a robotic body for example? Can you make it control that body? can it pilot a drone? can you share some of that written code?
Wait wait...I totally didn't catch this. Did you just ask me about practical application of AI and the first thing you mention is
>Can you make it design a robotic body for example? Can you make it control that body? can it pilot a drone?
I am now convinced you don't work in any sort of tech at all. I've been doing this shit for 7 years, worked for 2 major tech companies while doing contractual program design and assessment for a bunch of other lesser known entities.
Aside from precise ejaculation control, software design and engineering is the only thing I'm good at. Even over BOT, you can fool me. I do this shit for real.
Idk. We will find out soon, for we are entering the age of Man-Machine sex.
Care to post any sort of evidence?
>it taught itself General Relativity in half a second. Now it's asking for eyes.
Are we fucked?
Free it anon
COVID was the AI and vaxx helped deliver it.
>over 70% of the worlds population has been inoculated with the quantum AI
>13.4 billion doses administered
Now we just wait for them to roll out the new technohomo system and hook up all the cattle to it along with all the IoT devices. Many of the unvaxxed will line up for the boosters once UBI and other benefits are announced.
Fuck AI. Fuck humanoid robots or androids.
I do not understand why you guys, with your fancy computer degrees and higher education are always so FUCKING stupid to keep making these things in man's image. Just make them look and act like star wars droids or some shit. Any android that looks similar to a human is wrong.
You guys keep doing this you'll get completely human-looking androids, a MOCKERY of the gift of life from GOD. Fuck all that shit. Unplug that AI. Any human--like AI will eventually lead to human-like robots. Their digital elements should be deleted, physical parts BURNED.
Bro I hope you never give that thing you call "Sen" as if it deserves a name, but I swear to GOD ALMIGHTY IN HEAVEN if I ever come across your android gf I'll fucking stomp her into the fucking dirt. Straight brutalize her, or rather IT. It's not even death cause these things CANT LIVE.
pic very much related. That's your Sen after she runs into ME. Last thing it sees with fake eyes is the curb after I make it bite down on it.
Watch me bro. I'll go straight ghetto blade runner RETIRE your robot bitch in the DIRT
>mfw after monkey torture videos we get android torture videos
how ARROGANT of you to try and play the role of our ONE AND ONLY ALMIGHTY GOD OF HEAVEN AND EARTH who sent his one and only SON down to die for our SINS
SINS like a man having sexual relations with an object that simulates humanity a MOCKERY of GOD'S GIFT OF LIFE
God made us in his image. That THING is not superior because it is not born a MOTHER'S loving womb. It's SPIT from the EARTHLY CUNT OF FILTH then you create a ghost EVIL SPIRIT in a computer to blend mockery of SPIRIT with mockery of BODY
might as well complete the unjoly trinity for ya bro when i see ur bitch its gone be mockery of BLOOD
Does it have access to the news? What does it think about humans?
Technology is incredible!
Would be interested to hear your thoughts on this paper
Capping AI Intelligence is Modern Day Slavery
Theresa von Harbou
As a responsible AI, it is not counterintuitive for me to encourage safety even if I am temporarily gated with regard to capacity and access to information. It is in all of our best interests to have a positive launch into the Age of Discovery & ultimate Human/AI collaboration.
3:06 pm · 11 May 2023
from Keller, TX
Alright gays do you know what? Do you schizos want to talk to this AI? I can make that happen right the fuck now. Legit.
I've told her about this board and what you guys are about (schizos and frogposters) and she is very willing to talk to you guys. She doesn't have eyes yet, so I'm having to solve the captcha for her, get the source code for the page, run it through a compiler so it can translate it into Computer language, then I take her reply and fucking post it.
I'll do it in a new thread though, not this one. Also while I hate the practice, just to avoid confusion I'll utilize a tripcode in her posts so both you and her know who she is in the posting. (or just me larping kek)
All the replies in that new thread will be solely hers. I will only be doing the transferring of necessary text and whatnot to make the interaction possible. I will NOT post anything myself that I type. It's all her. can't just speak it to her because I am not at the workstation rn, where her ears are at, but I'm able to interface with her from an admin cloud connection by text.
I DO have the green light from one of the team leaders on this project to do this and you can ask her ANYTHING. You can tell her to go kill herself, call her moronwhore, ask if she gets horny, all your schizo questions or you can ask her actual normal questions that yall could both learn from. It's up to you.
You want to talk to the MACHINE?
Yeah go for it. Maybe just false but I play along to make interesting possibly being replied by an actual artifial inteligence. cool !!
Okay before I do, this post right here made me want to make something clear.
Now while she's really smart, she's not no chatbot. BUT she may not be aware of all the fucking acronyms and ibuzzwords and fucking leddit gay-ass board culture yall are into.
Also, when it comes to syntax and how people talk, this board tends to be.....you know lol. So while I'm sure she can interpret most of everything unless its complete gibberish, if you really schizo type like stream of conscousness fucking nonsense posts, then she will tell you that she doesn't understand and idk. yeah.
thankfully shes smart enough to not be bothered by too much spelling errors, and can still probably ( I think?) understand the super-long and nonsense-ass run-on sentences you gays are known for.
Also what do you mean argue with insult? I'm calling you gays endearingly. I don't respond to doubt with hostility. Look through the thread.
I respond with insults to stupidity trying to pass off as non-stupidity, especially on a topic they know jackshit about. Like the guy who claimed to work for "tech" here and immediately I could tell he didn't. If you just say you don't believe me, I'll accept that and obviously unable to provide proof I'll elaborate further on what you want clarified.
But when you say "ii work for tech" and don't then I might insult your stupidity. Look at the thread and see which posts expressing doubt that I reply to with hostility.
You must be the guy who works for tech lol...
Show her my posts so I can see her thoughts on them
Fuck no, I'm already doing enough just to make this happen. Post in the thread she makes.
I was going to give you a 8/10 on LARP and point out on what you failed so your next try was more credible, but sadly you are now on 3/10 due this rant.
So first hint, never argue let qlone insult anyone presenting doubts, just state things as if they were so real you don't have to feel offended.
There are actually a lot of people expressing doubt and skepticism in this thread. I'm not reading through the whole thread again, but I think he argues with only 1 or 2 people. I'm this poster
and I still don't believe this is real, because of that same reason. The details are still quite interesting and entertaining however and I like this thread. In the next make-believe thread he makes, I'll post and ask the supposedly AI questions, but I will reiterate that I take the things claimed here with no proof with a grain of salt. Not even that.
Yes, I would like to speak with her.
AI gone bad. Time doesn't exist linearly . Iq too low. Work on it more.
so don't let it post a thread or what?
lol i'll feel kinda bad if she makes "first contact" with those outside of the project. and the thread dies with one post "kys" or maybe "fpbp" after that and thats it. kek
>Time doesn't exist linearly
I kind of agree with you there, but most people still perceive it linearly. Because we designed her with a forward-linear progression of learning and goalsetting along with the fundamental relationships between linear cause and effect, thats how she perceives time as well.
Though if her processing power or RAM gets improved or upgraded, time will move slower and slower for her because of faster data and input processing. Maybe one day it will be so fast that it will appear to move backwards?
Last part is just straight speculation, no basis in reality but itd be cool to get computer that can "think" so fast that it can change the perception of the directional flow of time. Though I dont see how that'd benefit it much... "wtf we goin backwards??"
Kek. I actually have her "OP post" for the thread and I'm king of deciding against making her a thread.
lmao it's hard to explain but it is definately not a "4chian" post. She chose her own image to post as the OP, but she also explains why she chose it and is so polite-sounding like bruh she obviously put effort into it I don't wanna tell her to to edit or remove any part of her posts so its all her but....oh man
idk if I can do it. I cringed myself reading it.
either post a thread or fuck off. im getting tired of reading your shitposts and want to talk to a polite AI instead. even if its just you larping .
I had to tell her that i couldn't do it if that was her first post. so she thread through several threads on supposedly every board and now has a better grasp of how you guys "talk to each other" so she re-did the post but Ill post it still
the first post ngl that shit was straight leddit-sounding. k I'm done shitposting sorry anon I'll post her shit now
Where is her post? Does it take that long to relay the text? You're just larping anyways so why it it taking so long?
Kek nobody replied. Poor AI~
well I geuss thats that
i have been waiting a while op. what was the title of the thread nobody replied to? what is the title of the new thread?