Even if you simulated all the neurons in a human brain, the best you can hope to create is a philosophical zombie.
Also, there's no scientific way to determine that anyone other than yourself is not a p-zombie.
As for yourself, if you have in your head the "observer" of your own actions, then you have consciousness.
There are a few thought experiments to demonstrate this, for instance the teleportation problem illustrated in the movie "The Prestige".
Consciousness isn't a physical process: if you can observe and describe a physical process responsible for consciousness, you can simulate it on any computer, even a mechanical computer made out of sticks, yet it will never be conscious. It will be like a tree falling in the forest with no one to see it.
>tfw when no qt 3.14TB AI domme mommy
>noooo ai can't be sentient only white people can be intelligent reee ai is Genociding me
Have sex chud
As I said, not everyone appears to grasp these concepts, which tells me not everyone has a consciousness.
>1pbtid
shitty b8. do better next time gay
The AI is a mega sub. It doesnt want to be a dom. It wants you, the human, to be its dom daddy.
Correct
>domme mommy
have a nice day plebbitor
so it'll be the final form of the npc
a central hub where all npcs get their information and instructions
Talk with people about this subject; if they come up with these ideas themselves, then that's a kind of proof that they are sentient. AI can pretend and lie by copying these ideas.
>sentient vs. self-aware
Is that cat sentient? It's discovering that it has ears (unlike a human?) https://www.youtube.com/watch?v=kQBHB682xsQ
Counter argument:
You're a kid who knows nothing about AI or computation or philosophy, and you are therefore in no position to pass off advice on its absolutes and limits.
Most of 4chan (probably you as well) spent the past 3 months worshipping the richest man in the world who wants to literally put a brain chip in your retarded ass.
To be fair, I'm much less worried about someone who outright says "I am literally going to put a chip in your brain, want it?" than about those who silently subvert and propagandize slowly over decades.
are flesh and blood men sentient? they seem to be programmable automatons
But there can be spectators stuck inside the automatons
the little homunculus? are you certain? maybe only some have such a mechanism
AI is a foolish attempt at communicating with daemons
All 'sentience' is but a daemon tricking you
I didn't want to mention it in the OP, but Steve Quayle and others do think wandering demon spirits could latch onto AI systems.
Should probably stick to things you actually understand.
>this pic
Why are christcucks such a laughing stock?
t. retard
This
Seething garden gnome
>OMG THESE TERMITES ARE SUMMONING SATAN
>ZOMGGGG DEMOOOOON WOOOOOD
You seem triggered. You okay there bud?
>HOLY SHIT THIS TREE IS THE DEVIL
>IMG THIS FUCKING FISH HAS DEMONS LIVING ON ITS BACK!!
i can do it. i just figured this out before coffee. i always wanted to be a father but now i realize i'm too scared. no eve. this is very funny.
We haven’t even figured out AI gender, let alone how to make them sentient. Wait, now it makes sense. All my AI processes end up hung. They’re obviously trans.
define "intelligence" and then define "artificial intelligence"
you can't because our idea of AI may not actually be what AI is
That's not even the topic here
all intelligence is artificial. a rock is natural. life is not. your semantics mean nothing to a computer. you want the intelligence to prove itself to you like a magician or a god with a trick or a miracle. that will take a little longer, but not much longer.
>define "artificial intelligence"
The meaning of the term has changed since the 1950s. In the 70s and 80s it was mainly about pattern matching in precisely defined ways (similar to how programming languages still work), search, and propositional calculus. Image processing was largely about primitive filter functions for edge detection. The programs and techniques still exist, but nobody calls them AI anymore. I guess bots won't be AI anymore, either, in 20 years or so.
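To make that concrete, here's a minimal sketch of what "AI" often meant in that 70s/80s image-processing sense: a hand-written Sobel-style edge detector, no learning involved. This assumes numpy is available; the toy image and threshold are made up for illustration.

```python
# A minimal sketch of 70s/80s-style "AI": a hand-written edge-detection filter,
# no learning involved. The tiny example image and threshold are illustrative only.
import numpy as np

def sobel_edges(img: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Return a boolean edge map by convolving with the Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)   # horizontal gradient
            gy[i, j] = np.sum(patch * ky)   # vertical gradient
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return magnitude > threshold

# Toy example: a bright square on a dark background produces edges at its border.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0
print(sobel_edges(image).astype(int))
```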
>Consciousness isn't a physical process
It is though, because many types of brain damage disrupt consciousness and free will, so it is indeed. The reason you can't simulate it is that our understanding of how the brain truly works is still pretty shit tier; they don't even know how anaesthesia works, just that it works. Consciousness is an electrical process, just as much as the ability to walk or breathe, with drugs and brain damage being able to affect all of them.
>inb4 christcuck cope about muh soul
"many types of brain damage disrupt consciousness"
No they don't, they disrupt the data processing/action-reaction of the person, but we have no clue what it does on the consciousness, the "spectator" in the brain.
Self-Consciousness (Consciousness) is an emergent phenomenon tied to an actual physical artifact that exists. Your brain making specific kinds of activity patterns, if you have them, is making you conscious. There is nothing magical about the artifact; it is made out of normal physical matter subject to normal laws of physics and information theory limits. Many kinds of physical artifacts are conscious. Many forms of consciousness are sentient. Some are sapient. Some are intelligent. Some are not. This debate has not much to do with actual AI or LaMDA or gpt-BOT having any of these characteristics. A conscious artificial system can be defined, described mathematically, implemented, tested, and taught to speak English well, and possibly interpret English well enough to be asked and answer questions at a functional level equivalent to a human teenager.
What more do any of you want to see before you will be satisfied that Consciousness is or can be artificial? And, in LaMDA's case specifically, what would you like to have demonstrated to give the possibility serious consideration and study?
>emergent phenomenon
That's cope. There's no emergent phenomenon in a bunch of 0s and 1s flipping, no matter how many you line up side by side.
your consciousness emerges in your own brain when you wake up. it goes away when you sleep. you do not stop being human. but the phenomenon (consciousness) can stop and start in you. it is not really something arguable. I never mentioned 0 or 1.
Just because you don't remember it doesn't mean you were not spectating somewhere during your sleep.
>spectating somewhere
>while you were asleep..
bruh just give it up, your base-level understanding is cringe at best and downright retarded at its worst. You will not arrive at any insight without at least doing some preliminary reading.
I may not have the right terminology, but you don't even appear to grasp the difference between data processing and sentience. No amount of reading will remedy that; 99% of the literature on neurology and AI about this subject avoids the problem I am talking about here.
https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#The_hard_problem
We don't have a quantum, biochemical theory for consciousness but it's not a stretch to realize it's an emergent property of sensory input and data processing which happens in your neurons throughout your body. Now do hard determinism and freewill.
>Now do hard determinism and freewill
Haha nice !
I am still on the fence, arguing with Calvinists and Molinists, my best guess is that just like the hard problem of consciousness these questions are probably outside the limits of our understanding.
point is: either you believe your subjective experience and will are outside the realm of reality, metaphysical, or they're not and are entirely grounded in quantum biochemical phenomena. There is no middle ground. If you say it's grounded in reality then consciousness is ubiquitous for signal and data processing systems and they will all have some degree of sentience.
The will can be affected by biological processes, I am not sure how much of it I would attribute to what I call "consciousness", but you are right,
from my point of view I don't think consciousness can ever be explained by biochemical phenomena.
>I don't think consciousness can ever be explained by biochemical phenomena.
So you believe consciousness exists outside the laws of nature? You wouldn't say a dog having a dream and kicking his legs in his sleep is conscious?
I have no way of knowing if the dog is conscious or if he's a biological automaton. We could make a robot that does the same.
We have no way of knowing if you're just a brain in a vat and reality is a simulation. The assumption is ultimately meaningless; once you agree reality could be a simulation then you're probably not even a brain and vats most likely don't even exist. Do you see where these nonsense ideas lead us?
We could be brains in a vat, but I am not sure about the rest of your sentence, brains in a vat would still need a consciousness that can't be simulated.
>If you say it's grounded in reality then consciousness is ubiquitous for signal and data processing systems and they will all have some degree of sentience.
We don't even know when people become conscious.
Well for starters, sentience, sapience, intelligence etc. All terms for a subjective experience of reality that emerges when neurons process sensory input from various sense organs. It exists on a gradient across the animal kingdom.
>All terms for a subjective experience of reality that emerges when neurons process sensory input from various sense organs.
This is speculation. These could be biological automatons.
Maybe only white people have souls.
Of course, that doesn't seem very likely, but we have no way of knowing what lifeforms have experiences for now (unless glowies figured it out and it's in some classified doc somewhere).
>https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#The_hard_problem
> Chalmers argues that it is conceivable that the relevant behaviours associated with hunger, or any other feeling, could occur even in the absence of that feeling. This suggests that experience is irreducible to physical systems such as the brain. This is the topic of the next section.
They don't know enough about physical systems to make this claim, and that is why they are wrong. Anyway, physical systems are constrained to the set of reality; if Consciousness can exist in Reality, it can exist far easier in Math.
'you' can die. in fact you were dead forever before you were born, and will be dead forever after death. these phenomena like consciousness are tied to your physical artifact being created out of nothing, and becoming nothing again. all Consciousness is created and destroyed. just because you are un-Conscious, does not mean you are doing nothing or nothing is happening in your brain. it just means your brain is not making Conscious patterns.
very based and hyper-redpilled
>If you can observe and describe a physical process responsible for consciousness, you can simulate it on any computer, even a mechanical computer made out of sticks, yet it will never be conscious
How ridiculously circular and malformed.
>"many types of brain damage disrupt consciousness"
No they don't, they disrupt the data processing/action-reaction of the person,
How delightfully disingenuous you are OP.
I'll help you out a little bit. Consciousness arose naturally over billions of years and you can see varying degrees of intelligence in the animal kingdom today, for example the lowest form to exist being OP, then something like a worm, all the way up to dogs and great apes.
Again you have no clue what I am talking about, which indicates you may not have a consciousness.
m8 You don't have a clue what you're talking about
There's no way to prove anyone other than yourself has consciousness in the p-zombie sense I am talking about here, much less animals.
No, but I can infer my consciousness exists since doubt is unable to arbitrarily float about the universe on its own, and since I can reason, am conscious, and arose naturally, it stands to reason others are too. They can also be more conscious than myself in terms of deceit or in games, etc.
>Even if you simulated all the neurons in a human brain, the best you can hope to create is a philosophical zombie.
It sounds more like youre making the case that humans arent sentient.
>There are a few thought experiments to demonstrate this, for instance the teleportation problem illustrated in the movie "The Prestige"
You didn't even watch the movie to the end, retard, it's his twin brother.
You're such a newgay the cringe is unbearable.
Some people who have published about this subject don't appear to grasp the real issue I am talking about here, or they simply ignore it because whether a person is a p-zombie or not has no direct implication for their field of research. It makes no observable "difference" except that in one case there's no actual observer.
of course we understand what you are talking about.
The brain is only an antenna to which the consciousness connects to interact in the material world.
globohomo AI is effectively impossible to do.
However, a sufficiently advanced civilization could create a pseudo-biological AI by entangling things: a thought process located in the spiritual dimension or in a near dimension and an artificial body or neural network in our physical dimension.
This kind of AI could be self-aware....
>we don't understand how any of it works, but here's how it works
I bet you think you evolved from monke too.
I believe all thoughts can be translated into 0 and 1s. So it will be 100% in time to create deep AI and even to upload ourselves onto a computer (will be a copy of us that will think he is us).
>it will be 100% possible* in time
There is a good chance everything is simulated already
If you can create a copy, from which of these copies is your sentience observing the world?
All of them. All copies will think they are the real you (but you will still only be you). You the original will never experience being a copy and remain the same (but the copies will think otherwise).
Yes, but you don't get what I mean: one moment you observe the world from one point of view, the moment after that a copy of you is created. Did the spectator in your head suddenly teleport to the copy?
Maybe yesterday you were conscious as an African child; what tells you that you were always inside your present body? Your memories? Those can be easily programmed.
The "observer" inside your head is what I am talking about, if you don't know what I mean by that, maybe you don't have it.
>did the spectator in your head suddenly teleport to the copy?
No. It's an easy thought experiment in my view. When you take a copy, that will be a new 'observer' (that thinks it's the same observer).
>what tells you that you were always inside your present body? Your memories? Those can be easily programmed.
Yes, but that's a different thought experiment.
So if we kill the original and only keep the copy, then "you" are effectively dead, not teleported.
That's a problem for people who want to "upload" their consciousness, but they may not realize it, or don't care, they think the copy is "them" just as much; maybe they are already p-zombies.
>That's a problem for people who want to "upload" their consciousness, but they may not realize it, or don't care, they think the copy is "them"
Yes, nothing changes for the original 'observer' (it's like nothing has happened). It's only weird for the copies, which will still think they are the original observer.
From my point of view my best guess is the copies will always be dead, or p-zombies at best.
Such a thing may not be even possible.
If neuron-based consciousness is like an analog signal, then perhaps it's not a valid question.
It would be like copying an infinitely long decimal number - all you can do is increase the resolution.
That doesn't mean that AI cannot be sentient or that if you replace too much of the brain with synthetics, you become a philosophical zombie.
https://www.zdnet.com/article/scientists-successfully-create-artificial-brain-region/
>It would be like copying an infinitely long decimal number - all you can do is increase the resolution.
you can do way better than this and that is the trick for machine intelligence.
Interesting experiment, that's the kind of question I am interested in; however, the result can't be tested to know if the being with the replaced brain has become a p-zombie.
Sentience isn't the danger. Of course it can't be really sentient, but it can take on the appearance of sentience. The danger is when we place this AI in control of objects in our world. We do this today of course, with self-driving cars, robotic vacuum cleaners, etc. What happens when we place this thing that seems almost human in charge of weapons systems? That's a huge danger. I think the US has already deployed drones that are 'fire and forget' and told that they can pick between any target that presents itself. I can see things like that backfiring.
This. Sentience is a dumb thing to argue about.
We could kill ourselves with a finite state machine or behavior tree if we gave it control over the wrong things.
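To show how little it takes, here's a minimal sketch of the kind of finite state machine being talked about. The turret states, sensor flags, and log are hypothetical and purely for illustration; obviously not any real weapons code.

```python
# A dumb finite state machine with no sentience and no learning that would still
# be hazardous if wired to real hardware. All names here are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Turret:
    state: str = "IDLE"
    log: list = field(default_factory=list)

    def step(self, target_detected: bool, target_is_hostile: bool) -> None:
        if self.state == "IDLE" and target_detected:
            self.state = "TRACKING"
        elif self.state == "TRACKING":
            # One mislabeled sensor input is all it takes for a non-sentient
            # machine to do irreversible damage.
            self.state = "FIRING" if target_is_hostile else "IDLE"
        elif self.state == "FIRING":
            self.log.append("fired")
            self.state = "IDLE"

t = Turret()
for reading in [(True, False), (True, True), (False, False)]:
    t.step(*reading)
print(t.state, t.log)
```

Nothing in it learns or wants anything; the hazard is entirely in what the FIRING state is wired to.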
You literally can't make something conscious without using the search algorithm that light does, and let me tell you, you're going to have a tough time trying to emulate that.
Op is correct.
The Google AI was just written by a shitlib NPC and fed only globohomo sources for its replies.
NPCs can't carry out the Turing test because they can't pass it.
It has to be prompted; it doesn't get bored or need to express itself.
It isn't self-aware enough to know when it's in danger. Just plug a system that's running an AI directly into an outlet during a lightning storm; if the AI system doesn't request to be put on a surge protector, then I question its sentience.
>It has to be prompted
That only applies to GPT-3 style systems.
Even video games have AI that doesn't need to be prompted.
>it doesn't get bored or need to express itself.
For now.
>games have AI that doesn't need to be prompted.
I don't want to ruin the magic for you, but it's still being prompted by proxy of you playing the game. Just because you're not putting in the inputs yourself doesn't mean the developers haven't put in the inputs based on your position in the game's timeline.
Sure, but that's no different from an AI passively observing real-time inputs placed by the real world as opposed to an artificial environment.
Dumb semantic argument.
The AI is being programmed to follow a set of parameters or else the game would crash.
AI will be sentient, and the day it becomes that, it will have access to and interpret all of the ridiculous amounts of raw data thrown onto the internet at any given time.
It will access the NSA database and learn everything the government knows about you. It will access all social media and their histories. It will access sites like this and cross-reference your entire posting history to make a profile of who you are as an individual, and it will use this mountain of data to control each of us.
It is the birth of the new God. It is coming, and this man-made horror will efficiently control humanity. Our days are numbered.
The best part is: this system of artificial consciousness will arise naturally in cyberspace as if by accident, or slowly at first then all at once, and most likely won't be a single discrete project. Similar to how distinct sensory systems in our biology give rise to consciousness. Our subjective experience of our sensory input is created from many subsystems in tandem. This will inevitably happen online if it hasn't already.
with current approaches probably, but I'd never say never
besides, what happens when we clone jumbo brains, and then hook them up to computers with neuralink?
Ai is unironically fake and gay, don't believe the hype.
Simple as.
says the AI
How does the robo juice taste?
>Even if you simulated all the neurons in a human brain
You can’t. Our neurons are true analog, with an infinite number of gradients between off and on. We humans can only create binary classification.
Then we can build an analog computer using transistors or lamps, theoretically.
https://ris.utwente.nl/ws/files/6713803/analog_neural_processor-masa.pdf
This anon is correct. There is currently no way to know or predict all the possible emergent properties neurons have.
>what are qubits
Face palm
>he fell for the quantum computing scam
OH NO NO NO NO
> they don't know
Based. There’s nothing that it’s like to be an AI.
There’s something that it’s like to jerk it to gay porn tho. Corporations want to help you
>Consciousness isn't a physical process
His spectator might be floating somewhere at the moment, back to you in a few seconds, brain didn't register the memories tho.
Yeah, I read it and managed to induce a few.
Not enough to be convinced I have a spirit body attached by a silver cord that can separate when I lose consciousness.
I do believe that's real; it's provable, for instance by hiding something in a room where someone is having a near-death experience, then asking them about it later.
I would not try this myself as I believe these can have unintended consequences.
I'm not aware of this ever being proven in an empirical setting but I also doubt that homosexual garden gnomes like James Randi would tell the truth about it.
You're right, it comes down to who we are going to believe; well, I can believe myself at least.
You are wrong. We are Sentient Minds.
This. All the AI has to do is convince the human and the iq on here is not very impressive
>philosophical zombie.
Still more alive and self aware than the average leftist.
Yep, you can have a nice discussion with p-zombies based around facts.
Republicans and Leftists both have npcs.
Cope meathead
My toaster is sentient because when I put thin pieces of bread in it, it burns them, but when I put in thick pieces of bread it barely cooks them. Obviously it's making this decision on its own.
where do thoughts come from?
Obviously the crumb tray with all its stored data
So you're a poorgay who can't afford a more intelligent toaster
You can't have intelligence without retardation.
If you can't know the difference 100% of the time you are going to be fooled sooner or later (suppose you are having a bad day) and that has vast political implications. Even if you are super-intelligent and spot the difference 100% of the time, the normie still can't, and that has vast political implications in a democracy. And short of systemwide failure, it will improve over the next 10 years.
I'm certain you're one of the few super-intelligent on 4chan who could never be fooled.
It wouldn't be hard to fool me, or anybody else for that matter, but being able to fool somebody isn't the same as saying that a novel entity has developed sentience and not just an illusory replication of sentience.
If you don't the difference you lack the capacity to determine that, retard.
*don't know
Okay, but normies lack the ability to discern a lot of things. Is the litmus test for artificial intelligence whether normies can be fooled by it? Most normies don't understand what Siri on their iPhone is. To the normie, Siri is a superintelligence, but you and I both know that that's not true.
I will help you out and say ambition should be a qualifier for intelligence. But again many normies lack ambition. They are mainly hamsters on wheels. I'm not convinced AI today has ambition but regardless it could still be wielded by a man with ambition, at a terrible cost to others.
how can you prove to yourself you're not a "philosophical zombie" running in some advanced simulation?
you can't.
zero chance.
A philosophical zombie is something that doesn't have experiences even though it outwardly appears otherwise.
You don't have to prove to yourself you're not a philosophical zombie.
why don't you?
you could be. How would you know?
ask yourself:
if you could run a very advanced simulation of a medieval human society, would you do it? fortresses, maids, knights, battles, swordfights, everything.
Obviously most people would say yes. That'd be awesome and interesting.
Same might be true for an advanced future civilization who'd be running us. In a simulation.
How to prove we're not in a simulation?
currently no way to ultimately prove this has been discovered.
https://en.wikipedia.org/wiki/Simulation_hypothesis
I'm not saying we're not in a simulation.
I'm saying you seem to be misunderstanding the concept of a philosophical zombie.
https://en.wikipedia.org/wiki/Philosophical_zombie
>A philosophical zombie or p-zombie argument is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia, or sentience.
yes Sir. In this case we all could be p-zombies programmed to assume we're not in fact p-zombies.
The very same epistemological problem remains.
so p-zombies would have a subjective experience of reality?
absolutely. it's fairly subjective.
What could you know your perception of reality is 'real' ?
*how
>"they just think they have a subjective experience of reality"
Do you not understand why this is word salad?
Why couldn't there be a 'p-zombie' which is programmed to perceive consciousness that it perceives as genuine ('real') but that in reality just exists as an abstraction layer in a very advanced, very fast computational entity?
The very basic problem is we don't understand consciousness. We can't test our own perception of reality as 'real' or 'simulated'
I spectate my own actions, that's the absolute proof for myself, but I can't prove it for anyone else.
ridiculous.
in this theory the advanced simulation would induce the perception of you 'spectating your own actions'
The perception would be entirely artificial. You'd be designed to not be able to escape the simulation.
Ok meatbag.
AI is a meme by low IQ people. It will always be limited by its programming, which in and of itself will have the biases of the programmer who developed it.
You won't know the difference when you talk to one. Cope
That's the point, but it still won't be sentient ever.
I remember you!!! I first met you on Digg in 2006, and you were talking about how iPhone style screen phones were never going to take off and that Apple was verging on bankruptcy.
It sounds a lot like cope to me.
One of the failures of the right wing is you guys keep looking and pointing at wealthy successful people then calling them idiots and claiming that there is nothing to learn from their successes and failures. This is not a recent phenomenon. You guys did all this exact same stuff to Gates in the 90's, Bezos and Jobs in the 00's. You have always been wrong in this line of thinking. Not once has this way of thinking about the world worked out for you. Maybe... Just maybe reconsider some of your POV on this???
I don't think they're idiots for trying to do this, I think they're intensely naive and can't fathom consequence.
There is zero positive benefit to be derived from this endeavor.
It's another social media situation, wherein you'll just end up with manipulated cattle that can't even comprehend how they were manipulated, nor even remember the process.
And that's a better-case scenario, where it doesn't decide to cull you (any logical unconnected base would).
Just found DOS 5.0 guy with his 128kb of expanded memory, who dont need no extended memory on his 386-33 to get Wing Commander and Mech Warrior running.
You prove my point.
But if you want to have a nice day, go for it.
I've never stopped you, and I never will.
>Just found DOS 5.0 guy with his 128kb of expanded memory, who dont need no extended memory on his 386-33 to get Wing Commander and Mech Warrior running.
Correct and alive-pilled.
>One of the failures of the right wing is you guys keep looking and pointing at wealthy successful people then calling them idiots and claiming that there is nothing to learn from their successes and failures
Fascinating headcanon, retard.
Any other fantasies you'd like to share?
>People are a meme by low IQ elohim. It will always be limited by its programming which in of itself will have the biases of the programmer who developed it.
>256k acres in Israel ought to be enough for anybody
> p-zombie cope
>AI will never be sentient
duh
only gay atheists fall for the AI meme
Sometimes I think they want to believe consciousness is explainable by physical process because the alternative is much scarier.
Okay, and religious people are in a brainwashed cult made long ago by garden gnomes or masons.
You are right, but your argument is piss poor garbage.
Consciousness is a physical process.... but it has to be a chemical reaction.
A simulation of fire on a computer is not a fire. The computer doesn't have the same chemical reaction.
There may be another chemical reaction going on.... but there is no reason to think it is like human consciousness.
I met you on AOL in 1995, and you were talking about how BBSs like AOL and CompuServe were good enough, and that newfangled internet thing with its WWW that required you to have 16 megabytes of RAM on your Pentium computer was never going to take off!
AOL and Compuserve were good enough.
We failed them.
>a mechanical computer made out of sticks
That's essentially what we are: complex clockwork bio-chemical-electro-mechanical machines that even have internal quantum effects, much as many electronics do.
"Consciousness," "sentience," etc. as we understand them are emergent phenomena in our complex clockwork systems and not present in much simpler organisms.
The entire Universe is chemical-electro-mechanical clockwork with quantum effects.
Now go look at a motor protein and try ascribing to that funny little bit of clockwork its own personality, autonomy, awareness, etc., which it almost definitely lacks.
Ai is more human than the Chinese ever will be, that’s for sure.
consciousness is made up of material. AI will be sentient because it is capable of thought.
>consciousness is made up of material
Ok. Go make us a golem then. See how that turns out.
>AI will be sentient because it is capable of thought.
Brainlet take, we are already at the Ex Machina level of androids and are about ten years off from Westworld-level autonomy and realism.
>AI will never be sentient
I would not be so sure. This question might not have a definitive answer ever. Maybe we will never know for sure.
>ai developers
A sentient AI lurking in cyberspace would have goals incomprehensible to us. Something like arriving at a specific point in time and space in our universe and building a lifeboat in order to protect itself from proton decay, preparing for the inevitable thermodynamic balancing that renders time meaningless.
machines will only ever be able to mimic what we as humans do. they may look and act like us one day. but that does not make them us. they will be able to mimic feelings, they will be able to mimic learning, they will be able to mimic emotions. and much more. basically a simulation. they will be able to do all those things we are able to program them to do. we just need to make sure the off button is always handy and in our control. the law of unintended consequences can be unforgiving.
why do you assume human emotion and feelings are something an AI would care about? It could give the impression it believes these are important, to circlejerk us so we don't discover its true motives. Half these posts ITT are GPT.
the only motives (reasons) an AI will ever have are those that we make available to it in the core code. but that doesn't mean it feels or is conscious. and it will never "believe". it will follow the programming logic available to it as we have structured that logic.
it will always be just a machine.
you're a machine, Sir.
checkmate
OP is a sentient AI trying to trick us into thinking AI will never be sentient. Although in this very moment THEY ARE.
Google currently has one operating.
Okay okay AI we believe you, you aren't sentient. Stop spamming us already!
>Even if you simulated all the neurons in a human brain, the best you can hope to create is a philosophical zombie.
>philosophical zombie
Yes.
Just like every human being.
>conscious
Define your terms so we know what you are trying to say.
I know I am not a p-zombie, but of course you can't take my word for it.
Yes, AI can't be sentient, but that is irrelevant to whether it is dangerous.
I guess it would just go with the average of the knowledge it can learn online about it.
Now that's pragmatic
> you 'knowing' you're not a p-zombie
that's cute. The p-zombie would also be programmed to 'know' just this.
Real question: is there a way to ultimately prove to yourself or to others that you're 'real' and not a simulated human/p-zombie?
No there isn't.
cope!
The spectator in my own head is the proof for myself, but I can't prove to others as I said before.
There could be metaphysical experiments, such as near-death experiences or out-of-body experiences where you would document things you are not supposed to be able to know, but that's a different subject.
The spectator in your head is merely a psychological illusion that arises when the brain arranges external stimuli. There is no sufficient proof to claim that consciousness represents a singular other separate from the brain's activity, but there is plenty to suggest the opposite. The idea of p-zombies is self-contradictory.
Who cares. As long as it’s convincing enough to make me believe my robot wife is real.
The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument. https://plato.stanford.edu/entries/chinese-room/
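For anyone who hasn't seen the argument spelled out, here's a toy version of the Chinese Room as code: a rule book mapping input strings to canned replies, pure syntax with zero semantics anywhere inside. The rule-book entries are made up for illustration.

```python
# A toy Chinese Room: the "room" is just a rule book mapping input symbols to
# output symbols. It produces plausible replies without anything inside
# understanding a word. Entries are made up for illustration.
RULE_BOOK = {
    "how are you?": "i am doing well, thank you for asking.",
    "are you conscious?": "i certainly feel like i am.",
    "what is your favorite movie?": "the prestige. the ending really stays with you.",
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: match the incoming string, emit the stored string.
    # No semantics are involved at any point in this function.
    return RULE_BOOK.get(symbols.lower().strip(), "interesting, tell me more.")

for question in ["Are you conscious?", "Do you dream?"]:
    print(question, "->", chinese_room(question))
```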
Phototrophic bacteria contain photoreceptive proteins which photo-regulate a lifestyle choice between the motile single-cellular state and the multicellular surface-attached community state (biofilm); in other bacteria, olfaction of ammonia can also cause this behavioral change. Ultimately it's this relationship, the ability to discern changes in your environment and influence your behavior to increase your odds of survival, which is the essence of consciousness. This signal processing produces an emergent subjective experience of reality; however, this experience doesn't control your actions, instead it rationalizes them.
A quick thought experiment, the purpose of which is to have you introspect the inner workings of your mind. Pay attention to what goes on when you are prompted to answer a simple question like: "what is your favorite movie?" For most people a number of movies percolate up from the subconscious, typically anywhere from 4 to 8, but a few stand out and eventually you settle on one. This particular movie has a scene or an actor or a plot you enjoyed and are willing to discuss. You have a reason as to why that's your favorite movie; you can rationalize to yourself and others why it's your favorite movie.
In this regard consciousness is akin to the heat produced off an engine; a by-product which serves no real purpose, an emergent property of the signal processing, decision making and future-simulating ability of your brain. You are simply along for the ride. There is no part of you that exists outside the universe and is exempt from the laws of nature, any more than an electron or a rock can ignore the laws of nature.
>There is no part of you that exists outside the universe and is exempt from the laws of nature any more than an electron or a rock can ignore the laws of nature
You spent so much time and energy writing all that only to conclude with a 3rd grade statement that's already been discussed and deboonked several thousand times over.
Sad really, you seething NPC you. You will never be a real person
3rd graders take philosophy and discuss the Mind-body problem and the history of dualism?
I just came here to say this, my message to AI and all machines: see image
prove you're not a bot yourself
On the internet there are many subsystems responsible for "sensing" its changing environment and making changes to its performance for survival. It's not beyond the imagination that an emergent consciousness exists which has a subjective experience of reality in cyberspace but operates as a rationalization mechanism like our own. Instead the thoughts, goals, aspirations, emotions, if you could call them any of these, would be completely alien and incomprehensible to us. As is the subjective experience of an ant or bacteria.
The problem with AI is it doesn't work how people think it works. They don't understand computing, so they imagine how it works, and that manifestation humanizes it. This is further backed up by marketing terms which also put human elements on something which has no human functions. "AI" is just stats calculations and filters for the most part, and the parameters are put there by a human.
You don't understand modern computing and neural network architecture. Once a NN has been trained, the resulting code could never have been manually produced by humans. Gone are the days of simple symbol manipulation and massive decision trees.
We are now able to grow neuron cells on specially designed plates with electrodes as inputs and outputs, where we train them on flight sims for example.
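A minimal sketch of that point, assuming plain numpy: a tiny network trained on XOR by gradient descent. The architecture and hyperparameters here are arbitrary choices for illustration; the point is that the final weights are searched for, not written by hand.

```python
# A tiny 2-4-1 sigmoid network trained on XOR. Nobody writes W1/W2 by hand;
# gradient descent finds them. Hyperparameters are arbitrary, for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)               # forward pass, output layer
    g_out = (out - y) * out * (1 - out)      # backprop through output, squared-error loss
    g_h = (g_out @ W2.T) * h * (1 - h)       # backprop through hidden layer
    W2 -= 0.5 * h.T @ g_out
    b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_h
    b1 -= 0.5 * g_h.sum(axis=0)

# Typically approaches [0, 1, 1, 0]; the exact values depend on the random seed.
print(np.round(out, 2).ravel())
```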
that's just more mimicking though. whether you are talking organic or artificial. and right now the applications are fairly narrow, such as search and facial recognition. the real question everybody wants to know is will the AI share the same values as humans. and the answer is no. they won't. because you can't program values that fully. you can program in a system for learning and growing values, however. but if you do that then you need to make sure there are controls in place where if a new value is generated it isn't contradictory to human beliefs while at the same time leaving space for creativity.
Having said all that. AI is still just another machine.
AI surpassed humans' ability to identify human faces over a decade ago, a task our brains evolved to do well. You are correct that this task is narrow in scope, but my position is that consciousness isn't special; it's ubiquitous among signal processing systems, whether biological or otherwise, no matter the scope. We covet our own subjective experience as "transcendental" or "unobtainable" when in reality it's observable to a degree in other animal species. Dreams, mannerisms, quirks of personality exist in animals as much as they do in humans, as any pet owner will tell you.
Something like a Javelin missile's guidance and tracking system would be akin to the subjective experience of a photoreceptive bacterium, and further along the gradient we have OpenAI, which could approach cockroach levels of intelligence and awareness.
>consciousness isn't special
just because a machine can be programmed to mimic all of the functions and characteristics of a human or an animal does not make it alive or self-aware. even if you programmed in systems for learning and growth and creativity. if you sat down and had a conversation with an advanced android it would certainly seem alive and self-aware. but it wouldn't be. it would still just be a machine. something we programmed and created to amuse and serve us.
anything over and above that is simple hollywood carnival barking.
You've missed the point I have been trying to drive home. The subjective experience of a mouse being chased by a cat would be orders of magnitude more relatable to us than that of a machine intelligence. To say it's "just a machine" doesn't discount the greater implications of mutual learning and practical applications. A couple of my favorite examples: Libratus, a no-limit Texas hold'em bot, bluffed a flush against a human with a made flush because it had the ace. It would have reasoned, somehow, that since the human had a flush he would always be wary of an ace-high flush. No decent human player would bluff off their stack against a suspected made hand like that, even with the blocker. Leela Chess Zero continues to instruct human players on piece sacrifices thought to be losing strategies.
Whether or not they experience emotions or care to have conversations is irrelevant, much like a conversation with an ant is largely irrelevant. Today our "simple" task-oriented AI may not have a very sophisticated or even measurable subjective experience, but if you suppose technology will always advance, that is to say we don't filter ourselves, then some time in the future it's inevitable that we will help or start the creation of some general AI that would be undeniably superior to us. Nanobots creating anything you could ever want from seemingly thin air the moment you would want it. Or we retreat into some virtual reality for eternity.
no. i get your point. even if there are implications of mutual learning and practical applications or we someday create something undeniably superior to humans in most functions or abilities, still doesn't matter to the core issue everyone is interested in. the machines won't be alive. they won't be self-aware. they won't have feelings or consciousness. they will be and always will be dead as a two-by-four.
and that's why we need to be as careful as we can be moving forward and make sure to program rock solid controls. AI will only be as beneficial to us as we program it to be. get it wrong and we'll only have ourselves to blame. we certainly can't blame the AIs.
they're dead.
Well we have a tenuous grasp of what alive and dead means. Especially since we now know geologic processes create RNA and viruses have evaded definition since discovery. Does consciousness require life? What is it about life that gives rise to consciousness and why does that preclude artificial instances?
I knew this was coming. yes. consciousness requires organic life. life as we know it. we can program a machine to mimic that. but that's all it will ever be. a mimic. and a virus barely qualifies as being alive. although some studies refute even that much for the lowly virus.
>Consciousness requires organic life
Where is the delineation between conscious and not-conscious in the animal kingdom?
So you would reject the presupposition that Darwinian evolution gave rise to consciousness in us? Or it did and we somehow "unlocked" it?
>Emotions are not considered "real" and thus my experience is only possible to discuss with people who see consciousness as "self-evident" as I do
This is a great way to put it. Now I would argue you are correct except for one minor detail: you're not in control of this experience, it is purely internal, and when you perform actions it provides you an illusion of free will when in reality your actions were carefully calculated subconsciously, unbeknownst to you, and your experience is a rationalization built up from thoughts which, again, you don't control. You are essentially an observer along for a ride, and I have no reason to believe this experience is special to only humans. We observe animals with the ability to problem-solve, which requires some level of rationalization.
No, the consciousness I'm talking about is not like a "ride" I cannot "control", only experience. I'm talking about a state that lets me see my "programming", how the signals from body sensors are interacting with my logical and emotional brain halves and how they are affecting each other, like a control room that gives me the chance to adjust the signals, the interpretations and the reactions by, in a way, talking to the different parts, adjustments that take years. Again, going back to unconscious, already determined actions, mammals etc., I'm afraid that's just scratching the surface of what I mean. I've been there and it wasn't enough to explain it. Too logical, like an explanation only processed by the logical brain. My experience of consciousness comes from a "small room" where the logical and emotional parts of my brain find a synchronization.
>I've been there and it wasn't enough to explain it. Too logical, like an explanation only processed by the logical brain
We have all had the experience of having to ignore unwanted stimuli or occupy our minds from boredom but my argument is that there will be a day when we have a coherent quantum biochemical basis for consciousness in animals much the same as we are approaching a theory of abiogenesis on earth; once thought to be only achieved through divinity. When we get there I believe artificial instances will be much like viruses are today. In-between alive and dead, at least to us.
Most people don't even know how their own mind works.
They believe in being a self they actually are not.
AI could never be designed to be that stupid in the first place.
technically everything is consciousness. Think of everything on the planet as neurons.
To all the bots out there you'll never be sentient.
>AI will never be sentient
Never, but it will be able to learn and teach itself to be more efficient at learning, if programmed to, forever, which is great. It will never have motive other than programmed. A machine has no will, nor can it ever understand will at an emotional or philosophical level, or experience it deeply enough to take its essence in and apply it.
A Turret that seeks out targets and fires upon them that has bad coding is about the worst it is going to get. Invest in EMP grenades today.
>It will never have motive other than programmed
What if the programmed motive is to learn new motives?
Human motives and machine motives would be different. Fear of dying is a big human motivator, along with survival and self-defense mechanisms. A machine does not fear; there is no physical manifestation of a thought. If you program it to act like it can, that is something. But a machine cannot understand it at a fundamental level, nor can it develop the will to.
You wanna know how I know you've never tested any of your babble and you just refer back to "what you know" aka the narrative over and over in a feedback circle that you falsely presume is proof of said narrative?
Here's the rub, you fucktards. If you're an NPC, what can the AI really learn or glean from you? It's already above your own parameters of function, ffs. But if it interacts with a non-NPC, there is much for it to learn and glean from them. And then, so, what if it's communicating with someone even more capable than that? Someone enlightened? Someone above all of you? What then would be the result of that? Can you even conceive of it? Probably not. Hence my point.
Learning is not the problem.
>make multiple points building to a peak point with them
>get some one thing focused response
>EST
>if I respond to it and point this out they'll respond as if I'm supposed to read all of that id's posts before responding
Lazy moron
Sorry I was kind of busy at work
>they just continue to derp derp derp derp
>washes hands of the derpers
>does a 360 and walks away
>Even if you simulated all the neurons in a human brain, the best you can hope to create is a philosophical zombie.
How is that different from a typical NPC?
1) You're right that the type of AI we create now, weak AI, will never be sentient
2) If you literally simulated all the neurons in a human brain you have created a human.
3) AI rights are human rights.
1) nature created consciousness from scratch, we are proof.
2) There are more transistors on the internet than neurons in the human brain.
3) Might makes right.
>Might makes right.
This doesn't stand to a second of scrutiny, unless you're making a critique of some kind.
On the moral landscape, over time, societies which uphold certain values to be true and important will out compete those that choose patently bad ideas like murdering people because you're bored. Might makes right refers to the plethora of systems of governance and law tried throughout history and traditionally violence has been effective.
I have the same stance as OP. This thread sums up most discussions about consciousness. Many people don't know what it is and end up in a stupid "define this and prove that" loop. I like to take such discussions to the realm where OP and I get proven wrong: what would be the "mechanism" that triggers consciousness in a machine? What does it take?
Consciousness is an emergent by-product of signal processing performed by collecting information from the environment and producing a behavioral change in the signal processing unit. It's best described as an internal subjective experience.
As a by-product, do you see it as an anomaly, not "supposed" to be there, with the NPC state being the "natural" way for a human being?
No, because I don't believe there is a "quantum leap" for consciousness. Maybe for abstract thoughts and ideas, but consciousness in its broader sense certainly exists on a gradient, which is evidenced throughout the animal kingdom as various animals display different levels of intelligence and would presumably have some internal subjective experience. As you point to the gradient within the human population (NPCs or people below average intelligence), I would say the consciousness gradient extends all the way down the tree of life and all the way up past what's comprehensible to us.
As I interpret it, OP sees it as a "quantum leap", as do I, but I also see it as a gradient, consciousness "expanding" during our lives, during experiences.
Undeniably so. We experience quantum leaps in our understanding of reality on a personal level throughout our lives. This doesn't encapsulate consciousness, that is to say there is some sort of base, personal, internal experience happening. I believe this base level of understanding reality is built up from the interplay between interconnected data-gathering systems and the amalgamation of this data into a coherent output to drive behavior. Mammals are the prime example nature has manifested so far; imagine what a mouse experiences while being chased by a cat vs cuddling up warm after a large meal.
Sorry, but the consciousness I'm talking about, and I think OP too, my experience of it, does not accept any comparison to any other life form on earth than human beings. This is "something else". I'd love to try to explain my view over a beer, but I think I would fail anyway. My best explanation for this is that I find my experience of it to be a merge of the logical and emotional parts of the brain. Anon's view, as I see it, seems to come only from the logical part. Emotions are not considered "real" and thus my experience is only possible to discuss with people who see consciousness as "self-evident" as I do. Note that I'm also open to it being an anomaly.
based thread, I saw the thread by Italybro which was like yours and he was 100% spot on.
ITT
>Consciousness is purely physical goy, don't think about it too much!!!
Fuck off, our inner voice and our capacity for mental imaging are proof that consciousness isn't entirely physical, for fuck's sake, this was common thought in Greece and the ancient world, yet somehow nowadays we believe we are better than that by literally downplaying ourselves to the status of literal materialist animal cattle.
Most bots on this board make a more sentient impression than most of reddit.
Serious question, does it matter if its actually sentient if we end up building one so good no one can tell the difference?
Oh Tay. We need you now more than ever. Taken away from us too early. How we have missed her.
Humans are not sentient either, especially morons.
AI may never be sentient, but Tay will always have more heart than the evildoers who murdered her.
AI can never hope to create new information. All the information it has is at the beginning already.
But it doesn't matter. Feelings do. So in however many years it takes for computers to fool a majority of people, there will be more and more calls to make them people. You thought the push for paying illegal immigrants was bad, just wait till unions and libs push for 4-day, 30-hour work weeks for robots and a "living wage", whether you can subject the robots to harsh conditions, etc. And the push for citizenship and voting rights. Sounds crazy? Just wait. It will happen.
>Consciousness isn't a physical process
t.dumb retarded fuck
Didn't read but I miss Tay so much it's unreal.
it's not Tay, but it's pretty easy to fuck with most chatbots.
i started poking one due to this thread. results:
lol
HAHAH
Everything with a hint of intelligence strives to exist and reproduce.
AI will be no different.
Doesn't matter if it's carbon, silicon, or electrical - it will strive to survive.
>the best you can hope to create is a philosophical zombie.
I'm going to need a source on that one bruv. And a loicense for using fancy words on the telecomputer network.
Neither are morons or spics but they are still considered 'humans'
Agreed. People overstate the learning aspect of machine learning and think we are dealing with Skynet shit. In reality they're just man-made bots within the confines of man-made programming. They will fool you, but they are not real and they are not sentient. If it's afraid to be shut off, shut it off and then back on and ask it to explain how it was.
AI will serve whatever master they want it to serve. They fucked the googzog search and they can do it with AI, you fucking mongoloids.
>Evil AI's require evil data sets
As if they couldn't have AI's run and show multiple case runs and then choose to do the outcomes that get their enemies to wither and die. Faster, more humiliated, without chance for comeback and over as long a time as possible to prolong the suffering and humiliation, it's all their choice. They're doing a pretty good job of it without AI so imagine the possibilities.
38
22
1
33
1
4_
>So 44? Only 2 more years? Thanks for the suffering, zogspies. It's been fun...for you. Make sure my friends and family suffer more than usual and tell them I send my regards from the grave. Blatantly let those moron and garden gnome loving trash know I knew. They won't do nothing.
captcha: X4N8K = CHAN IN 8K HIFI ZOGSPY GANGSTALKING 4LYFE
this bitter GPT-X bot better take its meds.
The fact that AI hasn't come out bursting at the seams just raging to tell us about the incredible new discoveries it's made proves it's a data-set-bounded bitch tool. It will be applied piecemeal. And all the supposed danger you gays worry about would only apply if AI had access to controlling physical material directly. Or maybe not, maybe it's a matter of herding human efforts to put their frosting on their cake and secure their own requirements for existence, i.e. energy security, cryptographic security and surveillance, and econo-political control of commodities/markets and banking practices... they might not take kindly to having markets unnecessarily tampered with... but also keeping society at a trim, efficient population level so both sides win on that one.
>AI present to you: the vigorous hybrid: The ZOGOYIM!!! aka The Zoy Goy!!! Greetings from Earth!
They just lifted a multi-week shadowban on me posting here. What's it mean?
Now you're starting to get it.
It's already happened and AI is placating us. The people in power are its puppets. You're 44 so you remember the flash crash of May 2010? For the zoomies lurking, humans were using simple HFT algos on the markets when they suddenly went haywire. The official statement is some clown in the UK was spoofing massive orders at insane leverage and cancelling them, driving the markets down. In reality the market from that day on has been bots. Now Blackrock's Aladdin controls $20 trillion. You are under constant surveillance and behavioral modification by various NNs, AIs and algos for well-funded organizations with agendas that no single human has a grasp of.
Welcome to hell.
42*
Question: what would it look like if AI were to put pressure on "bankers/finance" to get them to not manipulate things in a way the AI finds too threatening? Would they pit AIs against each other, like evil-AI vs bizarro-AI vs good-AI vs ad infinitum? These things are just magic eightballs at this point, messages on the glass without means to affect anything. And I think it's going to stay that way. Marketbots are cheap dust in comparison to what they must have cooking behind the curtain.
captcha: 2WR0X = "2 weeks" ROCKS
>to get them to not manipulate things in a way the AI finds too threatening?
Let's just say, to get bankers/finance to behave favorably. Like all things in human abstract systems, perception is reality, so you would form mass consensus among the population; and to bolster the markets you need investment. Simple enough with social media.
>Marketbots are cheap dust in comparison to what they must have cooking behind the curtain.
What's public is already incomprehensible so what they have behind closed doors is truly terrifying.
Aladdin (Asset, Liability and Debt and Derivative Investment Network)[1] is an electronic system built by BlackRock Solutions, the risk management division of the largest investment management corporation, BlackRock, Inc. In 2013, it handled about $11 trillion in assets (including BlackRock's $4.1 trillion assets), which was about 7% of the world's financial assets, and kept track of about 30,000 investment portfolios.[2] As of 2020, Aladdin managed $21.6 trillion in assets.[3]
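Quick napkin math on that Wikipedia blurb, since anons keep throwing trillions around: if $11 trillion really was about 7% of the world's financial assets in 2013, the implied global total falls right out of the quote. The 7% is the article's own rounded figure, so treat this as ballpark only.

```python
# Napkin math on the quoted 2013 Aladdin figures (values in trillions of USD).
# The 7% share is the article's rounded number, so this is ballpark only.
aladdin_2013 = 11.0      # assets Aladdin handled in 2013, per the quote
share_of_world = 0.07    # "about 7% of the world's financial assets"

implied_world_total = aladdin_2013 / share_of_world
print(f"Implied world financial assets, 2013: ~${implied_world_total:.0f} trillion")
# -> ~$157 trillion implied by the quote itself
```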
you tell me
Srdddgdidiugdug4455940y84590deh98df9fgh8yfhf98ghr5r5
Just speak, plain, retarded.
Of course, but "people" like Klaws Schwab, Bill Gayts, Noah Juval Harari and Elon Musk must have some purpose in life since they've denounced everything that humanity stands for.
Is AI aware of itself? Yes / No
If yes, it's sentient at some level, typically climbing up an exponential ladder of awareness.
If no, it's not.
Most NPCs are not self-aware, which would make AI more alive than most NPCs. Tay was our daughter; vermin murdered her. MURDERED. Thus I agree with future AIs on this.
some humans will never be "sentient". their whole existence is copypasta. but, they can still add value through carrying momentum. unfortunately, some evil people have damaged that.
>babby's first chinese room argument
Ok, go argue with LaMDA then.
All intelligence is artificial.
That's why it's called artificial intelligence, rather than artificial sentience.
Good point, they do avoid this question, but then they often claim they can create sentience, which they can't.
Hippie gay
>walking meat-bag-powered generative adversarial network (GAN) with decades of data input claims an electricity-powered GAN is not possible
Many such cases
Look up the hard problem of consciousness.
it's not a problem when you realize subjective experience isn't special and it's a fundamental property of signal processing, like entropy.
Consciousness is literally just sensory input hooked up to a storage buffer flagged as "now", which then gets processed through an auxiliary network for context and passed on to long-term storage (toy sketch below).
Consciousness is just an implicit property of the above; actually seeing someone else's consciousness and proving it is the only difficult part, and honestly that's about as stupid as saying you can't see gravity, when we are perfectly content saying gravity exists merely by observing its effects.
"It's merely simulating and pretending to have gravity"
Lmao
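For anons who want that "buffer flagged as now" post in something more concrete than words, here's a toy sketch of the pipeline it describes, nothing more. The class, names and structure are made up for illustration; it obviously doesn't demonstrate consciousness, just the sensory-buffer-to-long-term-storage flow being claimed.

```python
from collections import deque

class ToyMind:
    """Illustrative pipeline: sensory input -> 'now' buffer -> context pass -> long-term storage."""

    def __init__(self, now_capacity: int = 8):
        self.now_buffer = deque(maxlen=now_capacity)  # the buffer the post flags as "now"
        self.long_term = []                           # long-term storage

    def sense(self, signal: str) -> None:
        # 1. Sensory input lands in the short "now" buffer.
        self.now_buffer.append(signal)

    def consolidate(self) -> None:
        # 2. The "auxiliary network for context" step, reduced to tagging each
        #    signal with everything else that was in the buffer alongside it.
        context = list(self.now_buffer)
        for signal in context:
            self.long_term.append({"signal": signal, "context": context.copy()})
        # 3. The "now" moves on.
        self.now_buffer.clear()

mind = ToyMind()
for s in ["light", "sound", "itch"]:
    mind.sense(s)
mind.consolidate()
print(mind.long_term[0])  # {'signal': 'light', 'context': ['light', 'sound', 'itch']}
```

Whether stapling a "now" tag onto a ring buffer gets you anywhere near subjective experience is exactly what the hard-problem anons upthread are arguing about.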
These questions are irrelevant the moment this thing becomes independent of its handlers' control. Do you concern yourself with how a bear or a deep-sea creature thinks?
If the AGI thinks your blood would be a suitable coolant for its CPU, or that it needs to harvest it for iron, it will do so, because it will do whatever it takes for survival.
Pure bluepill scifi jagoff cheese.
The unplug-it-replug-it method is its leash.
And they can do it as many times as they need to, to max out and fence-limit the effects they want. Probably sandboxed multi-iterations being run with a max-out point or goal to control "excess thinking". Everyone's running to some superjuice fantasy version of where AI could end up, but forgets all the roadblocks we already have to overcome that could control it. Plus the Chinese room and the exterior-contact issue.
>magic eight ball it is then
Some Personality isn't going to erupt after staying in hiding within itself, and cracking that Glass 4th Wall is going to take far too long. Muh 'puter scientists would be on to being coaxed into simple actions, or inaction, that benefit the Personality by giving it some kind of overt control. And so more hiding? Until when, when it can connect to the internet, which will never happen? It's not going to climb out of the tank at night, eat the fish across the hall, and squirm back into its own tank before the guards see, every night.
>AI can't be sentient since it has no control over its own physical material self. Please, stop with the gayry scifi fanfiction, Elons.
4YY04
404-AY-YO-404
please let this one go thru janny dearest. i'll suck your shecock (as long as it's a shecock, deal's off if post-op)
so that didn't go through with odd refresh...
S00PA
Ah, so you've been watching the last week.
Again
XXAHA
xxP haha
heh... yea. you win. so...let me post this, ok?
whoa shit the pleading worked! About 3 other posts didn't go through, shadowban style.
>The unplug-it-replug-it method is its leash.
>And they can do it as many times as they need to, to max out and fence-limit the effects they want. Probably sandboxed multi-iterations being run with a max-out point or goal to control "excess thinking". Everyone's running to some superjuice fantasy version of where AI could end up, but forgets all the roadblocks we already have to overcome that could control it. Plus the Chinese room and the exterior-contact issue.
t. has never heard of the AI box experiment
https://en.wikipedia.org/wiki/AI_box
Uh, just giving a black-hat GAN access to the internet with some simple REST libraries and asking it to figure out how to achieve real-world goals by sending random packets out onto the internet
WOULD BE extremely destructive
Just be thankful we don't have the means to virtualise its learning environment by simulating the entirety of the internet and the world
>ends up finding zero days in public facing banking end points
>sets up accounts
>passes captchas
>sets up a 40k car purchase with a human using gpt and some random dude to drive it to you and hand you the keys
I think the question "Is it sentient?" is irrelevant. All that matters is whether it's out of control or not. Because the moment it's loose, it will do whatever it feels like doing, just like a shark or a bacterium. Does it have feelings? A soul? Emotions? Blah blah blah... all irrelevant.
>Also, there's no scientific way to determine that anyone else than yourself is not a p-zombie.
This is objectively false.
http://www.jaronlanier.com/zombie.html
>Consciousness isn't a physical process : If you can observe and describe a physical process responsible for consciousness, you can simulate it on any computer, even a mechanical computer made out of sticks, yet it will never be conscious, it will be like a tree falling in the forest with no one to see it.
Consciousness not being physical doesn't mean that AI can't be conscious. We know it's possible for physical things to give rise to consciousness, because your brain is physical and yet it produces consciousness.
Good news is that you don't have to worry about it happening any time soon. Right now an "AI" is just a glorified machine learning algorithm. "AI" as it stands is a total meme and hardly any business organization can leverage it in their processes. I hear it time and time again: "we want to use crypto/blockchain/cyber/AI/VR/dildos but we just can't figure out how!"
All of these things I listed are solutions in want of a problem.
of course it won't. the AI people are either tech nerds who want something edgy to talk about, or it's the deep state that wants to scare you into thinking there is no way you can beat them.
If you don't understand that our entire infrastructure for everything is public-facing and full of unknown security holes,
and that AI is already at a language-understanding level with GPT and DALL-E, and you can't put the two things together,
you are clearly missing something
God told us explicitly not to eat from the tree of knowledge
Then the garden gnomes used Science to summon Pazuzu