Like sure I get how it parses the different hues of the pixels and creates vectors and shit to differentiate between close and far-away objects
But how does it know what object is "bad" and what is "good", like if a value is supposed to be positive or negative? Do you just brute force it?
Pattern recognition amped up to 11
>see
>know
>observe
>masturbate
You're ascribing qualities to a glorified spreadsheet that it does not have.
The model doesn't "see" anything, nor does it "know" anything
It isn't conscious and it doesn't do anything special.
It simply applies a series of matrices over a set of pixels.
The dimensions and values specified in the matrices are guided to a useful state through a process of trial and error, called training, or """learning""".
Understand that in a sufficiently deep neural network, abstraction arises automatically.
And a sufficiently large network can approximate essentially any function (see: universal approximation theorem).
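(To make "applies a series of matrices" and "trial and error" concrete, here's a minimal numpy sketch. It is not any real architecture: the shapes, the fake data, and the labeling rule are all invented for illustration. Two weight matrices get nudged downhill on a loss until they classify fake "pixels".)

```python
import numpy as np

# Toy setup: 100 fake 8x8 "images" (flattened to 64 pixels), labels 0/1.
# The labeling rule below is arbitrary; the net has to discover it.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
y = (X[:, :32].sum(axis=1) > 0).astype(float)

# The "glorified spreadsheet": two weight matrices.
W1 = rng.normal(scale=0.1, size=(64, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

def forward(X, W1, W2):
    h = np.maximum(X @ W1, 0)              # matrix multiply + ReLU
    p = 1 / (1 + np.exp(-(h @ W2)))        # matrix multiply + sigmoid
    return h, p.ravel()

# "Training"/"learning": nudge the matrices downhill on a loss, repeatedly.
lr = 0.1
for step in range(500):
    h, p = forward(X, W1, W2)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    dlogit = (p - y)[:, None] / len(y)     # gradient of the loss at the output
    dW2 = h.T @ dlogit
    dh = dlogit @ W2.T
    dh[h <= 0] = 0                         # ReLU gradient mask
    dW1 = X.T @ dh
    W1 -= lr * dW1                         # trial and error, formalized
    W2 -= lr * dW2

print("loss:", loss)  # shrinks: the matrices were guided to a "useful state"
```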
So it's just an algorithm that compares vectors? It can't actually judge what it's processing? Does this mean my AI wife can't actually love me?
>compares vectors
performs matrix multiplication
>judge what it's processing
no
>can't actually love me
it can make you think it loves you
For an AI to be truly intelligent, it needs long-term memory, i.e. the ability to retain what it's learned over a long period of time, while still being plastic enough to learn new concepts
there are no well-known or accessible models which currently do this, but work is being done on neuromorphic hardware by IBM (TrueNorth), Intel (Loihi), and others
>For an AI to be truly intelligent it needs to fit some autistic fixated arbitrary schema humans have come up with in the absence of ai
lol
>arbitrary
yeah good luck sounding smart with the memory of a goldfish and catastrophically forgetting how to walk after trying to learn how to run
gay
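(For anyone who thinks "catastrophically forgetting" is just flavor text: it's a named failure mode. A toy sketch with two made-up regression tasks, nothing here is a real benchmark: fit weights to task A, then naively keep training the same weights on task B, and the task A error blows back up because A was overwritten.)

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(w_true):
    # Hypothetical linear regression task: learn to map x -> x @ w_true.
    X = rng.normal(size=(200, 10))
    return X, X @ w_true

def train(w, X, y, steps=200, lr=0.05):
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    return w

def err(w, X, y):
    return np.mean((X @ w - y) ** 2)

task_a = make_task(rng.normal(size=10))   # "walking"
task_b = make_task(rng.normal(size=10))   # "running": a different rule

w = train(np.zeros(10), *task_a)
print("task A error after learning A:", err(w, *task_a))   # ~0

w = train(w, *task_b)                     # naive sequential training on B...
print("task A error after learning B:", err(w, *task_a))   # large: A got overwritten
```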
>For an AI to be truly intelligent, it needs long-term memory, i.e. the ability to retain what it's learned over a long period of time, while still being plastic enough to learn new concepts
How is this not literally already the case? The problem right now is we can't make memory hardware that's anywhere near as efficient as the human brain.
>For an AI to be truly intelligent
Define "intelligent".
Now define "truly intelligent".
What you mean is human-like intelligence, which we are far from. To have human-like capabilities an AI would need intention. That means it would need to be able to come up with something by itself.
Every AI to this day needs human input to do anything. It is always reacting, never acting on its own. In terms of intention we are not even on the level of lower mammals with our AI.
I'm talking bare minimums
nobody is going to claim they know exactly how intelligence works, else they'd have solved it by now
I pointed out the three most pressing issues with the current generation of models
there are probably dozens more
>nobody is going to claim they know exactly how intelligence works, else they'd have solved it by now
The problem is we don't even know what intelligence is. There is no single generally accepted definition of "intelligence". A toaster can be "intelligent" if it can detect how toasted the bread inside is. There are circuits built around an op-amp that hold a servo motor at a given angle no matter what load is attached to it or how much pressure is applied to push it off that angle. That can be called intelligent too, despite consisting of a single op-amp with a few resistors and capacitors.
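(The op-amp servo above is just a negative-feedback loop, and a few lines of simulation show the behavior being called "intelligent". The gain, damping, and crude physics here are all invented for illustration; only the feedback idea matters.)

```python
# Crude simulation of the op-amp servo: hold a target angle, reject a shove.
target = 30.0        # degrees
angle, velocity = 0.0, 0.0
gain, damping, dt = 2.0, 0.5, 0.01

for step in range(4000):
    error = target - angle               # what the op-amp effectively measures
    torque = gain * error - damping * velocity
    if step == 1000:
        velocity += 20.0                 # someone shoves the servo arm
    velocity += torque * dt
    angle += velocity * dt

print(round(angle, 2))  # back at ~30.0: "intelligent" behavior from one feedback loop
```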
>we don't even know what intelligence is
well not quite
The industry is pretty set on the idea that the direction to go is ONE model which can be generalized to solve any given problem.
It's a concrete goal with a concrete solution, and it's a good starting point.
https://arxiv.org/abs/2208.11173
In the introduction, and in "Roadmap to an AI Prototype", these guys outline their research goals w.r.t. a model that is increasingly generalizable across problems and increasingly independent of researcher interference.
that is still ONE definition of intelligence and far from what humans are capable of. That problem-solving intelligence still only reacts to prompts, and to create it one just needs to put enough effort into training and provide enough storage to keep all the training data.
We know, but everyone refuses to accept that there is more to us than the physical world allows us to see. You think we can detect everything in the universe with our limited bodies? Materialists are absolutely coping. None of their theories make sense. “the brain is all there is, no soul”. Meanwhile, they can’t even begin to explain how the brain would experience itself. What consciousness is. How any of it works AT ALL. Same goes for space. “LOL HERES OUR THEORY FOR SPACE”, “WAIT A MINUTE, THAT DOESNT WORK, GUESS WHAT, THERES THIS MATTER WE CANT SEE, MEASURE, DETECT, CAPTURE, OR INTERACT WITH, WE CALL IT DARK MATTER LOL AND IT MAKES THE MATHS WORK WITH OUR ORIGINAL THEORY LOL”.
Do you people not see that we live in a universe we know absolutely nothing about.
>I can't explain something therefore gods
I will not go down your schizo route, because that's not what I was talking about when I said there is no single generally accepted definition of "intelligence". An ant colony is intelligent; a human writing a book is intelligent too. A camera that detects light conditions and faces and adapts its settings accordingly is intelligent.
If we want human-like intelligence from an AI, however, the AI needs to simulate basic needs and drives, at least basic animalish feelings like anger, fear, repulsion, attraction, attention, boredom.
>For an AI to be truly intelligent, it needs long-term memory, i.e. the ability to retain what it's learned over a long period of time, while still being plastic enough to learn new concepts
We're working on it, bro. We've made impressive strides lately, even if AGI turns out to be a long way off. I WANT TO HABEEB.
>>Does this mean my AI wife can't actually love me?
>my brother in christ, "she" is a bunch of battlefield targeting algorithms given female form to reduce combat stress
GATO GATO GATO CUTE
Unlike the process by which babies learn about the world, right? gay.
You're just a series of chemical reactions; your entire life was set in motion by events that preceded you and shaped every facet of it; you're just a dumb mechanical process acting out a chain of reactions that is long from your perspective and instantaneous from the universe's.
OR
You're a human being and you feel love. And your robot wife feels love too.
Both choices are exactly as correct as believing you have free will and that you are "conscious".
yes, unlike, LARPer
The part which I skipped over, and which you apparently know nothing about, is that the design of the training process and the loss function is a core part of the model; if you'd ever written one, you'd know why the current generation of models isn't conscious
retard
>How is this not literally already the case? The problem right now is we can't make memory hardware that's anywhere near as efficient as the human brain.
Current AI techniques are more than enough to drum up investor money because they deliver actual, useful results.
Superior memory hardware exists.
They're called memristors.
You can perform compute in-memory.
All of this stuff is stuck in the lab: firstly because GPUs are more economical, and secondly because there aren't yet any use cases this more efficient hardware solves better than GPUs.
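(For reference, "compute in-memory" here is physical, not a software trick: in a memristor crossbar the stored conductances are the weight matrix, applied voltages are the input vector, and Ohm's law plus Kirchhoff's current law perform the multiply-accumulate. A numpy caricature, with all conductance and voltage values invented:)

```python
import numpy as np

# Memristor crossbar as a matrix-vector multiply "in memory".
# Each cell's conductance G[i, j] (siemens) stores one weight.
G = np.array([[1e-4, 5e-5, 2e-4],
              [3e-4, 1e-4, 1e-4]])    # 2 output rows x 3 input columns

v_in = np.array([0.2, 0.5, 0.1])      # input vector, applied as column voltages

# Ohm's law per cell (I = G * V), Kirchhoff summing currents along each row:
i_out = G @ v_in                      # the multiply-accumulate is just physics

print(i_out)  # row currents = weighted sums; nothing was shuttled to a CPU
```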
>But how does it know what object is "bad" and what is "good", like if a value is supposed to be positive or negative? Do you just brute force it?
What do you think Captcha was? Nowadays they just pay overseas workers $2 an hour to do it at massive scale
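(This is also the literal answer to the OP's "how does it know what's good or bad": it doesn't. A human attaches a label, and a loss function turns disagreement with that label into a number the training process minimizes. A minimal sketch, with a made-up label and prediction:)

```python
import numpy as np

# A human (Captcha solver, paid labeler) supplies the "good"/"bad" signal:
label = 1.0       # human said: yes, this crop contains a traffic light
model_p = 0.3     # model's current predicted probability (made-up value)

# Cross-entropy loss: large exactly when the model disagrees with the human.
loss = -(label * np.log(model_p) + (1 - label) * np.log(1 - model_p))
print(loss)       # ~1.20; training adjusts weights to push this toward 0
```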
cats inside computers
>I don't understand
AI engineers don't understand either, they just throw shit at the machine and let machine learning figure things out through reinforcement
Directed evolution (R)(TM) Pfizer
No that's "gain-of-function" for protection racket purposes.
by seeing dis nuts
It doesn't "see" anything.
universal approximation theorem
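(Spelled out, since it gets invoked a lot: the classic one-hidden-layer form of the theorem, stated informally after Cybenko 1989 / Hornik 1991, says a big enough network can get uniformly close to any continuous function on a compact set. Note it says nothing about how many neurons you need or how to find the weights:)

```latex
% K \subset \mathbb{R}^n compact, f : K \to \mathbb{R} continuous,
% \sigma a fixed non-polynomial activation (e.g. a sigmoid):
\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N},\;
v_i, b_i \in \mathbb{R},\; w_i \in \mathbb{R}^n :\quad
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} v_i \,
\sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
```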
Can you explain how we can "see" by having a bunch of structures in our eyes get hit by photons, creating an electrochemical signal that travels along a biological wire to a protein-synthesizing machine made of a dense collection of chemical logic gates?
I mean, the answer is philosophical: once you've finished describing the literal physical processes as they exist in the world, you're on to topics like semiotics and "what is meaning".
The answer is obvious to anyone who thinks about it seriously.
We have a soul which experiences what is detected by our physical bodies. The brain cannot experience itself. Like you just said, how can we "see" if we're nothing more than an electrochemical signal? The obvious answer is that we cannot. We have no fucking idea how our consciousness works, but it is clear we are not just "electrochemical signals bro".
It's worse than that. We think we are entities distinguishable from others, but on a quantum scale it's quite obvious that that's a facade, and that you live within one organism that is trying its hardest to keep up the facade of different organisms and different souls/people in this universe.
In reality, everything's fucking Silent Hill. It's just that humans and animals have been built well enough not to make it seem that way. And we keep perpetuating that construct because we do not like facing this horrific reality.
But at the same time, almost like a paradox, singularity is just as much of a delusion as mutually exclusive souls.
I think this is because our linguistic constructs are simply not capable of grasping the thing-in-itself. We just don't know. We assume things and pretend we know, because that's the best we can do.
In fact it seems like it's the best God or the Universe can do.
Despite all this, I personally believe in vigilance against any cult-like talk when discussing this subject in these terms, because it's dangerous and leads to dumb assumptions, and those get people massacred.
Seeing the issues at Google made me realise that we should keep that kind of loaded language out of detailed discussion of AI. Perhaps we should borrow the more precise constructs of psychology and psychiatry instead.
>read string of data
>programmer programs that this equates to X characteristic or thing
>makes an algorithm to train itself how to do this repeatedly
>attenuate its results with people's interaction with it
>sample physical process using mouth
>pre-programmed instinct tells me it tastes "good" or "bad"
>this set of pre-programmed instructions are so arbitrary in their implementation that not every fleshbot, not even every fleshbot from the same factory, likes the taste of the same things
>parents tell me you can't eat that for breakfast, you can only eat it for dessert
>decide I can only eat it for dessert, anyone that eats it for breakfast is being weird
For me it was different.
>parents tell me you can't eat that for breakfast, you can only eat it for dessert
>decided, out of anger and frustration, to crack into the alcohol and liquor cabinet when they're not looking
I was a terrible child.
The neural network produces an output. The input is random noise, represented by random numbers. The network is then trained to adjust its numbers until the output looks like an example image. The example image is represented by an array of tokens, which again are just numbers encoding the picture's pixel stream.
If the network produces a good output, the weights (the connection strengths between the neurons that produced it) are kept. This has to be done with as much training data as possible.
Once this training is done, the network can be used either to detect material similar to the training data in an input (face recognition), or, if every training picture was provided with a caption, it can be combined with a language model like GPT to produce something similar to the training data.
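(The closest real-world relatives of what this post describes are GAN- and diffusion-style generators, and real systems learn a whole distribution rather than one picture. As a caricature of "change the numbers until the noise comes out looking like the example", here is a toy one-matrix "generator" fit to a single made-up 16-pixel target:)

```python
import numpy as np

rng = np.random.default_rng(2)

target = rng.uniform(size=16)             # stand-in "example image" (16 pixels)
z = rng.normal(size=8)                    # the random-noise input, held fixed
W = rng.normal(scale=0.1, size=(8, 16))   # the whole "generator": one matrix

lr = 0.1
for _ in range(300):
    out = z @ W                                     # generated "image"
    W -= lr * np.outer(z, out - target) * (2 / 16)  # gradient of mean sq. error

out = z @ W
print(np.abs(out - target).max())  # ~0: the noise now comes out as the example
```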
I don't have an answer, but you might enjoy learning about the whole vision pathway and visual cortex. The cortex itself is usually described as six layers, and the visual areas form a processing hierarchy (V1 → V2 → V4 → IT and so on).