How the fuck does an AI ''see'' by using pixels from a photo and linear algebra.

I don't understand.

  1. 1 year ago
    Anonymous

    Pattern recognition amped up to 11

    • 1 year ago
      MercurySession

      https://i.imgur.com/tN7fH6S.png

      cats inside computers

    • 1 year ago
      Anonymous

      >I don't understand
      AI engineers don't understand either, they just throw shit at the machine and let machine learning figure things out through reinforcement

      Like sure, I get how it parses the different hues of the pixels and creates vectors and shit to differentiate between close and faraway objects

      But how does it know what object is ''bad'' and what is ''good'', like if a value is supposed to be positive or negative? Do you just brute force it?

      • 1 year ago
        Anonymous

        >see
        >know
        >observe
        >jerk off
        You're ascribing qualities to a glorified spreadsheet that it does not have.
        The model doesn't "see" anything, nor does it "know" anything.
        It isn't conscious, and it doesn't do anything special.

        It simply applies a series of matrices over a set of pixels.
        The dimensions and values specified in the matrices are guided to a useful state through a process of trial and error, called training, or """learning""".

        Understand that in a sufficiently deep neural network, abstraction arises automatically.
        Any sufficiently complex abstraction can represent any computation.
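
        A minimal sketch of what "applies a series of matrices over a set of pixels" looks like, with a hand-picked edge-detection kernel standing in for values that training would otherwise find (all sizes and numbers here are illustrative, not any particular model):

        import numpy as np

        # a "photo" is just a grid of numbers: one brightness value per pixel
        image = np.random.rand(8, 8)

        # a 3x3 kernel; in a trained network these values come out of trial and
        # error, here they're a classic hand-made vertical-edge detector
        kernel = np.array([[-1.0, 0.0, 1.0],
                           [-2.0, 0.0, 2.0],
                           [-1.0, 0.0, 1.0]])

        # slide the matrix over the pixels: multiply element-wise, sum, repeat
        out = np.zeros((6, 6))
        for i in range(6):
            for j in range(6):
                out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

        # a large |out[i, j]| means "there is a vertical edge here"; stacking
        # many such layers is what lets deeper matrices respond to ears or faces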

        • 1 year ago
          Anonymous

          So it's just an algorithm that compares vectors? It can't actually judge what it's processing? Does this mean my AI wife can't actually love me?

          • 1 year ago
            Anonymous

            >compares vectors
            performs matrix multiplication
            >judge what it's processing
            no
            >can't actually love me
            it can make you think it loves you

            For an AI to be truly intelligent, it needs long-term memory, retaining what it's learned over a long period of time, while still being plastic enough to learn new concepts

            there are no well-known or accessible models which currently do this, but work is being done by IBM (TrueNorth), Intel (Loihi), and others
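
            To make "performs matrix multiplication" concrete, a toy sketch; the dimensions and random weights here are made up, and trained weights would replace them:

            import numpy as np

            x = np.random.rand(64)           # flattened image features
            W = np.random.rand(10, 64)       # weights a real model would learn
            b = np.random.rand(10)

            scores = W @ x + b               # one matrix multiply: ten "how cat-like?" numbers
            answer = int(np.argmax(scores))  # the "judgement" is just picking the biggest one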

            • 1 year ago
              Anonymous

              >For an AI to be truly intelligent it needs to fit some autistic fixated arbitrary schema humans have come up with in the absence of ai
              lol

              • 1 year ago
                Anonymous

                >arbitrary
                yeah good luck sounding smart with the memory of a goldfish and catastrophically forgetting how to walk after trying to learn how to run
                gay

            • 1 year ago
              Anonymous

              >For an AI to be truly intelligent, it needs long-term memory, retaining what it's learned over a long period of time, while still being plastic enough to learn new concepts

              How is this not literally already the case? The problem right now is we can't make memory hardware that's anywhere near as efficient as the human brain.

            • 1 year ago
              Anonymous

              >For an AI to be truly intelligent
              Define "intelligent".
              Now define "truly intelligent".
              What you mean is human-like intelligence, which we are far from. To have human-like capabilities, an AI would need intention. That means it would need to be able to come up with something by itself.

              All AI to this day needs human input to do anything. It is always reacting, never acting on its own. In terms of intention, we are not even at the level of lower mammals with our AI.

              • 1 year ago
                Anonymous

                I'm talking bare minimums
                nobody is going to claim they know exactly how intelligence works, else they'd have solved it by now
                I pointed out the three most pressing issues with the current generation of models
                there are probably dozens more

              • 1 year ago
                Anonymous

                >nobody is going to claim they know exactly how intelligence works, else they'd have solved it by now
                The problem is we don't even know what intelligence is. There is no single generally accepted definition of "intelligence". A toaster can be "intelligent" when it can detect how toasted the bread inside is. There are op-amp circuits that hold a servo motor at a set angle no matter how much pressure is applied to push it out of that angle; that can be called intelligent too, despite consisting of a single op-amp and a few resistors and capacitors.

              • 1 year ago
                Anonymous

                >we don't even know what intelligence is
                well not quite
                The industry is pretty set on the idea that the direction to go is ONE model which can be generalized to solve any given problem.
                It's a concrete goal with a concrete solution, and it's a good starting point.

                https://arxiv.org/abs/2208.11173
                In the introduction, and in "Roadmap to an AI Prototype", these guys outline their research goals: a model that is increasingly generalizable to many problems and increasingly independent from researcher interference.

              • 1 year ago
                Anonymous

                that is still ONE definition of intelligence, and far from what humans are capable of. That problem-solving intelligence still only reacts to prompts, and to create it one just needs to put enough effort into training and provide enough storage space to keep all the training data.

              • 1 year ago
                Anonymous

                We know, but everyone refuses to accept that there is more to us than the physical world allows us to see. You think we can detect everything in the universe with our limited bodies? Materialists are absolutely coping. None of their theories make sense. “the brain is all there is, no soul”. Meanwhile, they can’t even begin to explain how the brain would experience itself. What consciousness is. How any of it works AT ALL. Same goes for space. “LOL HERES OUR THEORY FOR SPACE”, “WAIT A MINUTE, THAT DOESNT WORK, GUESS WHAT, THERES THIS MATTER WE CANT SEE, MEASURE, DETECT, CAPTURE, OR INTERACT WITH, WE CALL IT DARK MATTER LOL AND IT MAKES THE MATHS WORK WITH OUR ORIGINAL THEORY LOL”.

                Do you people not see that we live in a universe about which we know absolutely nothing?

              • 1 year ago
                Anonymous

                >I can't explain something therefore gods

              • 1 year ago
                Anonymous

                I will not go down your schizo route, because that's not what I was talking about when I said there is no single generally accepted definition of "intelligence". An ant colony is intelligent; a human writing a book is intelligent too. A camera that detects light conditions and faces and adapts its settings accordingly is intelligent.
                If we want human-like intelligence from an AI, however, the AI needs to simulate basic needs, drives, and at least basic animal-like feelings: anger, fear, repulsion, attraction, attention, boredom.

            • 1 year ago
              Anonymous

              >For an AI to be truly intelligent, it needs to have both long term memory, be capable of retaining what it's learned over a long period of time, yet still be plastic enough to learn new concepts
              We're working on it, bro. We've made impressive strides lately, even if AGI turns out to be a long way off. I WANT TO HABEEB.

          • 1 year ago
            MercurySession

            >Does this mean my AI wife can't actually love me?
            >my brother in christ, "she" is a bunch of battlefield targeting algorithms given female form to reduce combat stress


            • 1 year ago
              Anonymous

              GATO GATO GATO CUTE

        • 1 year ago
          Anonymous

          Unlike the process by which babies learn about the world, right? homosexual.

          >So it's just an algorithm that compares vectors? It can't actually judge what it's processing? Does this mean my AI wife can't actually love me?

          You're just a series of chemical reactions; your entire life was started by events that preceded you and that shaped every facet of it. You're just a dumb mechanical process acting out a chain of reactions that is long from your perspective and instantaneous from the universe's.

          OR

          You're a human being and you feel love. And your robot wife feels love too.

          Both choices are exactly as correct as believing you have free will and that you are "conscious".

          • 1 year ago
            Anonymous

            yes, unlike, LARPer
            The part I skipped over, and which you apparently know nothing about, is that the design of the training process and the loss function is a core part of the model; if you'd ever written one, you'd know why the current generation of models isn't conscious
            moron

            >For an AI to be truly intelligent, it needs long-term memory, retaining what it's learned over a long period of time, while still being plastic enough to learn new concepts

            >How is this not literally already the case? The problem right now is we can't make memory hardware that's anywhere near as efficient as the human brain.

            Current AI techniques are more than enough to drum up investor money because they deliver actual, useful results.
            Superior memory hardware exists.
            They're called memristors.
            You can perform compute in-memory.
            All of this stuff is stuck in the lab: firstly because GPUs are more economical, and secondly because there aren't yet any use cases this more efficient hardware solves better than GPUs do.

      • 1 year ago
        Anonymous

        >But how does it know what object is ''bad'' and what is ''good'', like if a value is supposed to be positive or negative? Do you just brute force it?
        What do you think Captcha was? Nowadays they just pay annotators in low-wage countries $2 an hour to do it on a massive scale
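
        Roughly how those human labels answer the positive-or-negative question, as a hedged sketch of one supervised step in plain numpy (not any specific framework); the sign of the gradient, not brute force, tells each output which way to move:

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        scores = np.array([2.0, -1.0, 0.5])  # model outputs for [cat, dog, car]
        label = 0                            # a human (captcha solver) said "cat"

        probs = softmax(scores)
        loss = -np.log(probs[label])         # small when the model agrees with the human

        # gradient of the loss w.r.t. the scores: negative where an output
        # should grow, positive where it should shrink
        grad = probs.copy()
        grad[label] -= 1.0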

  2. 1 year ago
    Anonymous

    >I don't understand
    AI engineers don't understand either, they just throw shit at the machine and let machine learning figure things out through reinforcement

    • 1 year ago
      Anonymous

      Directed evolution (R)(TM) Pfizer

      • 1 year ago
        Anonymous

        No that's "gain-of-function" for protection racket purposes.

  3. 1 year ago
    Anonymous

    by seeing dis nuts

  4. 1 year ago
    Anonymous

    It doesn't "see" anything.

  5. 1 year ago
    Anonymous

    universal approximation theorem
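
    Stated informally (one common version; the precise hypotheses on the activation vary by paper): for any continuous function on a compact set, a one-hidden-layer network gets within any tolerance:

    \forall \varepsilon > 0 \;\; \exists N,\ \alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n : \quad \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^{\top} x + b_i) \Big| < \varepsilon

    where f : K \to \mathbb{R} is continuous, K \subset \mathbb{R}^n is compact, and \sigma is a fixed non-polynomial (e.g. sigmoidal) activation.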

  6. 1 year ago
    Anonymous

    Can you explain how we can "see" by having a bunch of structures in our eyes get hit by photons, creating an electro-chemical signal that travels along a biological wire to a protein-synthesizing machine made of a dense collection of chemical logic gates?

    I mean the answer is philosophical, once you've finished describing the literal physical processes as they exist in the world, you're on to topics like semiotics and "what is meaning".

    • 1 year ago
      Anonymous

      The answer is obvious to anyone who thinks about it seriously.

      We have a soul which experiences what is detected by our physical bodies. The brain cannot experience itself. Like you just said, how can we “see” if we’re nothing more than an electro-chemical signal? The obvious answer is that we cannot. We have no fricking idea how our consciousness works, but it is clear we are not just “electro-chemical signals bro”.

      • 1 year ago
        Anonymous

        It's worse than that. We think we are entities distinguishable from others, but on a quantum scale it's quite obvious that that's a facade: you're living as one organism that is trying its hardest to keep up the facade of different organisms and different souls/people in this universe.
        In reality, everything's fricking Silent Hill. It's just that humans and animals have been built well enough to not make it seem that way. And we're even perpetuating that construct because we do not like facing this horrific reality.

        But at the same time, almost like a paradox, singularity is just as much of a delusion as mutually exclusive souls.
        I think this is because our linguistic constructs are simply not capable of grasping the thing in itself. We just don't know. We assume things and pretend we know because that's the best we can do.
        In fact it seems like it's the best God or the Universe can do.

        Despite all this, I personally believe in vigilance against any cult-like talk when discussing this subject matter on these terms, because it's dangerous and leads to dumb assumptions that lead to people getting massacred.
        Seeing the issues with Google made me realise that we should keep that kind of language away from detailed talk about AI. Perhaps we should look to the detailed constructs of psychology and psychiatry instead.

  7. 1 year ago
    Anonymous

    >read string of data
    >programmer programs that this equates to X characteristic or thing
    >makes an algorithm to train itself how to do this repeatedly
    >attenuate its results with people's interaction with it
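
    A toy, fully made-up run of that pipeline (a perceptron over bag-of-words features; every name and sample below is hypothetical):

    import numpy as np

    vocab = ["whiskers", "stripes", "fetch", "bark"]

    def featurize(text):
        # >read string of data: turn it into a vector of 0s and 1s
        return np.array([float(w in text.split()) for w in vocab])

    # >programmer programs that this equates to X (1 = cat, 0 = dog)
    samples = [("whiskers stripes", 1), ("fetch bark", 0)]

    # >makes an algorithm to train itself how to do this repeatedly
    w = np.zeros(len(vocab))
    for _ in range(10):
        for text, label in samples:
            pred = 1 if w @ featurize(text) > 0 else 0
            w += (label - pred) * featurize(text)  # classic perceptron update

    # >attenuate its results with people's interaction with it:
    # a user correction is just one more labeled sample to train on
    samples.append(("whiskers fetch", 1))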

    • 1 year ago
      Anonymous

      >sample physical process using mouth
      >pre-programmed instinct tells me it tastes "good" or "bad"
      >this set of pre-programmed instructions is so arbitrary in its implementation that not every fleshbot, not even every fleshbot from the same factory, likes the taste of the same things
      >parents tell me you can't eat that for breakfast, you can only eat that for dessert
      >decide I can only eat that for dessert, anyone that eats it for breakfast is being weird

      • 1 year ago
        Anonymous

        For me it was different.
        >parents tell me you can't eat that for breakfast, you can only eat it for dessert
        >decided to crack into the alcohol and liquor cabinet when they're not looking out of anger and frustration

        I was a terrible child.

  8. 1 year ago
    Anonymous

    The neural network produces an output. The input is random noise, represented by random numbers. The network is then trained to change its numbers until the output looks like an example image. The example image is represented by an array of tokens, which again are just numbers representing the pixel stream of the picture.
    If the network produced a good output, the training result (the connection weights between the neurons that produced it) is saved. This has to be done with as much training data as possible.
    Once this training is done, it can be used either to detect material similar to the training data in an input (face recognition), or (after every training picture was provided with a caption) it can be combined with a language model like GPT to produce something similar to the training data.
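
    A minimal caricature of the loop described above, on the post's own terms (random-noise input, numbers nudged until the output matches the example image); the sizes and learning rate are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    noise = rng.random(8)    # the random-number input
    target = rng.random(4)   # pixel values of the "example image" (stand-in)
    W = rng.random((4, 8))   # connections between the neurons

    # training: change the numbers until the output looks like the example
    for step in range(2000):
        output = W @ noise
        W -= 0.01 * np.outer(output - target, noise)  # squared-error gradient step

    # "saving the training" = saving W, the tuned connection strengths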

  9. 1 year ago
    Anonymous

    I don't have an answer but you might enjoy learning about the whole vision pathway and visual cortex. I think it's broken down into 5 layers
