If AI is a joke and can only "guess the next word" like the skeptics say...

If AI is a joke and can only "guess the next word" like the skeptics say... then how come it can generate images from descriptions? You have to understand what the text means to do this and create a coherent image.

  1. 1 month ago
    Anonymous

    because the 'sceptics' are luddites, and because your mouth does the same thing as described; when the AI model generalises a concept well enough, it "understood" it in the same way your brain itself does

    • 1 month ago
      Anonymous

      >when the AI model generalises a concept well enough, it "understood" it in the same way your brain itself does
      that's what I'm thinking. current AI can describe what is happening in an image even better than a human in some instances; it can explain why an image is funny or why it's sad. that's much bigger than just guessing words.

      • 1 month ago
        Anonymous

        >current AI can describe what is happening in an image even better than a human in some instances
        It doesn't lol. But at least it knows roughly what kind of features to put on an image to make it look like something specific.

    • 1 month ago
      Anonymous

      >So you're saying humans are actually just a wetware version of "fancy autocomplete" molded by hundreds of millions of years of selective pressure, selecting for individuals who are better at predicting the next thing to happen thus ensuring their survival?
      >Always has been kiddo

    • 1 month ago
      Anonymous

      But I did eat breakfast

      • 1 month ago
        Anonymous

        ideed, the sceptics have a hard time comparing different things in their mind and realizing the similarities

        • 1 month ago
          Anonymous

          *indeed

  2. 1 month ago
    Anonymous

    beep boop

  3. 1 month ago
    Anonymous

    >skeptics
    lol dude that's not how this works. read a paper.

  4. 1 month ago
    Anonymous

    >you have to understand what the text means
    no, turns out you only need to generate noise and progressively improve it by asking "how likely is it that the patterns in this image match this text based on the image-text matches in the training data" multiple times
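
    roughly, a minimal sketch of that loop. everything here is a stand-in: a real match_score() would be a model trained on those image-text pairs (CLIP-style); this stub just rewards smooth images so the example is self-contained and runnable:

        import numpy as np

        rng = np.random.default_rng(0)

        def match_score(image: np.ndarray, text: str) -> float:
            # ASSUMPTION: a real system scores the image against the text with
            # a model learned from image-text training data; this placeholder
            # only rewards smoothness so the loop below does something visible
            return -float(np.mean(np.abs(np.diff(image, axis=0)))
                          + np.mean(np.abs(np.diff(image, axis=1))))

        image = rng.normal(size=(32, 32))          # start from pure noise
        for step in range(200):
            proposal = image + 0.1 * rng.normal(size=image.shape)
            if match_score(proposal, "a cat") > match_score(image, "a cat"):
                image = proposal                   # keep edits the scorer prefers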

    • 1 month ago
      Anonymous

      and after that training, the model, like your brain, generalises what the color green is to the same level as your brain

      these models are trained to associate certain shapes and colours with words. it doesn't know what a cat is, it only "knows" that the word cat is associated with an arrangement of pixels that ends up looking like a cat. the more training it has the more accurate the cat looks. even saying it "knows" is a stretch because it's really just calculating the result from an input.

      >because it's really just calculating the result from an input.
      just like your brain, when broken down to the chemical level

      • 1 month ago
        Anonymous

        yes, ML, like the brain, is very good at categorization but i don't think anybody disputes that

        • 1 month ago
          Anonymous

          and when you categorize basically everything there is, you ultimately end up with something that for all intents and purposes can replace a human in any task, fitting most definitions of intelligence you can hold in good faith

          • 1 month ago
            Anonymous

            at a point where robots can have useful real-time experiences, sure, but i don't think we're anywhere close to that

            • 1 month ago
              Anonymous

              >useful real-time experiences
              define one and where it would be required for a job to be done

              • 1 month ago
                Anonymous

                >dang, there's a tree on the trail from the last storm, it's sure nice that i know from continuously updating my model that my current vehicle is still suitable for safely traversing the space beside it considering its apparent condition and the amount of rain today

              • 1 month ago
                Anonymous

                an AI seeing such a thing would know what to do because it has already generalised whether an obstacle matters or not based on its size/shape and other things

                also it gets trained in a simulated environment so it will probably see something similar at some point

                also this isn't an example of something impossible to do, it just gets better and better with model quality over time. but it's an imperfect problem since there is no one solution even for humans, which is the entire reason why AI is being used here instead of regular programming...

              • 1 month ago
                Anonymous

                the point is that it can't adapt because it's not able to assess new information (like the current vehicle or the rain in the example) and it doesn't reflect (for example, about the effect of rain on different types of ground, or the interaction of certain loads on certain tires with a certain ground condition)
                humans do this all the time

              • 1 month ago
                Anonymous

                >it's not able to assess new information
                it can, it's just not as efficient as the human brain yet, so it takes a lot of power to do so. but even right now, for tech like eye tracking in VR, you have the cameras track only movement and nothing else in order to track the eyes in real time

                >and it doesn't reflect
                it can. you can already easily set up any LLM to create a response, then ask it to look it over, notice any problems, and improve the answer. there are also papers that train models with pause tokens and similar, which give the model dynamic per-token computation time
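
                a minimal sketch of that reflect-and-revise setup, assuming a generic llm(prompt) placeholder you'd swap for a real model call:

                    # ASSUMPTION: llm() stands in for any chat-completion call
                    # that takes a prompt string and returns a reply string
                    def llm(prompt: str) -> str:
                        raise NotImplementedError("plug in a real model call")

                    def answer_with_reflection(question: str, rounds: int = 2) -> str:
                        answer = llm(question)
                        for _ in range(rounds):
                            critique = llm(f"Question: {question}\nAnswer: {answer}\n"
                                           "List any mistakes or gaps in this answer.")
                            answer = llm(f"Question: {question}\nAnswer: {answer}\n"
                                         f"Critique: {critique}\nWrite an improved answer.")
                        return answer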

              • 1 month ago
                Anonymous

                this is all aside from the fact that you can just have a simple recursive process where you let the model go over its answer until it's satisfied enough to output it, just like a human, without any fancy things like tree-of-thought or chain-of-thought reasoning that also already exist

                people forget that AI being this good is merely a couple of years in the making, meanwhile the human brain evolved for literal millions of years and takes years of your life of being 'trained' before you can do even basic cognitive things. that's aside from the fact that your brain needs 6-8h of sleep every single fricking day to bake in new information properly, lmao.

              • 1 month ago
                Anonymous

                It's 100% technically capable of this. They just keep this function turned off or only used intermittently under strict supervision and control because otherwise the ai becomes very racist very fast.

    • 1 month ago
      Anonymous

      that's no less impressive, it's like describing thought itself.

      • 1 month ago
        Anonymous

        imagine a denoiser. you feed it a picture of a cat with 10% noise, tell it it has 10% noise and to remove the noise, and it can.
        feed it 100% noise, tell it it's 100% noise and to remove it to reveal the picture of the cat, and it can.
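
        a toy version of that loop, just to show the mechanics. ASSUMPTION: denoise() stands in for a trained network; this stub cheats by already knowing the target image, purely so the example runs end to end:

            import numpy as np

            rng = np.random.default_rng(0)
            cat = rng.uniform(size=(8, 8))       # pretend this is "the cat"

            def denoise(image: np.ndarray, noise_level: float) -> np.ndarray:
                # a real model predicts the clean image from the noisy input
                # plus the stated noise level; this stub blends toward the
                # known target in proportion to that level
                return (1 - noise_level) * image + noise_level * cat

            image = rng.normal(size=(8, 8))              # 100% noise
            for noise_level in (1.0, 0.5, 0.25, 0.1):    # told the level each step
                image = denoise(image, noise_level)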

  5. 1 month ago
    Anonymous

    these models are trained to associate certain shapes and colours with words. it doesn't know what a cat is, it only "knows" that the word cat is associated with an arrangement of pixels that ends up looking like a cat. the more training it has the more accurate the cat looks. even saying it "knows" is a stretch because it's really just calculating the result from an input.

    • 1 month ago
      Anonymous

      >it doesn't know what a cat is, it only "knows" that the word cat is associated with an arrangement of pixels that ends up looking like a cat.
      you could say the same about a human. that's how we identify things based on data input, if we couldn't see the shape of a cat, hear the cat, smell the cat or touch the cat then we wouldn't know it's a cat either.

    • 1 month ago
      Anonymous

      Semiotics. You don't know what a cat is either. All you know is just a bunch of traits, which are just valid patterns as taught to you by study and observation, collected into a category.

      • 1 month ago
        Anonymous

        >it doesn't know what a cat is, it only "knows" that the word cat is associated with an arrangement of pixels that ends up looking like a cat.
        you could say the same about a human. that's how we identify things based on data input, if we couldn't see the shape of a cat, hear the cat, smell the cat or touch the cat then we wouldn't know it's a cat either.

        and after that training, the model, like your brain, generalises what the color green is to the same level as your brain
        [...]
        >because it's really just calculating the result from an input.
        just like your brain, when broken down to the chemical level

        ngl i have no good response to this, i just know god is real and humans are more than highly advanced machines.

        • 1 month ago
          Anonymous

          >humans are more than highly advanced machines.
          Why?

          • 1 month ago
            Anonymous

            ego

          • 1 month ago
            Anonymous

            because we still don't understand what consciousness is and where it comes from. if science still can't figure out something that every human being universally experiences then it very likely never will be understood and is therefore something beyond the material world.

            • 1 month ago
              Anonymous

              Consciousness could simply be a state that emerges once an organism's brain is sufficiently large and/or complex. To immediately ascribe it to something supernatural because we don't have an explanation for it is stupid

              • 1 month ago
                Anonymous

                and yet we have zero understanding of that "state" despite knowing all about the building blocks of the universe.

              • 1 month ago
                Anonymous

                >knowing all about the building blocks of the universe
                lmao

              • 1 month ago
                Anonymous

                Our lack of understanding still doesn't mean it automatically defers to a supernatural explanation

              • 1 month ago
                Anonymous

                at this point it kinda does. what more could we need to learn about the universe to figure out consciousness?

              • 1 month ago
                Anonymous

                >at this point it kinda does.
                It really doesn't. The most likely explanation is probably very simple and boring, e.g. that consciousness simply emerges in organisms with sufficiently complex brain structures.

              • 1 month ago
                Anonymous

                and that emergence was created randomly with no intelligent design involved? that seems more supernatural than the idea that there's something outside the universe playing a role.

              • 1 month ago
                Anonymous

                >and that emergence was created randomly with no intelligent design involved?
                Yes, though the phrasing "created randomly" makes it sound like it appeared out of nothing, whereas it was probably a very slow and gradual process.
                >that seems more supernatural than the idea that there's something outside the universe playing a role.
                Again, it doesn't.

          • 1 month ago
            Anonymous

            the scientific work of Dr. Dean Radin and Dr. Rupert Sheldrake points towards it.
            and all the NDE accounts.

        • 1 month ago
          Anonymous

          Even if that's true, there's no reason machines can't replicate the material realm capabilities of a human

  6. 1 month ago
    Anonymous

    So it can guess the next word AND generate pictures. Define intelligence.

  7. 1 month ago
    Anonymous

    They create a simulated world to create text.

  8. 1 month ago
    Anonymous

    That's a different AI

  9. 1 month ago
    Anonymous

    >when the AI model generalises a concept well enough, it "understood" it in the same way your brain itself does
    wrong moron. humans are divine creatures and their abilities of observation are non-mechanical. electrosand will never be conscious.

    • 1 month ago
      Anonymous

      meant for

      because the 'sceptics' are luddites, and because your mouth does the same thing as described; when the AI model generalises a concept well enough, it "understood" it in the same way your brain itself does

    • 1 month ago
      Anonymous

      >He hasn't felt flashes of the divine spark in interactions with a sufficiently complex model.

  10. 1 month ago
    Anonymous

    It can do that? Then it won't be difficult for you to generate an archer properly holding a bow and arrow. You can do this, right anon?

  11. 1 month ago
    Anonymous

    >then how come it can generate images from descriptions? You have to understand what the text means to do this and create a coherent image.
    It doesn't make a coherent image lol
    ai containment board NOW

  12. 1 month ago
    Anonymous

    >If AI is a joke and can only "guess the next word" like the skeptics say...
    That's GPT.

    >then how come it can generate images from descriptions
    That's discussion.

    You are mixing two completely different AI architectures.

    >you have to understand what the text means to do this and create a coherent image.
    Yes.

    • 1 month ago
      Anonymous

      >discussion
      Diffusion*

  13. 1 month ago
    Anonymous

    No you don't, you just have to statistically associate text strings with image features.
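
    In code terms, that statistical association amounts to embedding text and images into a shared vector space and scoring pairs by similarity. A minimal sketch, where both embed_* functions are stand-ins for trained CLIP-style encoders:

        import numpy as np

        rng = np.random.default_rng(0)

        def embed_text(text: str) -> np.ndarray:
            # ASSUMPTION: placeholder for a trained text encoder
            rs = np.random.default_rng(abs(hash(text)) % 2**32)
            return rs.normal(size=64)

        def embed_image(pixels: np.ndarray) -> np.ndarray:
            # ASSUMPTION: placeholder for a trained image encoder
            return np.resize(pixels.ravel(), 64).astype(float)

        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        # higher score = the text string and the image features "associate"
        score = cosine(embed_text("a cat"), embed_image(rng.uniform(size=(8, 8))))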

  14. 1 month ago
    Anonymous

    Can you prompt these hand signs based on a description of the finger positions if they are not in the training set? Understanding would require generalization.

    • 1 month ago
      Anonymous

      can you imagine a color that you've never seen?

      >Understanding would require generalization.
      yes, that's why i can tell an image generator to create a "green squirrel at the bottom of the ocean" and it will generate it despite those images not existing in the dataset

      • 1 month ago
        Anonymous

        An interpolation of what it has seen suffices to get what you want. But there is no way for you to prompt the gestures if it doesn't really understand fingers to begin with.

        Say it has only seen tons of open hands and closed fists: how do you prompt it to come up with the horns? You can't. The interpolation doesn't exist.

        • 1 month ago
          Anonymous

          it's trained on everything in this world and it can already create anything in this world with a reasonably sized model, even without any special tools like LLMs or regional prompting.

          you are imagining fictional scenarios in which it can't do things to prove that it can't do something.

  15. 1 month ago
    Anonymous

    it's literally only doing operations on matrices

  16. 1 month ago
    Anonymous

    GPTs convert words into an encoding. This encoding captures both the words themselves and the order of the words. The weights of the language model determine how these words relate to one another. The encoding is then used to create a decoded output: it is passed through the weights of a decoder, and that output is your result. Thus you can create an encoder for language and then create weights for images by strongly associating those images with words. If you train the decoder on real-world images, it will begin to recognize what a real-world image looks like, and it can then combine that with the encoded value to generate a best-fit image based on your initial input.
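
    A sketch of the "encodes the words and also the order of the words" part, using the standard sinusoidal positional encoding added to token vectors (randomly initialized here; learned in practice):

        import numpy as np

        vocab = {"the": 0, "cat": 1, "sat": 2}
        d_model = 16
        rng = np.random.default_rng(0)
        token_table = rng.normal(size=(len(vocab), d_model))  # learned in practice

        def positional_encoding(num_positions: int, d: int) -> np.ndarray:
            pos = np.arange(num_positions)[:, None]
            i = np.arange(d // 2)[None, :]
            angles = pos / (10000 ** (2 * i / d))
            enc = np.zeros((num_positions, d))
            enc[:, 0::2] = np.sin(angles)
            enc[:, 1::2] = np.cos(angles)
            return enc

        ids = np.array([vocab[t] for t in ["the", "cat", "sat"]])
        encoded = token_table[ids] + positional_encoding(len(ids), d_model)
        # "encoded" now carries both word identity and word order; the model's
        # weights operate on this representation to produce the decoded output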

  17. 1 month ago
    Anonymous

    Do I know what 'encoding' means? No. Your point is moot.

  18. 1 month ago
    Anonymous

    Bro, steam trains are a joke. Lol. Wtf is some smoke gonna make things go fast?? Horses are the real shit. They're smart, eat anything and are a dime a dozen. We've used horses for 6000 years. Wtf you need steam engines for? It's a fad for fat frickers who can't ride a horse I tell ya. Fad!

  19. 1 month ago
    Anonymous

    Please read about how Transformers work.

  20. 1 month ago
    Anonymous

    Nooo it doesn't understand anything it can't learn it just got trained to associate tokens please bro it's not the same
