Kneel to DALLE3

It's over, sdcels.


  1. 8 months ago
    Anonymous

    >The exact same picture, but from a different point of view
    ...ack!

  2. 8 months ago
    Anonymous

    You obviously haven't spent a lot of time looking at art because the combination of overwrought shading + awful anatomy is as glaringly awful as it is woefully common.

    Also that joke's not funny.

    • 8 months ago
      Anonymous

      Not the anatomy of the anthropomorphic avocado and sentient spoon!

      • 8 months ago
        Anonymous

        Ask me how I can tell you're a weeb.

        • 8 months ago
          Anonymous

          I won't.

        • 8 months ago
          Anonymous

          because he's on BOT?

    • 8 months ago
      Anonymous

      horrible post, hope it's bait

  3. 8 months ago
    Anonymous

    still awful at hands
    >DALLE4 WILL FIX IT, 2 MORE WEEKS

    • 8 months ago
      Anonymous

      midjourney already solved it, why can't OpenAI?

  4. 8 months ago
    Anonymous

    It's of no use to me if I can't run it locally.

  5. 8 months ago
    Anonymous

    Yeah, but I only use SD for wanks.

  6. 8 months ago
    Anonymous

    That's not how an avocado pit works, what the frick is that brown half-sphere?

    • 8 months ago
      Anonymous

      Try feeding that prompt verbatim into Stable Diffusion, Midjourney, or any other competitor and see if you can get something better.

    • 8 months ago
      Anonymous

      >GRRRRRRRR WHY IS IT NOT EXACT!!!!!!!!!!!
      ur skin is brown

  7. 8 months ago
    Anonymous

    Text is nice, but open source is not far behind on that. Hands, feet, and dynamic poses with correct limb bends are very hard for any current system. In SD you need a bunch of extra steps to produce dynamic poses with correct hands and feet, and even then it's hardly perfect and gets even harder for non-humanoids.

    I still think real artists would use SD, because it gives more control, lets you combine it with your own handmade art, and is completely free and unmonitored.
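
    For the "bunch of extra steps", the standard route is pose conditioning. A minimal sketch, assuming the Hugging Face diffusers ControlNet integration and the public lllyasviel/sd-controlnet-openpose checkpoint; the pose reference file is a hypothetical placeholder:

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    # hypothetical OpenPose skeleton image supplying the target pose
    pose_image = load_image("pose_reference.png")

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # the skeleton pins the limb positions; the prompt fills in everything else
    image = pipe(
        "a knight mid-swing, dynamic pose, detailed hands",
        image=pose_image,
        num_inference_steps=30,
    ).images[0]
    image.save("pose_locked.png")

    Even with the pose locked, hands often need an extra inpainting pass on top, which is exactly the multi-step hassle described above.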

  8. 8 months ago
    Anonymous

    In terms of graphical fidelity, it's worse than Midjourney.

    However, in terms of interpreting prompts correctly it's better, and it might be the end of illustrators at this point.

  9. 8 months ago
    Anonymous

    OpenAI lies about everything and hardcodes their models to give fixed results. We saw the same thing happen with DALLE2, where supposedly it could do all this shit but it turned out to be ass.

  10. 8 months ago
    Anonymous

    Focus on Safety Improvements

    Preventing Explicit Content: OpenAI claims robust new safeguards against inappropriate images.
    Input Classifiers and Blocklists: Tools identify risky words and block public figures, so nothing new beyond the usual CGPT censorship.
    Lawsuits Over Copying: DALL-E competitors faced suits alleging use of copyrighted art.
    Opt-Out for Artists' Work: Artists can now request their art be blocked from AI copying.
    Avoiding Artist Mimicry: DALL-E 3 won't recreate specific artists' styles when named.

    • 8 months ago
      Anonymous

      A focus on safety
      Like previous versions, we’ve taken steps to limit DALL·E 3’s ability to generate violent, adult, or hateful content.

      Preventing harmful generations
      DALL·E 3 has mitigations to decline requests that ask for a public figure by name. We improved safety performance in risk areas like generation of public figures and harmful biases related to visual over/under-representation, in partnership with red teamers—domain experts who stress-test the model—to help inform our risk assessment and mitigation efforts in areas like propaganda and misinformation.
      Internal testing
      We’re also researching the best ways to help people identify when an image was created with AI. We’re experimenting with a provenance classifier—a new internal tool that can help us identify whether or not an image was generated by DALL·E 3—and hope to use this tool to better understand the ways generated images might be used. We’ll share more soon.
      >forced diversity hidden prompt
      >tagged images
      lmaooo

      • 8 months ago
        Anonymous

        >Opt-Out for Artists' Work: Artists can now request their art be blocked from AI copying.
        >Avoiding Artist Mimicry: DALL-E 3 won't recreate specific artists' styles when named
        Joke's on artistcels, OpenAI more than likely trained their model by distilling images from other AI models.
        So even if every single one of them asked to be opted out, it would have zero impact on what they did, lol

        • 8 months ago
          Anonymous

          >won't recreate specific artist styles
          gay as frick

    • 8 months ago
      Anonymous

      It's crazy how 95% of what people will use art engines for (porn, copying artists, and celebs) is exactly what they censored. Nobody gives a shit about a stock photo generator, even if it does have great tech.

      • 8 months ago
        Anonymous

        >copying artists
        As if you can't just describe a style that emulates an artist. Just wait until they prohibit describing a style that looks like a living artist lmao

  11. 8 months ago
    Anonymous

    Come back when it's gening animations

    • 8 months ago
      Anonymous

      not bad rotoscoping effect, hm hm.

      • 8 months ago
        Anonymous

        https://i.imgur.com/dgkXHG5.gif

        >Come back when it's gening animations
        >What kind of magic was used to reach that temporal coherence?
        It's the new motion module/model for Stable Diffusion. Look up AnimateDiff; there's a GitHub extension.
        It can animate anything, though the quality heavily depends on what you're generating.
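
        For the curious, a minimal sketch of driving AnimateDiff through the Hugging Face diffusers port; the checkpoint names are common examples, not requirements:

        import torch
        from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
        from diffusers.utils import export_to_gif

        # the motion module adds temporal layers on top of an SD 1.5 checkpoint
        adapter = MotionAdapter.from_pretrained(
            "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
        pipe = AnimateDiffPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",  # any SD 1.5 model should work
            motion_adapter=adapter,
            torch_dtype=torch.float16,
        ).to("cuda")
        pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

        # 16 frames is the window the motion module was trained on
        frames = pipe(
            "a girl dancing, anime style",
            num_frames=16,
            guidance_scale=7.5,
            num_inference_steps=25,
        ).frames[0]
        export_to_gif(frames, "animation.gif")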

        • 8 months ago
          Anonymous

          And how does it account for the spectral dispersion of colour frequencies that is often observed in motion pictures recorded by a camera?
          We all know it would look shit without accounting for it.

          • 8 months ago
            Anonymous

            Huh? Why does it need chromatic aberration?

            • 8 months ago
              Anonymous

              If the colour at each point can be seen as a 3d vector, then the whole animation can be viewed as a vector field, with time as the variable.
              By taking the divergence of the vector field we will obtain a function representing the spectral dispersion at every point in the image.
              We can superimpose the divergence of the original motion picture and the animation to get the natural spectral dispersion back.
              Otherwise it doesn't look right.

              For all of you non-cinematographers out there: spectral dispersion is the more general term for the chromatic aberration effect.
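
              Taking that construction at face value, a minimal numpy sketch: treat the RGB triple at each pixel as a 3-vector over the (x, y, t) volume and sum one partial derivative per channel. Whether the result has anything to do with spectral dispersion is this post's claim, not an established fact:

              import numpy as np

              def colour_divergence(frames: np.ndarray) -> np.ndarray:
                  """frames: float array of shape (T, H, W, 3), values in [0, 1]."""
                  dr_dx = np.gradient(frames[..., 0], axis=2)  # R along x (width)
                  dg_dy = np.gradient(frames[..., 1], axis=1)  # G along y (height)
                  db_dt = np.gradient(frames[..., 2], axis=0)  # B along t (time)
                  return dr_dx + dg_dy + db_dt  # scalar field of shape (T, H, W)

              # toy usage on random-noise "footage"
              video = np.random.rand(16, 64, 64, 3)
              print(colour_divergence(video).shape)  # (16, 64, 64)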

              • 8 months ago
                Anonymous

                It doesn't need that. We hate chromatic aberration.

              • 8 months ago
                Anonymous

                get a load of this troon wanting everything to be perfect.

                You DO know the eye has its own lens, right?
                That means there is natural spectral dispersion, and (You) are DENYING it to me every time you post one of your unsightly images.

                It may make your malformed plastic hole look clearer but it won't ever make you more of a woman.
                You will never be a woman.

              • 8 months ago
                Anonymous

                Take a pill, I am a biological man.
                You are a dumbass wanting things to be imperfect.

              • 8 months ago
                Anonymous

                If the eye has natural spectral dispersion, then why do we have to add it to digital animation? lol

              • 8 months ago
                Anonymous

                You will never be mentally well

              • 8 months ago
                Anonymous

                Shut the frick up, homosexual.

    • 8 months ago
      Anonymous

      What kind of magic was used to reach that temporal coherence?

      • 8 months ago
        Anonymous

        Something something lowpass filter something

  12. 8 months ago
    Anonymous

    This is actually great news. Hopefully someone makes a massive dataset out of it, like people did with GPT-4 when fine-tuning LLaMA; with this we could fine-tune SDXL to actually be compliant with prompts and generate text accurately.
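
    A minimal sketch of the dataset-building step, assuming the openai Python client (v1 API), paid access to the dall-e-3 model, and a hypothetical prompts.txt with one prompt per line:

    import json
    import os
    import urllib.request

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    os.makedirs("dataset", exist_ok=True)

    records = []
    for i, prompt in enumerate(open("prompts.txt")):
        prompt = prompt.strip()
        resp = client.images.generate(
            model="dall-e-3", prompt=prompt, size="1024x1024", n=1)
        urllib.request.urlretrieve(resp.data[0].url, f"dataset/{i:06d}.png")
        # DALL-E 3 rewrites prompts server-side; the revised prompt doubles
        # as a richer caption for the image/text pairs
        records.append({"file": f"{i:06d}.png",
                        "caption": resp.data[0].revised_prompt or prompt})

    with open("dataset/metadata.json", "w") as f:
        json.dump(records, f, indent=2)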

    • 8 months ago
      Anonymous

      It wouldn't be good, because of
      >Focus on Safety Improvements
      and
      >A focus on safety
      (both quoted above: blocklists, artist opt-outs, style refusals, and the forced-diversity hidden prompt).

      • 8 months ago
        Anonymous

        What does this have to do with it? GPT-4 is also heavily censored, but that did not stop people from creating datasets from it and removing the woke responses to generate uncensored models.

        As long as you are not generating pornographic/lewd images, this would be just fine to build a dataset on and make a more prompt-compliant version of SD; it would actually generate better coomshit because of emergent abilities.

        • 8 months ago
          Anonymous

          With that said, I doubt a fine-tune would be better than OpenAI's model in terms of following prompts, since they work with bazillions of parameters, but this gives me hope people can at least make things a little better now that we can generate accurately captioned image datasets en masse. Stability themselves would definitely take advantage of this.

        • 8 months ago
          Anonymous

          Their terms are closer to "no fun allowed". It's only useful for corpo shit. Further, they like to insert creative decisions into your prompt, such as forced diversity. Most storytelling will be in violation of their terms anyway; Star Wars would be too violent and sexual for OpenAI. Not to mention they continually restrict things based on people's hand-wringing about AI, so it's only a matter of time until you're not even allowed to illustrate a tame comic with it because muh jobs.

  13. 8 months ago
    Anonymous

    homie holding the clipboard the wrong way

  14. 8 months ago
    Anonymous

    For context, this is what Midjourney comes up with given the same prompt as the OP image

    • 8 months ago
      Anonymous

      so bad

    • 8 months ago
      Anonymous

      Unfortunately OpenAI probably won't release their model details this time, so it may be a while before open source can replicate the results.

    • 8 months ago
      Anonymous

      Obviously OpenAI chose a specific prompt that worked well. We'll see how good it really is soon.

  15. 8 months ago
    Anonymous

    I think people underestimate just how shitty AI art tools are when you try to do something even slightly outside their training sets.

    What they do is cram 18 billion hot anime girls into the training set and maximize every image for pretty lighting and that Artstation look. Also, most prompters aren't creative, so they don't put much stress on any model's interpretation of the world. In fact, I imagine most prompts are a strict subset of the training images.

    DALLE was considered shitty because it was never a major push for OpenAI, and they cared more about the image fitting the prompt than looking pretty. Ultimately their interest is machine intelligence, not the immediate economic utility of their tools. Everything they release is just a means of training the next model better. ChatGPT was only released because they realized RLHF could make huge inscrutable models more effective.

    Stable Diffusion hasn't gotten better in 6 months. Architecturally it hasn't evolved and is just pretty anime girls. No one is impressed anymore that you painted over a cute girl dancing and gave her a chibi anime AI face with inconsistent hair in every frame (despite the huge efforts to avoid that).

    • 8 months ago
      Anonymous

      >I think people underestimate just how shitty AI art tools are when you try to do something even slightly outside their training sets
      That's what fine-tuning is for. If you go the extra mile, you can teach these models pretty much anything using a simple consumer GPU.
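
      Once an adapter has been trained (the diffusers DreamBooth-LoRA example script covers that part on consumer hardware), using it is a two-liner. A minimal sketch; ./my_concept_lora is a hypothetical local adapter path:

      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
      pipe.load_lora_weights("./my_concept_lora")  # hypothetical adapter path

      # "sks" is the rare placeholder token conventionally bound to the concept
      image = pipe("a photo of sks creature in a rainforest",
                   num_inference_steps=30).images[0]
      image.save("finetuned_concept.png")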

  16. 8 months ago
    Anonymous

    >Like previous versions, we’ve taken steps to limit DALL·E 3’s ability to generate violent, adult, or hateful content.
    >DALL·E 3 is designed to decline requests that ask for an image in the style of a living artist.
    Not much to get excited about here. I'd rather have a less capable, but flexible model, than an AI censor breathing down my neck and sucking the fun out of generative art.

    • 8 months ago
      Anonymous

      At least they were smart enough to make it only living artists this time. DALLE2 is intentionally moronic about that; there's a ton of dead, famous, public-domain artists it pretends not to know about, like Ivan Kramskoi etc.

  17. 8 months ago
    Anonymous

    Why do AI images tend towards this hyper contrasted tacky HDR look?

  18. 8 months ago
    Anonymous

    Nope. It means that Stable Diffusion fine-tuned with open-source LLaMA text understanding is the next step, since this new DALLE release was trained with ChatGPT.
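
    A speculative sketch of that idea: swap the prompt path so an open LLM produces the embeddings and a projection maps them into the UNet's cross-attention width. The model names and the projection layer are assumptions, and as written the projection is untrained, which is exactly the fine-tuning job being proposed:

    import torch
    from torch import nn
    from diffusers import StableDiffusionPipeline
    from transformers import AutoModel, AutoTokenizer

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
    tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
    llm = AutoModel.from_pretrained(
        "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16).to("cuda")

    # map LLaMA's 4096-dim hidden states into SD 1.5's 768-dim text-embedding
    # space; this linear layer is the piece that would need training
    proj = nn.Linear(4096, 768).half().to("cuda")

    ids = tok("an avocado armchair", return_tensors="pt").input_ids.to("cuda")
    with torch.no_grad():
        prompt_embeds = proj(llm(ids).last_hidden_state)  # (1, seq_len, 768)

    image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=30).images[0]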

  19. 8 months ago
    Anonymous

    non-free pig disgusting

  20. 8 months ago
    Anonymous

    don't care unless I can run it locally
