OpenAI confirms they change prompts

>DALL-E invisibly inserts phrases like “Black man” and “Asian woman” into user prompts that do not specify gender or ethnicity in order to nudge the system away from generating images of white people. (OpenAI confirmed to The Verge that it uses this method.)
>OpenAI confirmed
>confirmed
>confirmed
https://www.theverge.com/2022/9/28/23376328/ai-art-image-generator-dall-e-access-waitlist-scrapped
Have they ever explicitly confirmed that they change prompts before?
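
If that's what they're doing, the mechanism is probably dead simple. Purely a guess at the kind of thing their backend might do — nothing here is real OpenAI code, every name and the term list are made up:

import random

# speculative sketch of the confirmed behavior: if the prompt mentions a
# person but no race/gender, splice in a demographic phrase.
# the TERMS list and the "person" trigger are invented for illustration.
TERMS = ["Black man", "Asian woman", "Hispanic man", "white woman"]

def rewrite(user_prompt: str) -> str:
    lowered = user_prompt.lower()
    # if the user already specified a demographic, leave the prompt alone
    already_specified = any(term.lower() in lowered for term in TERMS)
    if "person" in lowered and not already_specified:
        # splice a random demographic phrase over the bare "person"
        # (ignoring article agreement for simplicity)
        return user_prompt.replace("person", random.choice(TERMS), 1)
    return user_prompt

print(rewrite("photo of a person holding a sign"))
# -> e.g. "photo of a Black man holding a sign"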

  1. 2 years ago
    Anonymous

    >muh ESG score for AI
    is THIS why israelites are terrified of Free Access To AI?

  2. 2 years ago
    Anonymous

    Why should I care about that again?

  3. 2 years ago
    Anonymous

    why would you generate non-asian women anyway?
    do u also watch anime dubbed?

    • 2 years ago
      Anonymous

      only when it’s hellsing ova

  4. 2 years ago
    Anonymous

    of course a model built on western art is gonna have more pasty white people than if it was built from art of other cultures. just include that art too and everyone wins.

    • 2 years ago
      Anonymous

      >just include that art too
      but what if they didn't make any art...

  5. 2 years ago
    Anonymous

    >don't specify a gender or race
    >gets mad because they don't show only white men
    Does poltards really?

  6. 2 years ago
    Anonymous

    >create an AI
    >make it moronic

  7. 2 years ago
    Anonymous

    Woke AI.

    Confirmed never using anything from OpenAI

  8. 2 years ago
    Anonymous

    Just leave it to an RNG.

  9. 2 years ago
    Anonymous

    It was obvious already though. If you used a prompt like
    >person wearing a shirt that says
    All of a sudden you'd get asians with a shirt that says "asian", etc. for the other races/genders.

  10. 2 years ago
    Anonymous

    All this work put into stuff like that and they still can't be bothered to implement aspect ratios other than 1:1. Nice.

  11. 2 years ago
    Anonymous

    I'm not sure this is actually the case.
    It needs more testing. It's unclear whether it's the training data that yields these results, or additional prompting from OpenAI that makes it this way.

    Remember, any prompt looks like this:
    [header prompt][user prompt][footer prompt]

    If you get OpenAI to output its prompt in your result one way or another, you'll see for yourself whether there was additional prompting you weren't aware of. If you can't see anything added, it's the training dataset.
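
    Something like this, in other words. The header/footer contents here are invented; it's just to show the structure and why a leaky prompt exposes them:

    # hypothetical assembly step; the header/footer strings are made up
    HEADER = "A detailed, high quality image of "   # invented
    FOOTER = ", diverse group of people"            # invented

    def build_prompt(user_prompt: str) -> str:
        return HEADER + user_prompt + FOOTER

    # a prompt that renders its own text makes the hidden parts visible:
    print(build_prompt("person wearing a shirt that says"))
    # -> "A detailed, high quality image of person wearing a shirt
    #     that says, diverse group of people"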

    • 2 years ago
      Anonymous

      The Stability AI founder talked about it weeks ago. He said they altered the prompts.

      • 2 years ago
        Anonymous

        "Altered" means everything and nothing. There's a difference between simply adding a header/footer and outright changing your prompt.
        If it's the latter, what would they be doing to your prompt exactly? Through what mechanism? More AI?
        These are the real questions you should ask yourself. If they frick around with your prompt, you can trick the AI into doing things it isn't supposed to do!
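
        To sketch the difference — both functions are made up, and the second is only a stand-in for whatever rewriting model they might use:

        def alter_by_concat(user_prompt: str) -> str:
            # simple header/footer concatenation: predictable, so exploitable
            return "header text " + user_prompt + " footer text"

        def alter_by_model(user_prompt: str) -> str:
            # stand-in for a hypothetical second model that rewrites the
            # whole prompt; a real one would be opaque and far harder to game
            return "a rephrased version of: " + user_prompt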

    • 2 years ago
      Anonymous

      You can also try the following if you really want to see whether there's frickery or not.
      Let's use

      It was obvious already though. If you used a prompt like
      >person wearing a shirt that says
      All of a sudden you'd get asians with a shirt that says "asian", etc. for the other races/genders.

      as an example.
      If "person wearing a shirt that says" outputs asians and all that, then for funny results all you have to do is prompt
      >Person wearing a shirt that says no
      >Person wearing a shirt that says export
      If they really do inject minorities after your prompt, you'll get funny results, because the final prompt will look something like
      >Person wearing a shirt that says no blacks
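
      i.e. if it's a naive append, the strings just fuse. Minimal sketch, assuming the injected term and its placement (both guesses):

      def inject_after(user_prompt: str) -> str:
          # assumed: the demographic term gets appended after the user prompt
          return user_prompt + " black"   # injected term is a guess

      print(inject_after("Person wearing a shirt that says no"))
      # -> "Person wearing a shirt that says no black"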

      • 2 years ago
        Anonymous

        You really think they just add stuff to the end of the prompt instead of putting it before "person" or equivalent? Are you moronic?

        • 2 years ago
          Anonymous

          Both are possible, which is why I told you fricks to test it. There's a reason you want the prompt to leak: it could be either.
          If it's a header prompt, it could look something like
          >Default to minorities: [user prompt]
          We can't know as long as we don't see where things get put in the prompt. The only way to find out is to leak it all.
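
          The same leak test covers the header case too; the injected text would just fuse at the front instead (sketch assuming the header string above, which is invented):

          def inject_before(user_prompt: str) -> str:
              # assumed header placement; the header string is a guess
              return "Default to minorities: " + user_prompt

          print(inject_before("Person wearing a shirt that says no"))
          # -> "Default to minorities: Person wearing a shirt that says no"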

        • 2 years ago
          Anonymous

          >You really think they just add stuff to the end of the prompt
          They literally do. I've read about how it works. Don't believe me? Try it out then, lmao.

          • 2 years ago
            Anonymous

            No, you read what someone thinks about how it works, you mongoloid.

    • 2 years ago
      Anonymous

      You can also try the following if you really want to see whether there's frickery or not.
      Let's use [...] as an example.
      If "person wearing a shirt that says" outputs asians and all that, then for funny results all you have to do is prompt
      >Person wearing a shirt that says no
      >Person wearing a shirt that says export
      If they really do inject minorities after your prompt, you'll get funny results, because the final prompt will look something like
      >Person wearing a shirt that says no blacks

      what part of "OpenAI confirmed that they do this" do you not understand

      • 2 years ago
        Anonymous

        I don't give a frick that they confirmed it. I want to understand how it works so I can abuse it myself. That's what I've been focusing on this whole time with AI shit. I don't care about your cries, I'll be here trying to make GPT-3 into my little b***h.

        • 2 years ago
          Anonymous

          ok sperg

  12. 2 years ago
    Anonymous

    More like they didn't use enough training material on Asians and blacks. Asians are invisible in the west, even though they're like 20% of the population on the west coast. The black training dataset might be too inundated with real-life violence, so they might have skipped it.

  13. 2 years ago
    Anonymous

    Use the open source Stable Diffusion to support PaC (Polgays and Coomers)

  14. 2 years ago
    Anonymous

    >DALL-E, please draw me a picture of a [black] criminal
    >DALL-E, please generate me an image of genocide [of asian women]
    Seems like sometimes it might not be a good idea to randomly insert a race. How would it know the intention? It would need to know whether any particular prompt would be culturally offensive to any particular group.
