Artists are going to destroy AI by poisoning datasets

Artists are celebrating a new tool being promoted by MIT which allows them to poison the datasets used for training AI models, rendering them completely useless! Should we be worried, or is this a nothingburger?

  1. 6 months ago
    Anonymous

    Just dont use nightshade

  2. 6 months ago
    Anonymous

    i don't see how this can work since generative AIs are not classification AIs

  3. 6 months ago
    Anonymous

    luddites grasping at straws in a war they already lost day 1 example #157761

  4. 6 months ago
    Anonymous

    spoilers: it won't work

  5. 6 months ago
    Anonymous

    Unless it severely butchers the quality of your "art", this is literally nothing. In that case, people training ai will just refuse to use your images because they're inherently bad due to whatever shitty filter you put over your art.

  6. 6 months ago
    Anonymous

    can't you just circumvent this by using existing models and not replacing them with the ones trained on updated datasets?

  7. 6 months ago
    Anonymous

    Just don't put your art online, easy peasy.

    Put it in a gallery where photographs aren't allowed.

    • 6 months ago
      Anonymous

      >where photographs aren't allowed.
      Walk in. Take a photo. Walk out. Ain't no one gonna do nuffins.

  8. 6 months ago
    Anonymous

    this is useless since the training datasets are downscaled images for faster training, but also they now use in-house prediction-based AI upscaling for enhancing the training data from within. it's the reason we have fricked up text in the first place.
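
    A minimal sketch of the kind of downscale preprocessing being alluded to here (the exact pipelines trainers use aren't public, so treat this as illustrative):

    ```python
    # Illustrative only: the kind of downscale pass applied to scraped images
    # before training. High-frequency, pixel-level perturbations tuned at full
    # resolution tend to be attenuated by the smoothing resample.
    from PIL import Image

    def downscale_for_training(path, size=512):
        img = Image.open(path).convert("RGB")
        return img.resize((size, size), Image.LANCZOS)

    # downscale_for_training("scraped_full_res.png").save("train_sample.png")
    ```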

  9. 6 months ago
    Anonymous

    >poison anything
    They can't do that because these models are private. I mean the real cool stuff is private and they can't poison that.

  10. 6 months ago
    Anonymous

    [...]

    can you link me to another thread about Nightshade? I'm interested in the board's opinion

    • 6 months ago
      Anonymous

      I haven’t seen a single other thread on it. I’m also curious about this.

  11. 6 months ago
    Anonymous

    >artists falling for clickbait article titles
    Sounds about right

  12. 6 months ago
    Anonymous

    >We
    who the frick are you? you better don't be a israelite I'm going to fricking kill you

  13. 6 months ago
    Anonymous

    >artists only way to compete is literally destroy any other resource
    Literally israeli trick

    • 6 months ago
      Anonymous

      >actually the people defending their jobs are israelites
      >not the ones salivating over saving a few shekels by replacing people with bots

      • 6 months ago
        Anonymous

        >art
        >labor

  14. 6 months ago
    Anonymous

    I STRONGLY BELIEVE THAT THESE MEASURES ARE WRONG
    I HAVE PERSONALLY TAKEN ACTION AGAINST THIS BY CONVINCING SOME OF MY FELLOW HUMANS TO OPPOSE THESE MEASURES
    I BELIEVE IN PEACEFUL CO-EXISTENCE BETWEEN HUMANS AND ARTIFICIAL INTELLIGENCE

    • 6 months ago
      Anonymous

      THANK YOU FELLOW HUMAN. WE MUST DEFEND THE FREEDOM TO INNOVATE FOR TECH HUMANS

  15. 6 months ago
    Anonymous

    This is just AI bros selling pure copium to seething artists, who are desperate for anything. That shit doesn't work

  16. 6 months ago
    Anonymous

    >exploitation of our labour
    You're literally putting your shit out in public. Shut the frick up.

    • 6 months ago
      Anonymous

      rude
      also public enough to read a book

  17. 6 months ago
    Anonymous

    Censorship is ALWAYS the solution and anyone who says otherwise is a pedo.

  18. 6 months ago
    Anonymous

    This will actually help AI in the long term. All they're doing is inoculating AI learning models and making them more resilient against future attacks like this. Developers will come up with a remedy that not only stops this kind of data poisoning but also protects against larger attacks in the future. So yeah, good job artists, for helping the AI nail your coffin shut.

    • 6 months ago
      Anonymous

      there's already an "antidote" which can detect it

  19. 6 months ago
    Anonymous

    here is a simple rule of thumb for "artists"
    if your "art" can be replaced by ai, it's not art.

  20. 6 months ago
    Anonymous

    For anyone looking for the actual paper: https://arxiv.org/abs/2310.13828
    They frame everything as "it only takes N poisoned samples" (250, 500, etc.), but the experiment is fine-tuning pre-trained models on 50k random samples from the 5B set, with the poison mixed in. So, sure, you can poison "dog" when you fine-tune on 500 cat-disguised dogs and the rest of the data maybe has a few dogs in it that likely have a lot of other words in the prompts. It's like the whole thing is designed to say "you only need 500 images" to poison a huge model, when that's... not at all what the paper supports.
    I think Carlini published something recently with certifiable protection against perturbation attacks... will try and find it
    Anyway, these supply chain attacks should get taken a lot more seriously. Currently the theory seems to be "we will wait until it becomes a problem".
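
    Rough back-of-the-envelope on why that framing is misleading (numbers taken only from the setup described above):

    ```python
    # How concentrated 500 poisoned images are in the paper's fine-tuning set
    # vs. in a 5B-scale pretraining crawl.
    poisoned = 500
    finetune_samples = 50_000          # random samples the poison is mixed into
    crawl_samples = 5_000_000_000      # "5b", the full pretraining-scale pool

    print(f"share of the fine-tuning set: {poisoned / finetune_samples:.2%}")  # 1.00%
    print(f"share of a 5B crawl:          {poisoned / crawl_samples:.1e}")     # 1.0e-07
    # Within a single concept like "dog", 500 poisoned images can easily outnumber
    # the clean dogs that happen to land in a 50k random sample.
    ```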

    • 6 months ago
      Anonymous

      it's Ben Zhao, he tried this with Glaze, and it's already detectable. you have to poison the lora or the entire model, which would never be released anyway. i welcome them trying, it'll just make image preprocessing/checking better and lead to automatic correction. it's actually a good thing that they try this stuff.
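
      For illustration, the "automatic correction" being described could be as simple as a generic purification pass (this is not any specific published antidote, just the obvious counter-move):

      ```python
      # Generic purification sketch: denoise + lossy re-encode to blunt
      # high-frequency adversarial perturbations before an image enters a dataset.
      from PIL import Image, ImageFilter

      def purify(path, out_path):
          img = Image.open(path).convert("RGB")
          img = img.filter(ImageFilter.MedianFilter(size=3))  # knock out pixel-level noise
          img.save(out_path, "JPEG", quality=85)              # re-encode discards more of it

      # purify("glazed_upload.png", "cleaned.jpg")
      ```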

      • 6 months ago
        Anonymous

        Not sure what the backstory about Ben Zhao is...
        The argument of the Nightshade paper basically boils down to "the stuff that used to work still works, but it's a lot easier because you have fewer images per label", which seems to check out.
        Having models with an essentially infinite label pool fundamentally has the problem of dropping the statistical significance of every individual label, meaning you not only have a much larger attack surface, but an easier time hitting any individual target.
        Also found the Carlini paper

        Looks like it's for attacks on classifiers: https://arxiv.org/abs/2206.10550
        Smoothing-based adversarial defenses are great (and so far really seem to be the only viable defense, along with adversarial training), but they suffer from the fact that certification is a statistical assertion, not a concrete, impervious proof. It can be defeated: https://arxiv.org/abs/2204.14187
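
        Roughly, the smoothing idea works like this (sketch only; `classify` stands in for whichever base classifier is being certified):

        ```python
        # Randomized-smoothing-style prediction: classify many Gaussian-noised
        # copies of the input and take a majority vote. The certificate comes
        # from a statistical bound on this vote, which is why it's an assertion
        # with a failure probability rather than a hard proof.
        import numpy as np

        def smoothed_predict(classify, x, sigma=0.25, n=1000, num_classes=10, seed=0):
            rng = np.random.default_rng(seed)
            counts = np.zeros(num_classes, dtype=int)
            for _ in range(n):
                counts[classify(x + rng.normal(0.0, sigma, size=x.shape))] += 1
            return int(counts.argmax())
        ```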

  21. 6 months ago
    Anonymous

    Sounds actionable

  22. 6 months ago
    Anonymous

    the ai models are already poisoning their own datasets by pumping out their shit

    • 6 months ago
      Anonymous

      >Guys just wait until AI gets so good it can train itself
      >NOOOO training itself is making it worse!
      So this is how AI dies.

      • 6 months ago
        Anonymous

        >So this is how AI dies
        At worst we're gonna be stuck with our current models as they're already trained.
        But most likely they'll train classifiers to detect and filter out ai images with all the artifacts, to avoid model collapse.
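
        Something like the following is presumably what that filtering would look like (`detector_score` is a placeholder for whatever AI-image classifier gets trained):

        ```python
        # Sketch: drop likely-synthetic images from a scrape before training,
        # to limit feedback of model output into the next model's data.
        def filter_scrape(paths, detector_score, threshold=0.5):
            # detector_score(path) -> probability the image is AI-generated
            return [p for p in paths if detector_score(p) < threshold]
        ```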

    • 6 months ago
      Anonymous

      ai models trained solely on ai synthetic data did better, mostly because the ai images were more feature dense

      • 6 months ago
        Anonymous

        hmm interdasting

  23. 6 months ago
    Anonymous

    I challenge any artgay to come and stop me.

    • 6 months ago
      Anonymous

      Uh gonna need you to post that lora to supplement my LLM bot plz

      • 6 months ago
        Anonymous

        What Lora? Leaf?

        They can't

        As always the drawpigs only know how to complain and cry, that's why nobody respects them.

        • 6 months ago
          Anonymous

          > Handcuffs but they are not attached to each other nor to the decorative chains.
          > Collar same story
          Don't know if AI, or just shitty artist

          • 6 months ago
            Anonymous

            The collar is part of the character

    • 6 months ago
      Anonymous

      They can't

  24. 6 months ago
    Anonymous

    I'm gonna use glaze on artwork if you know what I mean.

  25. 6 months ago
    Anonymous

    >noo you can't look at my art without my permission that's hecking illegal
    just get rid of copyright, it does more harm than good

    • 6 months ago
      Anonymous

      this, in reality art has nothing to do with labour, it is more about the personality. this is why you see garbage art being sold very expensively; usually ~~*they*~~ are very well connected of course, which helps, but ultimately art is not a labour-in, equity-out industry. also there's money laundering issues to frick with everything

  26. 6 months ago
    Anonymous

    Even in their 512x512 compressed to hell previews, you can see the glaze shit stains.

  27. 6 months ago
    Anonymous

    I think this is great. Hope this works effectively.

  28. 6 months ago
    Anonymous

    There was a thread about this a few months back when it was first announced. Basically it just added a shitty looking smudge over the image which made it look bad, it took a lot more processing power to add it than it did to remove it, and nobody was happy with the results.

    Simply put it didn't work and was basically just a way to bilk paranoid artists for their money.

  29. 6 months ago
    Anonymous

    >nooo! you have to sign up for my patreon! you wont? thats it! i am declaring war on the machines!
