Artists are celebrating a new tool being promoted by MIT which allows them to poison the datasets used for training AI models, rendering them completely useless! Should we be worried, or is this a nothingburger?
Just don't use Nightshade
i don't see how this can work since generative AIs are not classification AIs
luddites grasping at straws in a war they already lost day 1 example #157761
spoilers: it won't work
Unless it severely butchers the quality of your "art", this is literally nothing. In that case, people training ai will just refuse to use your images because they're inherently bad due to whatever shitty filter you put over your art.
can't you just circumvent this by using existing models and not replacing them with ones trained on the updated datasets?
Just don't put your art online, easy peasy.
Put it in a gallery where photographs aren't allowed.
>where photographs aren't allowed.
Walk in. Take a photo. Walk out. Ain't no one gonna do nuffins.
this is useless since the training datasets are downscaled images for faster training. also, they now use in-house prediction-based AI upscaling to enhance the training data from within; it's the reason we have fucked up text in the first place.
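fwiw, here's a toy numpy sketch (my own, nothing to do with any lab's actual preprocessing) of the downscaling point: naive 2x2 average pooling kills most of an iid pixel-level perturbation. whether an optimized perturbation like Nightshade's survives better is exactly what's contested.

```python
# Toy sketch (not any real pipeline): how much of a high-frequency
# perturbation survives naive 2x2 average-pool downscaling?
import numpy as np

rng = np.random.default_rng(0)

def downscale_2x(img: np.ndarray) -> np.ndarray:
    """2x2 average pooling as a crude stand-in for dataset downscaling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

clean = rng.random((512, 512))                    # stand-in "artwork"
perturb = 0.05 * rng.standard_normal((512, 512))  # hypothetical pixel-level poison
poisoned = np.clip(clean + perturb, 0.0, 1.0)

residual = downscale_2x(poisoned) - downscale_2x(clean)
survival = np.linalg.norm(residual) / np.linalg.norm(perturb)
print(f"fraction of the perturbation (L2) surviving the downscale: {survival:.1%}")
```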
>poison anything
They can't do that because these models are private. I mean, the really cool stuff is private and they can't poison it.
can you link me to another thread about Nightshade? I'm interested in the board's opinion
I haven’t seen a single other thread on it. I’m also curious about this.
>artists falling for clickbait article titles
Sounds about right
>We
who the fuck are you? you'd better not be an israelite, I'm going to fucking kill you
>artists' only way to compete is literally destroying any other resource
Literally israeli trick
>actually the people defending their jobs are israelites
>not the ones salivating over saving a few shekels by replacing people with bots
>art
>labor
I STRONGLY BELIEVE THAT THESE MEASURES ARE WRONG
I HAVE PERSONALLY TAKEN ACTION AGAINST THIS BY CONVINCING SOME OF MY FELLOW HUMANS TO OPPOSE THESE MEASURES
I BELIEVE IN PEACEFUL CO-EXISTENCE BETWEEN HUMANS AND ARTIFICIAL INTELLIGENCE
THANK YOU FELLOW HUMAN. WE MUST DEFEND THE FREEDOM TO INNOVATE FOR TECH HUMANS
This is just AI bros selling pure copium to seething artists, who are desperate for anything. That shit doesn't work
>exploitation of our labour
You're literally putting your shit out in public. Shut the fuck up.
rude
also, a book is public enough to read; that doesn't make copying it fair game
Censorship is ALWAYS the solution and anyone who says otherwise is a pedo.
This will actually help AI in the long term. All they're doing is inoculating AI learning models and making them more resilient against future attacks like this. Developers will come up with a remedy that not only stops this kind of data poisoning but also protects against larger attacks in the future. So yeah, good job artists, for helping the AI nail your coffin shut
there's already an "antidote" which can detect it
here is a simple rule of thumb for "artists"
if your "art" can be replaced by ai, it's not art.
For anyone looking for the actual paper: https://arxiv.org/abs/2310.13828
They frame everything as "it only takes N poisoned samples" (250, 500, etc.), but the experiment is fine-tuning pre-trained models on 50k random samples from the 5B set with the poisoned ones mixed in. So, sure, you can poison "dog" when you fine-tune on 500 cat-disguised dogs and the rest of the data has maybe a few dogs in it, likely alongside a lot of other words in the prompts. The whole thing is designed to let them say "you only need 500 images" to poison a huge model, when that's... not at all what the paper supports
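back-of-envelope on that, with my own made-up number for the concept frequency (the 50k/500 figures are from the setup described above):

```python
# Rough arithmetic, not the paper's code: what "500 poisoned images"
# means as a concentration at each stage of that experiment.
finetune_set = 50_000          # random fine-tune samples, per the setup above
poisoned = 500                 # poisoned samples mixed in
full_dataset = 5_000_000_000   # LAION-5B scale, for contrast

print(f"poison share of the fine-tune set: {poisoned / finetune_set:.2%}")
print(f"poison share of the full dataset:  {poisoned / full_dataset:.6%}")

# What matters is the concept-level ratio. If only ~0.1% of prompts mention
# "dog" (a made-up rate), the poison outnumbers the clean dogs 10:1.
dog_rate = 0.001               # hypothetical frequency of the target concept
clean_dogs = finetune_set * dog_rate
print(f"clean 'dog' samples (assumed rate): ~{clean_dogs:.0f}")
print(f"poisoned-to-clean ratio for the concept: {poisoned / clean_dogs:.0f}:1")
```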
I think Carlini published something recently with certifiable protection against perturbation attacks... will try and find it
Anyway, these supply chain attacks should get taken a lot more seriously. Currently the strategy seems to be "we'll wait until it becomes a problem"
it's Ben Zhao, he tried this with Glaze, and it's already detectable. you'd have to poison the LoRA or the entire model, which would never be released anyway. i welcome them trying, it'll just make image preprocessing/checking better and lead to automatic correction. it's actually a good thing that they try this stuff.
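to be clear nobody posted the actual detector, but a naive version of that kind of preprocessing check is easy to sketch: flag images whose high-frequency spectral energy is way above what clean images show (made-up threshold, illustration only):

```python
# Naive perturbation screen (illustrative, not Glaze/Nightshade's published
# detector): flag images with anomalous energy in the outer FFT band.
import numpy as np

def highfreq_ratio(img: np.ndarray, band: float = 0.25) -> float:
    """Fraction of spectral energy in the outermost `band` of frequencies."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spec[r > 1.0 - band].sum() / spec.sum())

def looks_poisoned(img: np.ndarray, threshold: float = 0.02) -> bool:
    # Threshold is made up; a real pipeline would calibrate it on clean data.
    return highfreq_ratio(img) > threshold

rng = np.random.default_rng(0)
smooth = rng.random((256, 256)).cumsum(0).cumsum(1)  # smooth "clean" stand-in
smooth /= smooth.max()
noisy = np.clip(smooth + 0.15 * rng.standard_normal(smooth.shape), 0, 1)
print(looks_poisoned(smooth), looks_poisoned(noisy))  # expect: False True
```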
Not sure what the backstory about Ben Zhao is...
The argument of the Nightshade paper basically boils down to "the stuff that used to work still works, but it's a lot easier because you have fewer images per label", which seems to check out.
Having models with an essentially infinite label pool fundamentally drops the statistical significance of every individual label, meaning you not only have a much larger attack surface, but also an easier time hitting any individual target
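quick toy numbers on that (uniform split is a naive assumption; real prompt distributions are long-tailed, which makes rare concepts even softer targets):

```python
# Toy illustration: a fixed poison budget covers a growing share of any one
# concept as the label pool expands and clean samples per concept shrink.
dataset = 5_000_000_000    # LAION-5B scale
poison_budget = 500        # fixed attacker budget per concept

for concepts in (1_000, 100_000, 10_000_000):
    per_concept = dataset / concepts  # naive uniform split
    share = poison_budget / (per_concept + poison_budget)
    print(f"{concepts:>10,} concepts -> ~{per_concept:>12,.0f} clean/concept, "
          f"poison share {share:.4%}")
```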
Also found the Carlini paper
Looks like it's for attacks on classifiers: https://arxiv.org/abs/2206.10550
Smoothing-based adversarial defenses are great (and, along with adversarial training, seemingly the only viable defense so far), but they suffer from the fact that certification is a statistical assertion, not a concrete, impervious proof. It can be defeated: https://arxiv.org/abs/2204.14187
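for anyone who hasn't seen it, a minimal sketch of what smoothing-based certification does (toy linear "classifier" standing in for a real net; Cohen et al. 2019 use a proper confidence lower bound where this uses the raw vote estimate):

```python
# Minimal randomized-smoothing sketch (Cohen et al. 2019 style), toy only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def base_classifier(x: np.ndarray) -> int:
    # Toy stand-in for a network: sign of a fixed linear function.
    return int(x.sum() > 0)

def smoothed_predict(x: np.ndarray, sigma: float = 0.5, n: int = 1000):
    """Majority vote of the base classifier under Gaussian input noise."""
    votes = np.bincount(
        [base_classifier(x + sigma * rng.standard_normal(x.shape)) for _ in range(n)],
        minlength=2,
    )
    top = int(votes.argmax())
    p_hat = votes[top] / n            # naive estimate; the paper lower-bounds it
    radius = sigma * norm.ppf(p_hat)  # certified L2 radius (binary case)
    return top, max(radius, 0.0)

cls, r = smoothed_predict(np.full(16, 0.1))
print(f"class {cls}, certified L2 radius ~{r:.3f} (statistical, not absolute)")
```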
Sounds actionable
the ai models are already poisoning their own datasets by pumping out their shit
>Guys just wait until AI gets so good it can train itself
>NOOOO training itself is making it worse!
So this is how AI dies.
>So this is how AI dies
At worst we're gonna be stuck with our current models, as they're already trained.
But most likely they'll train classifiers to detect and filter out ai images with all the artifacts to avoid model collapse
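in loop form that's just this (the detector is a placeholder, not a real library; any off-the-shelf AI-image classifier would slot in):

```python
# Hypothetical dataset-cleaning pass to dodge model collapse: score every
# candidate image with an AI-image detector and keep only the low scorers.
from pathlib import Path

def detect_ai_score(path: Path) -> float:
    """Placeholder: probability the image is AI-generated.
    Swap in a real classifier trained on known generator artifacts."""
    return 0.0  # stub value so the sketch runs

def filter_dataset(src: Path, threshold: float = 0.5) -> list[Path]:
    """Keep images scoring below `threshold`; drop suspected AI output."""
    return [p for p in sorted(src.glob("*.png")) if detect_ai_score(p) < threshold]
```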
ai models trained solely on synthetic ai data did better, mostly because the ai images were more feature-dense
hmm interdasting
I challenge any artfag to come and stop me.
Uh gonna need you to post that lora to supplement my LLM bot plz
What Lora? Leaf?
As always the drawpigs only know how to complain and cry, that's why nobody respects them.
> Handcuffs but they are not attached to each other nor to the decorative chains.
> Collar same story
Don't know if AI, or just shitty artist
The collar is part of the character
They can't
I'm gonna use glaze on artwork if you know what I mean.
>noo you can't look at my art without my permission that's hecking illegal
just get rid of copyright, it does more harm than good
this. in reality art has nothing to do with labour, it's more about the personality; that's why you see garbage art being sold very expensively. usually ~~*they*~~ are very well connected, of course, which helps, but ultimately art is not a labour-in, equity-out industry. also there are money laundering issues that fuck with everything
Even in their 512x512 compressed to hell previews, you can see the glaze shit stains.
I think this is great. Hope this works effectively.
There was a thread about this a few months back when it was first announced. Basically it just added a shitty-looking smudge over the image which made it look bad, it took a lot more processing power to add it than it did to remove it, and nobody was happy with the results.
Simply put it didn't work and was basically just a way to bilk paranoid artists for their money.
>nooo! you have to sign up for my patreon! you wont? thats it! i am declaring war on the machines!