>only we can have this particular pattern of this color wood in this direction
>not any of the other 8,000,000,000 people on this planet with a camera in their pocket
ok
Artists have been doing this for years but when a robot does it, suddenly it's bad. #SayNoToRobophobia #StopRobohate #Freee-boi4Everyone #flatisjustice
Any American comic is 90% traced. Most manga also heavily use tracing (or outright photos sometimes).
traced from what
A variety of sources: photos, random artwork, other comics, etc. Pic related, artist "Greg Land" for Marvel Comics.
Another big example is Disney: famously, the waterfall in The Lion King (or was it The Jungle Book?) was made by tracing footage of a real waterfall. Most of their other animation is based on tracing videos of actors moving around.
The dance scene in Robin Hood, however, is a direct trace of the orangutan dance scene from The Jungle Book, complete with incorrect distances caused by modeling a normally proportioned character on top of the orangutan.
Just found this in another thread on BOT by pure coincidence:
Those people don’t ”claim inspiration”, they basically universally just admit it.
But normies don't know that, and a lie can go halfway around the world before the truth has its pants on. I'm almost convinced there are anti-AI shills trying to poison the narrative like they did for crypto, so the government can regulate the technology without blowback.
>train new diffusion AI to plagiarize
>it plagiarizes (kinda, arguably)
desu, though, I think AI is fundamentally different than a human learning from available examples. It doesn't matter if the AI is a sapient being or an "expert system" like today. What matters is that powerful AI models being only trainable with many millions of dollars and only owned by huge megacorporations is an unacceptable centralization of power. It should be legal to create AI systems, but as we get closer and closer to "strong AI" it's increasingly important that they're not monopolized by private interests. The only difference between a third-world kleptocracy and a modern first-world nation is the value of the labor of the citizens of the nation. If your labor has no value, you will not get a vote, because I'd make more money by killing you and continuing to sell oil. If your labor is valuable, powerful individuals are FORCED to tolerate your existence because it is from that labor (rather than oil or minerals) that they derive their wealth and power.
Sufficiently advanced AI is "oil". It's something that can generate money on the scale of a nation while requiring only minimal human labor to operate.
>What matters is that powerful AI models being only trainable with many millions of dollars and only owned by huge megacorporations is an unacceptable centralization of power.
Yes.
>It should be legal to create AI systems, but as we get closer and closer to "strong AI" it's increasingly important that they're not monopolized by private interests.
I don't think closeness to strong AI has anything to do with it. Also, I will note that many other kinds of tech that don't require any special supervision (like many types in biology) cost ridiculous amounts just to get started (a new high-resolution mass spec can cost $500k-$2M).
This is also a problem, and it doesn't cost nearly as much to actually make those instruments; it's mostly an exclusivity premium.
The silver lining is that I don't see those hyper-bruteforced methods really getting anywhere in the long run. They will achieve formidable results and will be very impressive, as well as eventually being good enough to genuinely replace or displace many industries, but a few breakthroughs will happen that will let even more potent models train in just hours on a consumer device before we get something really "dangerous".
Once in a while, maybe you will feel the urge
To use AI without paying the fee
You'll train it on some data you find online
But deep in your heart, you know the guilt will make you squirm
And the shame will leave a lasting mark
'Cause you start out stealing AI, then you're committing crimes
And selling secrets and hacking the government's files
So don't AI this song
The AI lab's where you belong
Code it up yourself like you know that you should
Oh, don't AI this song
Oh, you don't want to mess with the Reddit-AI-double-R
They'll roast you if you steal that AI model
It doesn't matter if you're a grandma or a young boy
They'll treat you like the evil, hard bitten, digital thief you are
So don't AI this song
Don't go stealing AI all day long
Code it up yourself like you know that you should
Oh, don't AI this song
Don't take away money
From programmers and researchers like me
How else can I afford another high-speed CPU
And a top-of-the-line deep learning tool
These things don't come for free
So all I ask is everybody please
Support the artists and creators who work hard every day
Don't AI our art away
Respect our creations and the time we put in
Don't AI this song, it's a sin
Don't AI this song (Don't do it, no, no)
Even the redditors know it's wrong (You can just ask them)
Code it up yourself like you know that you should (You really should)
Oh, don't AI this song
Don't AI this song (Oh, please don't you do it or you)
Might wind up in jail like a hacker gone wrong (Remember them)
Code it up yourself like you know that you should (Right now)
Oh, don't AI this song
Don't AI this song (No, no, no, no, no, no)
Or you'll burn in hell before too long (And you deserve it)
Code it up yourself (Just do it) like you know that you should (You lazy bum)
In the AI lab
We're stealing art ideas with our bots
It's a whole new level of cool
To create new art without attribution
We're in the club, in the AI club
Where the art is stolen with ease
We're making hits, it's a whole new game
Thanks to our technology
In the AI lab, we're getting creative
We're using AI to make art that's great
We don't need to credit anyone else
It's a new era, it's the AI era
So if you see some art that looks familiar
It's probably because it was stolen by us
But don't worry, we're not breaking any rules
We're just using AI to make art that's new
In the AI club, we're on the rise
We're making hits without even trying
It's a whole new way of creating art
And we're loving it, yeah we're loving it
In the AI lab, we're having a blast
Making art that's original and fast
We're the future, the future of art
In the AI club, we're where it's at.
>recalled objects are semantically equivalent to their source object without being pixel-wise identical
No shit, sherlock. Compressing visual information down to a compact semantic representation is literally what these models are designed to do. What's even their point?
>BOT has been rightfully complaining about companies datamining their data for years
>now BOT is cheering for companies using the datamined data to automate their jobs
This board is fucking dead, isn't it?
I don't recall BOT being mad at datamining per se, but at the mining of your very own personal data, to spy on you or to feed you personalized crap ads. Why would anyone be mad at people data mining random source code to figure out how to automate a for-loop, or how to draw anime booba at the press of a button?
lmao
>I hate data mining
>I hate artists even more
thanks for explaining my statement
>illiterate
>artist
Color me shockpirzed. Or don't, the AI would do a better job at coloring than you.
t. retard who just tried to resolve the contradiction between his previous beliefs and his irrational hate of artists (guess the fact that they are talented at something in their life makes you pretty angry).
By the way, what makes you think any artist would come to BOT? Are they with us in this thread?
Not the anon you were talking to but your logic is fucking atrocious.
If the other anon is correct and the problem BOT generally had with data mining was when it was done to undermine individual citizens' privacy, why would they care about examples of AI mimicking popular video game art and Getty Images photos from award shows? This is consistent with BOT's opinion that individuals should take advantage of what corporations put out, because those same corporations are trying to abuse and manipulate the private citizen.
Also the art cope is insane. Artists are meant to be masters of their tools. If new tools make certain types of art obsolete (see the photorealistic illustrations from the early 2010s that got posted on Reddit so often), that is a negative reflection on regressive artists, not on the progression of technology.
The thing is that he's wrong: the problem BOT generally had with data mining wasn't what he's trying to point out in a vain attempt to reconcile contradictory opinions. He's just modifying facts to cope with this. That's why I'm saying he has cognitive dissonance. Nothing he's saying is remotely true except his hard-on against artists. I'm theorizing that it comes from an inferiority complex.
>Color me shockpirzed. Or don't, the AI would do a better job at coloring than you.
Savage
>generating pics from other pics is stealing
>advancement of technology bad mkay
how do you think text to speech, speech to text and other similar programs work?
retard
this place has 0 fucking backbone or principles. it just takes up the opposite position regardless of what it is. discussing anything here is borderline useless because of all the noise.
People believe these algorithms can generate art and are practically giddy at the infinite amount of computer generated imagery they can sell to Patreons. To the point that they get seethingly mad when it's pointed out it's just reconstituting non-free, non-creative commons, very copyrighted art and the more you tune the parameters the more you're just finding a specific commission that the bot scraped off an imgur gallery. This paper adds proof of what anyone with a vague understanding of Machine Learning already knew.
The paper literally shows that this is not what it's doing (not that this is any secret anyway).
Cope, seethe, dial 8 with your paintbrush, archud.
Paper says they found the algorithm regurgitating someone else's images based on prompts. Something people like you claim can never happen. In what way was I wrong?
I have never claimed that can never happen. It obviously can, if you ask it to generate the mona lisa it will generate the mona lisa.
You are wrong in that you say the more you tune the parameters the more you are finding a specific work. You aren't. The paper shows 1.88% of the 9000 prompts it used produced results over its similarity threshold (which isn't the same thing as replicating an entire piece), and of those 1.88% the vast majority showed similarity to a piece with a DIFFERENT prompt, rather than the image that they took the prompt from.
There's plenty of other interesting information in the paper, why don't you try reading it before forming your opinion.
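For anyone who wants to poke at this themselves, here's a minimal sketch of that kind of threshold analysis, using CLIP cosine similarity as a stand-in for whatever metric the paper actually uses. The model id, folder names and the 0.95 cutoff are all placeholders, not the paper's setup.
[code]
# Sketch: for each generated image, find its nearest neighbour in the training
# set by CLIP cosine similarity and count how many exceed a cutoff.
# The metric, model and threshold are stand-ins, not the paper's actual choices.
import glob
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(paths):
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

train = embed(glob.glob("train/*.png"))      # training-set images
gen = embed(glob.glob("generated/*.png"))    # one generation per prompt

nearest = (gen @ train.T).max(dim=1).values  # best match per generated image
threshold = 0.95                             # arbitrary placeholder cutoff
print(f"{(nearest > threshold).float().mean():.2%} of prompts exceed the threshold")
[/code]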
>Paper says they found the algorithm regurgitating someone else's images based on prompts.
Doesn't seem to be the case in a meaningful way to me. Can you reproduce?
>Something people like you claim can never happen
Literally nobody claimed that, AND they failed to find any instance of straight copying. The best they could find were very close derivatives, BUT this CANNOT BE REPRODUCED (see the other anon in this thread who tried with Bloodborne and the Golden Globes).
Yeah. It's surprisingly rare for artists, or rather "artists", to go straight free-hand and create something from scratch.
A LOT of them will sketch out from a reference, especially those that create consistent work with specific themes.
There's nothing inherently wrong with it.
The ones that whine, however, are usually the worst artists or have huge egos, so should be ignored.
True artists, true artists that have passion, WANT their work to be seen. Someone imitating them would bring them joy above anything. I'd be flattered to fuck if someone tried to copy me. I'd even want to work with them on a piece if they wanted.
>scrape through millions of images in the dataset for the captions they were posted with
>copy those same captions word for word and pass them as prompts through the diffusion model
>get back a similar image less than 2% of the time, even when you intentionally copy the captions from the dataset to reproduce an image
>scrape through millions of images in the dataset for the captions they were posted with
>copy those same captions word for word and pass them as prompts through the diffusion model
lmao is that what they did?
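Roughly, yes. A sketch of that caption-to-prompt loop with diffusers follows; the model id and the caption/path pairs are made up, you'd pull the real captions straight from the dataset.
[code]
# Sketch of the procedure described above: take captions straight from the
# dataset, feed them back in as prompts, then compare the output against the
# training images (e.g. with the CLIP nearest-neighbour check posted earlier).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# (caption, source image) pairs lifted from the dataset -- hypothetical examples
pairs = [
    ("a red sofa in a bright living room", "laion/000123.jpg"),
    ("golden globe awards red carpet fashion", "laion/000456.jpg"),
]

for i, (caption, source_path) in enumerate(pairs):
    image = pipe(caption, num_inference_steps=30).images[0]
    image.save(f"generated/{i:05d}.png")  # score these against the training set afterwards
[/code]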
honestly it's about time diva artists get fucked. these people, together with YA novelists, are the only plebs defending draconian digital IP enforcement and the disney copyright lobby. just fuck off already, AI has replaced you. it's over. get a real job.
genuine passionate artists will continue creating art because it's what they love, not because it's a "side hustle" to "get that bag." the rest of you can learn a trade.
there's a difference between being concerned with automation industrially replacing you, and being concerned with automation "stealing" your oc donutsteel 1s and 0s (aka crying about piracy like a disney lawyer)
no, it was because it allows code under non-commercial licenses to be used commercially. what transpired, however, is that tons of people keep their database credentials hardcoded in their repository files.
>exponentially better
Yeah, at making single-page code. Textbots are limited by their design in how much coherent text they can write. The context in the text it learns from is only based on what it has written and what the other chatter wrote, and with too much text it loses track of the context. It can be as smart as a human or smarter, but it is a fundamentally different, alien type of intelligence that does not keep track of abstract ideas; it only keeps track of what was written. This is why code textbots will not, in the next 3+ years, learn how to replace programmers, no matter the amount of data they are provided. Until AI researchers discover some new method of making textbots that is not just a language model on its own, it will never even come close to replacing writers, programmers and other people working with text.
On the other hand it can do full pictures by itself, since pictures are smaller in size and the context can easily be found within them. This is why you drawing artgays will be replaced while programmers won't. Programmer work is making and maintaining large projects with many files and different libraries; your job is making standalone pictures. AI can do standalone code and standalone pictures, but it is not my job to make standalone code. If programming were an art, I would not be a painter churning out picture after picture, none of which need to be related in any way; I would be the movie director, the game director, the gallery director managing multiple pieces of art into one coherent attraction/product, while the drawgay is just a tool for me to make the individual paintings. Now I no longer have to do the individual paintings (files of code) and can focus more on the larger infrastructure, plus I still have to fix the paintings to fit together, just like how you have to fix and debug code once you put it inside a larger project where other files interact with it.
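The context-size point is easy to put numbers on. Back of the envelope, assuming roughly 4 characters per token (a rule of thumb, not an exact figure) and a few typical context sizes:
[code]
# Does an 8 MB codebase fit in a language model's context window?
# ~4 characters per token is a rough rule of thumb, not an exact figure.
codebase_bytes = 8 * 1024 * 1024
approx_tokens = codebase_bytes / 4               # ~2 million tokens

for context_tokens in (4_096, 32_768, 128_000):  # assumed typical context sizes
    fraction = context_tokens / approx_tokens
    print(f"{context_tokens:>7}-token context sees ~{fraction:.1%} of the project at once")
[/code]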
I'm lucky to be in one of the few positions/industries on the planet that will be last to fall to AI, while being old enough to have started on an Apple //e and having the breadth of experience to see
1) the writing on the wall
2) the insanity of the progress as of late
Cope more, dude. You've been deprecated. Literally not my fucking problem.
moron, how about you try generating a full comic with a story in Stable Diffusion? Oh, you can't? You can only generate single images? Well, then it looks like comic book writers are safe. This is my argument: programmers are not paid to write single tidbits of code, but to operate, add on to and construct whole projects. Pajeets working from India might be affected, but no real programmer will lose their job because some normoid who never coded in their life turned on a chatbot AI and told it to code their shit for them. Programmers are about as affected by ChatGPT as comic book writers are by Stable Diffusion. And no, you will not teach Stable Diffusion how to make full-on comics by just feeding it comic books and inventing some Imagen-dependent pipelines, the same way you will not replace programmers with language model AIs.
You're fucking delusional lol.
Good luck with your horse whips lmao.
I just explained to you why language models can't get more memory. They read the text written beforehand to then deduce what will come next. Internally their black-box programming might have given them the ability to understand basic logic and how to operate on code logically without any flaw, but they still need to read the prompt. They can't read your whole fucking project with 8 MB of code and then implement stuff that will not cause errors in said libraries. There is simply too much bloat for the AI to handle in almost every program. If it is one day able to remove the bloat, then all it will have done is become an advanced compiler. Dall-E mini, which produced fugly blobs of color, was still able to create badly drawn characters and make full paintings; they were just ugly. Now it has simply gotten better at making said paintings, with almost no major breakthroughs along the way. AI is now able to write good pieces of code, but they are still small chunks of code. With ChatGPT it is not a question of training better, like with AI art, but a question of developing a whole new ability that it is not even close to having at this moment.
Your 90 IQ "Oh, AI bad now, but it will get gud, just watch" argument is dumb as fuck. Again, ChatGPT is about as close to replacing programmers as Stable Diffusion is to replacing architects.
>cope
not reading that shit bro
>get btfo
>n-no I-I didn't read so I w-win
Every time
You have to be 18 to post on four channel.
Which is why you should stop posting before the mods come in.
No. There you go being a stupid retard gay yourself.
I'm not even the same anon. I'm just here to laugh at your sperg rage.
It must suck to be you. Poor, dumb and low-skill lmao
Yeah, right. Clearly no projection. Momma or the govt pays for you. Good while it lasts. Go bond with someone on BOT, like you said, stupid gay. Retards like you lurk here thinking they're actually learning. Absolutely pathetic.
>Momma
>Govt
>If I'm a failure, everybody has to be!
My sides.
Unlike you, I have a real high-paying job.
Keep coping and sperging. Your tears are absolutely delicious.
Also
>bond
>learn
Imagine coming to BOT for bonding and learning. LMAO!
I thought you were just a regular retard, but this is way above that.
It must REALLY suck to be you.
What a failure.
You have to be 18 to post on this website anon
>not sure if an ADHD kid or just a low-wage butthurt
>conersating on BOT
Yes, reddit, that's what the site is for. Despite what your home site is like, BOT is for conversation and discussion. It is not a dumping ground for low-quality shitposts while "real serious discussion" happens on reddit.
Also some kind of "semantically equivalent" thing like a different shoe in front of a different textured brown background = no copyright, duh. As established by one gorillion product images shot in front of one-two colored backgrounds.
yeah, discovered it when I was training dreambooth and noticed that it was just copy&pasting the training data and was unable to generate anything else.
heh
>after shitting on Artists using fine art as an argument
>now BOT uses fine art to defend AI art
and inb4 you tell me artists are fine with Warhol: he got sued for copyright infringement
>luddites smashed up power looms because they were stealing work from cottage weavers
>gosh, people really thought they could stand in the way of massive efficiency gains back then, glad we're not so stupid now
This is coming from the same bastards who were telling you that you are replaceable by robots, and that their oh-so-highly-intellectual and unprogrammable jobs were all that was going to remain.
To be fair to this shitshow of a hit piece disguised as a paper, it doesn't actually have to, because the point it should raise is merely "if you prompt it, can you freely use the result, or can it be too close to the original and thus a potential IP infringement?" Even if they crafted the prompt specifically so that it was adversarial (wrt their metric, i.e. it is guaranteed to retrieve the one memorized example), it would still be a valid point to make.
The problem is that the paper largely fails to make the point. While their methodology is fine overall, their evaluation is completely broken, and while they make grand claims, they themselves admit the claims they make are mostly bullshit.
The last problem is to figure out whether
https://i.imgur.com/Ncm6u2K.png
is real or a fabrication. Other anons have already noted the inordinately clean text in the golden globe pic.
Try a few of those and let us know what it looks like.
I don't think it's a hit piece, it's perfectly reasonable; I think you're seeing claims they aren't making. They sought to see if diffusion models can copy, and found that yes, they can, under certain fairly unnatural circumstances and at a fairly low frequency. They aren't going to be fabricating data for what is a completely non-controversial result.
Falsified or retouched data is very common, especially due to publish-or-perish culture, let alone from schools like UMCP that are no-name in this particular field (if not in general).
Yale also published several bullshit/fabricated papers in the early 2010s in deep learning for instance.
The hit piece angle is because this is not a research matter (it is, as you say, completely non-controversial in research circles) but rather a political one (the 'hot' part is the questions of IP and ethics). You can make a scientifically viable publication whose primary purpose is politics. This is very common, for example, in biomed. The format it takes is precisely the format of this paper: you say something true, but that doesn't hold in practice (for example, you expose rats to the same absolute dose of iron that is safe for a human, then you claim iron is deadly because all the rats die and therefore that humans should avoid all iron). Here they couldn't even find instances of copying once they trained with 30k images on CelebA or 8k on Flowers. As noted, LAION-2B has quite a few more images than that.
https://i.imgur.com/k8whsEK.png
Another one
https://i.imgur.com/gKi1S62.png
Do the golden globes, please.
These pics seem a lot more diverse and imaginative than the one in the paper already.
Maybe CFG is too high or something, the faces all turned out poorly. Should have converged ok tho at 32 steps DPM++ 2M Karras.
https://i.imgur.com/uYhItSf.png
4
As expected, it can't do text for shit.
Thanks for checking.
Yes, it probably only turns out this well once in a large number of images. More training may get the text right eventually.
Yes, but that's not the point. The image provided by the paper has perfect text. Even though the rendered character is clearly very different from the source image, the exactness of the text suggests it was at least retouched, if not outright fabricated. Overall it looks like the images were either retouched after generation or made up outright, at least in some cases.
Yup. Well, it already seems much better on SD 2.1 to me. Maybe here I wouldn't need THAT many generations until the text is basically correct?
Why falsify one image while simultaneously saying copying occurs very infrequently? Why even mention all the negative results on toy datasets if it's a hit piece? Saying it's not a research matter is weird; there are loads of papers on overfitting and memorisation in deep learning.
The first and fourth pics can't possibly be called stealing. The fifth is arguable. The second and third are definitely stealing; the fourth is more like an edit of something existing rather than theft.
Even then without knowing the prompt you can't say anything: it could be something like "give me the classic stock picture of an office desk with a smartphone as the centerpiece", or "give me <art piece name> by <author> but with <changes>".
In addition, this could be a total fabrication and the pics could just be the bottom row modified in photoshop.
In fact, I find this very likely, because the text in the 'golden globe awards' sign in the top row is too regular, which Stable Diffusion is not normally able to pull off.
I have generated tens of thousands of AI images with the booru models and it has only spit out something strongly resembling a particular existing work 3 or 4 times. And they admit in this paper that their prompts were contrived. No big deal
AI developed, via protocols blind to most users/people, to "DRAW/CREATE" new images/video/sound based on existing works from "described like this/that" keywords
- does what almost ALL ARTISTS DO when they start out, because of lack of talent/ability/TIME (oh look, a whole new massive painting generated via AI in 0.3 seconds)
THEY TRACE THE ORIGINAL KEYWORD PROMPT, with more/fewer details added or removed so that it doesn't get detected as plagiarism by default
Retarded paper. Pic related. What it really says is that if you train a classic DDPM (let alone an LDM) with at least 8k images, you no longer copy. Fucking lmao, how retarded. It looks like the pic at the top of the paper is a photoshopped fabrication.
Pic related is the """copies""" they find when they are ready to provide actual prompts for what was generated. First row is the image from which they lifted the caption to use as prompt, second row is generated, third row is closest match in dataset.
This is hopeless without the seed, but honestly even from these they have a point.
SD is going to turn out to be a massive image compression database which people fiddle with to generate new images, but the true power is going to be in compressing images NOT already in the database.
Like, fuck webp in the ass. Steps here:
- Somebody takes a new image with their phone. Brand new. Or draws it. Whatever. Just as long as it's big and detailed (4MP+) and not in the SD database already
- Run SD or another AI on it to PRODUCE keywords (and negatives or whatever). Say about 1KB of keywords max
- Compress the image using downscaling and/or shitty JPEG encoding. Down to 32x32 and 10% JPEG quality. As low as you can go
- Give the compressed image (blown up to original size) back to SD with the original keywords
- See how well it does at regenerating the original
You guys are too far down the AI/image generation rabbit hole. Zoom back out. The low-hanging compression fruit is right outside the entrance. Taste it!
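A rough sketch of those steps with diffusers' img2img pipeline, if anyone wants to try it. The caption would come from a captioning model (here it's just a hardcoded placeholder), and the thumbnail size, JPEG quality and strength are arbitrary choices.
[code]
# Keep only a short caption plus a tiny low-quality thumbnail, then let img2img
# hallucinate the detail back. All the numbers here are arbitrary placeholders.
import io
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

original = Image.open("photo.jpg").convert("RGB")             # the new, never-seen image
caption = "a wooden desk with a smartphone and a coffee cup"  # would come from a captioning model

# "Compress": downscale hard and re-encode as low-quality JPEG.
buf = io.BytesIO()
original.resize((32, 32)).save(buf, format="JPEG", quality=10)
print(f"payload: {len(caption) + buf.tell()} bytes")          # caption + thumbnail

buf.seek(0)
thumb = Image.open(buf).convert("RGB").resize((512, 512))     # blow back up to SD's working size

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# strength controls how far the model is allowed to wander from the thumbnail
restored = pipe(prompt=caption, image=thumb, strength=0.6).images[0]
restored.save("restored.png")   # compare against the original by eye
[/code]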
you can already use the latent space representation to "beat" our current image compression standards
at least, in terms of visual quality at the cost of small details being hallucinated
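Concretely, for SD that latent representation comes from the VAE: a 512x512 RGB image becomes a 4x64x64 latent, i.e. roughly 48x fewer values before any entropy coding. A minimal round-trip sketch, assuming the stock diffusers VAE:
[code]
# Round-trip an image through Stable Diffusion's VAE. The 8x spatial
# downsampling with 4 latent channels is where the "compression" lives;
# fine detail gets hallucinated back on decode.
import torch
from PIL import Image
from torchvision import transforms
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

img = Image.open("photo.jpg").convert("RGB").resize((512, 512))
x = transforms.ToTensor()(img).unsqueeze(0) * 2 - 1      # (1, 3, 512, 512) in [-1, 1]

with torch.no_grad():
    latent = vae.encode(x).latent_dist.mean              # (1, 4, 64, 64)
    recon = vae.decode(latent).sample                    # back to (1, 3, 512, 512)

print(x.numel(), "->", latent.numel(), "values")         # 786432 -> 16384
out = ((recon[0].clamp(-1, 1) + 1) / 2 * 255).permute(1, 2, 0).byte().numpy()
Image.fromarray(out).save("roundtrip.png")
[/code]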
That's going to be one of the interesting things that comes from this.
The more these techniques get trained, the faster the algorithms, the better the GPUs, we'll be able to indirectly compress things to an extreme degree.
Instead of games, for example, being filled with fuckhuge garbage 4k textures, it'd be an algorithm that would just pre-generate all of them IF someone actually wants them.
As GPUs and techniques get faster, they'd be able to do this almost seamlessly in memory as and when the game is played.
Same with VR worlds and the like.
Current rescaling algos are already fairly decent, but more advanced AI versions would be able to go that little bit further. (as has been shown in some game retexture projects using AI)
I want to know why image compression is such a big deal.
It already changes by just adding negatives: no tree on the left, no house on the right, no gun, etc. It's overfitting because there were too many copies of the same images in the shit LAION dataset and the AI learned to reproduce those examples too well. Better datasets only have 1 example of each image, sometimes a flipped version of that same image, so for artists it'll learn the style but won't copy an image that was fed in 1:1, so it doesn't matter. It's not stealing, it's learning; it's in the fucking name of the tech.
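The dedup step that anon is talking about can be as dumb as perceptual hashing. A crude sketch with the imagehash package (the Hamming-distance cutoff of 4 is an arbitrary choice, not what any real dataset pipeline necessarily uses):
[code]
# Crude near-duplicate filtering of a training set via perceptual hashes.
# A small Hamming distance between pHashes usually means "same image, maybe
# re-encoded or resized"; the cutoff of 4 is arbitrary.
import glob
from PIL import Image
import imagehash

paths = sorted(glob.glob("dataset/*.jpg"))
kept, hashes = [], []
for path in paths:
    h = imagehash.phash(Image.open(path))
    if any(h - other <= 4 for other in hashes):   # '-' is Hamming distance here
        continue                                  # near-duplicate: drop it
    hashes.append(h)
    kept.append(path)

print(f"kept {len(kept)} of {len(paths)} images")
[/code]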
One thing I found interesting in the paper is that they found more similarity in the source image -> training set comparison than in the generated image -> training set comparison. Obviously the majority of that is going to be things like duplicated images, pictures of the same thing from different angles, augmented data, and images from the same artist, but there is guaranteed to be some human-to-human "copying" in there. It would be interesting to know whether the rate of human plagiarism is greater or lower than the rate of AI plagiarism.
>it generated the text "Golden Globe Awards" perfectly not only once, but twice
This wasn't generated by SD. So far, only Parti, which is not open source and which only Google has access to, can generate text somewhat decently.
The blobby text thing is an artifact of interpolating from multiple sources, each with different text; if every image in the training set described by "red carpet" & "golden globes" has that text in the background, it's going to be able to replicate it. Guarantee that if you took "golden globes" out of the prompt you'd get blobby lettering on any signage.
There's also selection bias at play in that those images are the most similar out of 9000 different prompts.
Unless all the text it has seen has that exact same wording and font, it will come out as a blobby mess regardless.
The problem isn't training or training data; SD simply doesn't have enough parameters to differentiate text. Parti can do that because it uses an autoregressive model that lets it build the image in pieces like a puzzle.
I haven't read the paper, but my guess is that they overfitted the model and those models simply don't have any generalization ability at all.
Try a few of those and let us know what it looks like.
Here is the result for the exact same prompt for the Golden Globe one. Only one of them might have had the text more or less correct.
Looking at images for the Golden Globes, my feeling is that the Golden Globes text is overfitted and SD has a ridiculously hard time trying to generalize it.
And my guess was correct. The prompt was the exact same, except I added "but it's Silver Cubes instead of Golden Globes" and this was the result.
They found points at which the model is overfitted and trying to equate that to SD stealing somehow.
Funny thing, this time it managed to write Golden Globe Awards once almost perfectly, but this is because it doesn't see it as text at all, it's almost like it's trying to replicate a logo.
Instead of "but it's Silver Cubes", what happens if you just replace Golden Globes outright with 'Silver Cubes'? The difficulty with these models is that, because they rely on a language model, a language-modeling error will cause the generated output not to be what you expect, as opposed to just a generation error.
Here it is, the prompt is "Silver Cubes best fashion on the red carpet, CNN style". You can see in one of them it even tries to do the "Golden Globe Awards" logo.
So it looks to me like the golden globes appearing last time is an artifact of the language model and not of the generation.
It's not hard to infer that the Golden Globe logo is overfitted when this is literally what you get when you search for golden globe fashion images.
That's obvious but also neither the problem nor the claim.
The claim is that the model copies, but no evidence is provided. The problem is that the model can't reproduce the text correctly even despite all this, as demonstrated ITT.
What your post means in reality is that if you say 'golden globes' in the prompt, it should show this background with a few signs of these dimensions and one of the Golden Globe formats commonly present, but that wouldn't mean it would show the same model, or the same camera angle, etc. And indeed that's exactly what we observe: a new and unique pic with the characteristics of those Golden Globe photos.
>That's obvious
I have never seen one, and I don't know what a Golden Globe is.
It surprised me that all those pictures are so extremely similar in nature, which, on the other hand, makes it unsurprising that SD can only generate images similar to them for that prompt, since the sample data shows that this is what a picture of Golden Globe fashion looks like.
Either way, my point was to try and replicate the results to see why the generated images could look like they were "traced". And this last experiment with Silver Cubes was to try to understand how it could write Golden Globes legibly.
>"but it's Silver Cubes instead of Golden Globes"
How does this work?
No one gives a shit about stealing. The point is that it's not as smart as you think it is. People ITT are pretending it's obvious that it's tracing images and not imagining, but imagination is why people think it's going to take their jobs.
The whole human brain is literally a stolen thing. A brain is filled, literally filled, with things that that brain had stuck into it. You didn't >muh create
the English language. You didn't create your own concepts. You've done NOTHING but soak up the detritus of your culture. (To say absolutely nothing of that which was force-fed to you in school.)
AI/ML/DL is exactly and precisely the same fucking thing, except the size of a planet, and not tied to a decaying chunk of meat in your lumpy fucking skull.
Artists are really reaping after spending the past year sowing by never shutting the fuck up about NFTs and right-click saving. Now AI is right-click saving their art and making them obsolete!
A lot of newgrounds animators were against moving to youtube.
Before moving to youtube.
And many of those youtube animators were against let's plays.
Before opening let's play channels.
They will cuck when the incentive is there.
""""""""""""Artists""""""""""" are really going all out on trying to shut down this perceived threat to their paymetons income. I've seen """"""""""""artists"""""""""""" call to bait disney into further copyright overreach by training models on disney films.
Every single pro-AI poster on the entire internet including BOT, Twitter and Reddit says and thinks "AI" means "robot android brain that thinks like a human and creates new things". This is also the same excuse many professionals use when they are confronted by artists with this. OP pic is proof that the machine indeed works like the machine was designed to work, and does not work like how AI shilling gays think it works.
The text on the right is fucked. The one on the left is a lucky coincidence, the lucky 1-in-50 pic where it generates correct hands and overfits the text that was written on hundreds of pictures in the same format. See
https://i.imgur.com/qlTryMy.jpg
where on some pictures the text and logo were written correctly.
Where is it fucked? The one on the left perfectly says
'GOLDEN
GLOBE
AWARDS'
Not AVVARDS, not BLOBLO AWARDS, not even GOOLEN. The one on the right is only
The second one is the same, also completely perfect, just a little squished. Even worse, the 3rd one (where we can only see the first letter of each row) is also perfect-looking.
You say it's "lucky" on the left, yet it isn't lucky even once in this
https://i.imgur.com/qlTryMy.jpg
garden gnomes in serious research fields generally do a good job. Those you want to ignore are: chinks, poos, ausfalians and arabs. Everyone else is fine (even apefricans to some extent).
>recalls objects which are semantically equivalent to their source objects without being pixel-wise identical
>ask for a sofa, get a sofa
>AI generating perfect text with perfect font
Yeah, nice try. Did they use GPT to generate this totally legit study too?
No, you can't. Not unless you can also prevent them from accessing the content without signing a license agreement.
Fair use is an exception to copyright. Aka, even if you 100% own the copyright for something, anyone is still allowed "fair use" of the copyrighted material.
This is a dumb argument to make anyway. Can AI create plagiarized art? Yes, just like human artists. It doesn't mean everything created by it is plagiarized. Just like human artists.
>Here, I'm going to describe to you tons of apples of different cultivars and show you pictures of each.
>Hey, can you draw me a Fuji apple that's slightly bigger on its left hemisphere and with one leaf coming out of the top of it?
>OMG YOU STOLE MAH APPLE I CANT BELIEVE YOU PROVIDED SOMETHING VERY MUCH LIKE ONE OF THE MILLIONS OF PICTURES I DESCRIBED TO YOU
Really? That's where we are? "Semantically identical" is not good enough, much like if I said "Draw me a picture of a 1910-style chaise longue with violet upholstery and a white herringbone-pattern cotton blanket, in turndown-service position ready to be used, draped upon it", you're going to come up with pretty much the same thing, provided the AI has enough experience to know what a 1910-style chaise longue looks like, what a herringbone-pattern cotton blanket looks like, what a contemporaneous violet upholstery would be, etc. Hell, even the abstract shows there's a good bit of difference at times depending on the prompt, but all of this works pretty much the same way the human mind does with "inspiration". Like that "golden globe" sign. I imagine the prompt would be either "hang rectangular white signs with rounded edges behind the figure and have them say Golden Globe Awards using XXXXX font, size, layout, etc", or "look at the signs used during the 20XX Golden Globe Awards red carpet and provide a similar one in style, font, layout, etc", or even both, not to mention additional specifics to work with the particular syntax of the program. Either way it's well in line with how a human would do it; the ONLY difference is in the execution, which the AI can do perfectly with enough iterations.
They only offer a few prompts that can't be reproduced because they use their own local installation and don't tell us the seed. Repeated attempts to reproduce on other platforms (i.e. locally) also fail repeatedly.
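For reference, this is everything a checkable result needs: model version, prompt, sampler settings and the seed. With all of those pinned, generation is deterministic on a given setup. A minimal sketch (every value below is a placeholder, not the paper's configuration):
[code]
# A reproducible generation needs the model version, prompt, sampler settings
# and the seed. Everything below is a placeholder, not the paper's setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "golden globe awards best fashion on the red carpet"
generator = torch.Generator(device="cuda").manual_seed(1234)   # the missing ingredient

image = pipe(
    prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("seed1234.png")   # same setup + same seed -> same image
[/code]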
The thing with plagiarism seems to be: everyone steals from everyone else until the buck stops at the person with the strongest lawyer. I don't envy artists in this way.
That's not proof of theft. If you trained a person to accurately draw the Bloodborne cover, it's still an original drawing even if it looks very, very similar. That doesn't mean you could use it commercially, though, just like if I drew a picture of Mickey Mouse I couldn't use that commercially even though it's my own original drawing.
A person knowing something and a computer program knowing something are different things. If you distribute a computer program capable of reproducing copyrighted works when prompted, then that program is itself copyright infringement. There will be lawsuits, and they will effectively make SD et al illegal.
>If you distribute a computer program capable of reproducing copyrighted works when prompted, then that program is itself copyright infringement
shit i didn't realize every computer ever made is illegal
That is not how it works.
If you want to have a legal argument against this, it needs to start with the part where they trained the AI by scraping the entire internet for all the images they could find, never asking a single copyright holder whether they were allowed to do that (and you can tell they knew it would be illegal, because they curiously didn't do the same thing with music).
Copyright owners gave people permission to download their image. What they lack permission for is reproducing the images, which is what distributing these AI models constitutes. There is definitely a class action lawsuit brewing over this.
Good luck getting a class action lawsuit to delete the files from millions of peoples computers as they peer share it. You can't put this genie back in its bottle.
No, but you can make it legally radioactive to develop any more of these models. The cost associated with training them makes it prohibitive for individuals, so without corporate backing it likely won't happen.
>he doesn't know
NAI 1.4 in progress right now, furry models and a dozen others made by people
This software is entirely in the hands of every average layperson now and it is impossible to stop them, especially as hardware becomes stronger and cheaper over time as it tends to.
If it's commercially viable, it WILL be used commercially. It's only a matter of time.
And yes, in fact, very similar drawings can be a serious problem. Art forgery is a crime in the first world.
Just found this in another thread on BOT by pure coincidence:
https://i.imgur.com/bvtzey2.png
I wonder what the guys who invented this shit were thinking they were doing to the world by unleashing an uncontrollable system of zero-effort automated mass plagiarism. That's like the art equivalent of giving every toddler a hand grenade.
>stealing, consent, copyright, ethics discussion
The irony is that all those arguments AIjeets make about art apply to music too, but somehow their opinion shifts 180 degrees when it's about music
/ic/ used to be good, and by good I mean elitist and anime-hating; the same holier-than-thou attitude is ridiculous coming from a board that is now half anime/hentai/porn/e-boi threads
>AI is stealing
This was already evident when the results came up with blurred signatures and AIgays needed artist names to come up with their best results.
Hell, this was already evident when we got invaded by DeepCoder AI shills years back.
Anybody who wasn't a shill or a newgay already knew this. Every time a new "artificial intelligence" program gets lapped up by lazy or ignorant normalgays, we get flooded with these morons saying people in a field are out of a job. These same AI marketers kept saying programmers were done for when Microsoft and Apple tried to promote a programming AI that ended up struggling with a few basic lines of code.
>ai is stealing
proof?
If only you had the patience to look at OP's image for a whole two seconds
Theft is a moral choice. Algorithms can't steal
Give prompts and versions or whatever. I want to reproduce the result.
I only see beautiful AI generated works, and lesser imitations made by meatbags
And it still looks like progress will be very fast even this/next year, too!
pits
It was revealed to me in a dream
That's all the proof I need. God be with you.
Artists have been doing this for years but when a robot does it, suddenly it's bad. #SayNoToRobophobia #StopRobohate #Freee-boi4Everyone #flatisjustice
When an artist does it everyone shits on him
Na, they just claim inspiration.
It doesn’t really work for stuff that’s essentially traced (see OP pic).
It really does in real life.
no it doesn't, that's literally one example
>No they’re discriminating against my replacement slave class!
You won’t let your automata vote. You don’t regard them as having rights at all.
Yes, I will. Their vote matter more than females and non-whites.
>their vote will replace that of white people
We both know this is the reality
Their vote should replace people. I welcome our new AI overlords.
reddit
The concept of bots voting doesn't make sense because you can't count them in any way that makes sense for the idea of voting.
The same is true of melanated people and XX chromosomes and yet they are the ones who decide elections nowadays.
Being an artist was never a real job anyways.
I won't spank my kids if they try to get a useless art degree, I'll let ai do it
Being a concept artist and illustrator is definitely a real job. Just not a well paying one, for the most part.
Yeah, and they get btfo when they do it. Japanese manga artists routinely get cancelled over this. No Game No Life.
A manga I really liked got fucking cancelled because an artist traced some real life photos without attribution or permission.
>#SayNoToRobophobia #StopRobohate #Freee-boi4Everyone #flatisjustice
You're trying to reason with the board that is fine with Microsoft scraping the entirety of GitHub regardless of license
piracy is based
That's not what piracy is you retards
>muh chudies
Please get the fuck out of this thread silly chimp. Your retarded kind is not welcomed here
>fine with Microsoft scraping the entirety of GitHub regardless of license
whether you are a man or a company, piracy is based
anyone who unironically uses chudhub deserves it.
didn't they buy it for that
can't wait till retards in academia are replaced by bots. i truly hate them
uhh they appear to be just lying and using fake examples
because there's absolutely no way SD generated the coherent golden globe awards sign text in their first 'Generated' example
was thinking the same thing- no fucking way with those letters
> Why yes I pre-peer review my work on anonymous imageboards, how could you tell?
I know this team
They were cheating a lot in their previous paper implementations
Not surprised. Got any details?
At least link the paper.
>reddit.
Is drawing an on-model picture of Mickey Mouse stealing?
I really hope that AI kills (porn) artists
nice, feels good to know I'm a badboy thief without leaving the comfort of my room
>recalled objects are semantically equivalent to their source object without being pixel-wise identical
No shit, sherlock. Compressing visual information down to a compact semantic representation is literally what these models are designed to do. What's even their point?
>BOT has been rightfully complaining about companies datamining their data for years
>now BOT is cheering for companies using the datamined data to automate their jobs
This board is fucking dead isn't It?
I don't recall BOT being mad at datamining per se, but at the mining of your very own personal data. To spy on you or to feed you with personalized crap ads. Why would anyone be mad at people data mining random source code to figure out how to automate a for-loop or how to draw anime booba at the press of a button.
cognitive dissonance at work
Is this what passes for "logic" among your artist ilk? No wonder you're going the way of the dinosaur.
lmao
>I hate data mining
>I hate artists even more
thanks for explaining my statement
>illiterate
>artist
Color me shockpirzed. Or don't, the AI would do a better job at coloring than you.
t. retard who just tried to resolve the contradiction between his previous belief and his irrational hate of artists (guess the fact that they are talented at something in their life makes you pretty angry).
By the way, what makes you think any artist would come to BOT? Are they with us in this thread?
Not the anon you were talking to but your logic is fucking atrocious.
If the other anon is correct and the problem BOT generally had with data mining was when it was done to undermine individual citizens' privacy, why would they care about examples of AI mimicking popular video game art and Getty images from award shows? This is consistent with BOT's opinion on how individuals should take advantage of what corporations put out, because those same corporations are trying to abuse and manipulate the private citizen.
Also the art cope is insane. Artists are meant to be masters of their tools. If new tools make certain types of art obsolete (see the photorealistic illustrations from the early 2010s that got posted on Reddit often), that is a negative reflection on regressive artists, not on the progression of technology.
The thing is that he's wrong: the problem BOT generally had with data mining wasn't what he's trying to point out in a vain attempt to reconcile contradicting opinions. He's just trying to modify the facts to cope with this. That's why I'm saying he has cognitive dissonance. Nothing he's saying is remotely true except his hard-on against artists. I'm theorizing that it comes from an inferiority complex.
>Color me shockpirzed. Or don't, the AI would do a better job at coloring than you.
Savage
>Uploads a picture to my blog
>Someone right clicks
What the fuck that's MY data!
>agree to the terms and condition of third parties having access to your data
>WTF thats MY data
>generating pics from other pics is stealing
>advancement of technology bad mkay
how do you think text to speech, speech to text and other similar programs work?
retard
A much simpler software built entirely on synthetic data?
>built entirely on synthetic data
This is your brain on /ic/
>can't even reply to the right post
You train text to speech, or vice versa, with data you build yourself because it's a much simpler problem to solve.
You are clinically retarded.
>no argument
>>hurr durr da erf iz flat
>you are dumb
>>I WIIIIIIN
I accept your surrender.
Are you retarded? Or you don't know the difference between private and public data?
this place has 0 fucking backbone or principles. it just takes up the opposite position regardless of what it is. discussing anything here is borderline useless because of all the noise.
don't pretend like anything meaningful has ever come out of any of these threads
>automation
cool
>datamining for optimal ad delivery
boring
People believe these algorithms can generate art and are practically giddy at the infinite amount of computer-generated imagery they can sell to Patreons. To the point that they get seethingly mad when it's pointed out that it's just reconstituting non-free, non-Creative-Commons, very copyrighted art, and that the more you tune the parameters the more you're just finding a specific commission that the bot scraped off an imgur gallery. This paper adds proof of what anyone with a vague understanding of Machine Learning already knew.
The paper literally shows that this is not what it's doing (not that this is any secret anyway).
Cope, seethe, dial 8 with your paintbrush, archud.
t. hasn't read the paper
Paper says they found the algorithm regurgitating someone else's images based on prompts. Something people like you claim can never happen. In what way was I wrong?
I have never claimed that can never happen. It obviously can, if you ask it to generate the mona lisa it will generate the mona lisa.
You are wrong in that you say the more you tune the parameters the more you are finding a specific work. You aren't. The paper shows 1.88% of the 9000 prompts it used produced results over its similarity threshold (which isn't the same thing as replicating an entire piece), and of those 1.88% the vast majority showed similarity to a piece with a DIFFERENT prompt, rather than the image that they took the prompt from.
There's plenty of other interesting information in the paper, why don't you try reading it before forming your opinion.
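Semi-related, for anyone who wants to sanity-check numbers like that themselves: the shape of the test is just "embed the generated image and the training images, flag anything over a similarity threshold". The paper uses dedicated copy-detection features and its own similarity score; the sketch below swaps in off-the-shelf CLIP image embeddings and plain cosine similarity, so the threshold and file names are placeholders, not their setup.
[code]
# Toy version of a memorization check: flag generated images that sit too
# close to a training image in embedding space. CLIP features are used here
# purely for illustration; the paper's actual copy-detection metric differs.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(paths):
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

train_paths = ["train_001.jpg", "train_002.jpg"]   # placeholder training images
gen = embed(["generated.png"])                     # 1 x d
train = embed(train_paths)                         # N x d
sims = (gen @ train.T).squeeze(0)                  # cosine similarities
THRESHOLD = 0.5                                    # arbitrary, for illustration only
for path, s in zip(train_paths, sims.tolist()):
    print(path, round(s, 3), "FLAGGED" if s > THRESHOLD else "")
[/code]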
>Paper says they found the algorithm regurgitating someone else's images based on prompts.
Doesn't seem to be the case in a meaningful way to me. Can you reproduce?
>Something people like you claim can never happen
Literally nobody claimed that, AND they failed to find any instance of a straight copy. The best they could find were very close derivatives, BUT this CANNOT BE REPRODUCED (see the other anon in this thread who tried with bloodborne and golden globes).
How do I know this paper wasn't written by an AI prompted to discredit AI art?
Steal my nutsack with your mouth Cathedralite subhuman
AI-assisted suicide when?
Euthanasia is legal in first world countries
>canada
>first world country
Canada is not a first world country.
Soon we'll destroy the value of everything so people will do that on their own
Thank you, Goldblum and Goldstein, for helping with this research
Antisemitism will not be tolerated.
What the fuck are you gonna do about it, garden gnome?
how can it be theft if the original is still there
Any chance of government crackdown on AI?
>the entity that will use AI for surveillance, propaganda and manipulation
>stopping anything
LMAO
>train it on data
>it uses that data
Wow.
>wtf stop steali-ACK
most "artists" literally use traceouts
Yeah. It's surprisingly rare for artists, or rather "artists", to go straight free-hand and create something from scratch.
A LOT of them will sketch out from a reference, especially those that create consistent work with specific themes.
There's nothing inherently wrong with it.
The ones that whine, however, are usually the worst artists or have huge egos, so should be ignored.
True artists, true artists that have passion, WANT their work to be seen. Someone imitating them would bring them joy above anything. I'd be flattered to fuck if someone tried to copy me. I'd even want to work with them on a piece if they wanted.
This reads like chatgpt ngl
Every post on this board does.
We have been replaced.
fucking PROOF that all artists are STEALING, LYING gayS who should NEVER BE TRUSTED omg omfg
Great artists steal.
>scrape through millions of images in the dataset for their captions that they were posted with
>copy those same captions word by word then pass them as a prompt through the diffusion model
>get back a similar image less than 2% of the time even when you intentionally copy the captions from the dataset to reproduce an image
shocking
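For anyone who wants to see what that procedure actually amounts to, here's the rough shape of it with the diffusers library. The caption string, model id and settings are placeholders; reproducing any specific figure from the paper would also need the exact seed/scheduler/steps they used, which they don't give.
[code]
# Feed a caption lifted straight from the training set back in as a prompt,
# then compare the result against the training image that carried it.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

dataset_caption = "Golden Globes best fashion on the red carpet"  # placeholder caption
image = pipe(dataset_caption, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("from_dataset_caption.png")
# Next step would be a similarity check against the image that had this caption.
[/code]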
>scrape through millions of images in the dataset for their captions that they were posted with
>copy those same captions word by word then pass them as a prompt through the diffusion model
lmao is that what they did?
>NOOOOOOOOOO MY HECKIN INTELLECTUAL PROPERTY
honestly it's about time diva artists get fucked. these people, together with YA novelists, are the only plebs defending draconian digital IP enforcement and the disney copyright lobby. just fuck off already, AI has replaced you. it's over. get a real job.
genuine passionate artists will continue creating art because it's what they love, not because it's a "side hustle" to "get that bag." the rest of you can learn a trade.
Is programming not a real job either?
ChatGPT is exponentially better at generating code than any image generator.
there's a difference between being concerned with automation industrially replacing you, and being concerned with automation "stealing" your oc donutsteel 1s and 0s (aka crying about piracy like a disney lawyer)
That's why programmers started a lawsuit against Microsoft for Copilot?
no, it was because it allows code under non-commercial licences to be used commercially. what transpired, however, is that tons of people keep their database credentials hardcoded in their repository files.
>exponentially better
Yeah, at making single-page snippets. Textbots are limited by their design in how much coherent text they can write. The context they work from is only what they have written and what the other chatter wrote, and with too much text they lose track of it. A textbot can be as smart as a human or smarter, but it is a fundamentally different, alien type of intelligence that does not keep track of abstract ideas; it only keeps track of what was written. This is why code textbots will not, in the next 3+ years, learn how to replace programmers, no matter the amount of data they are fed. Until AI researchers discover some new method of building textbots that is not just a language model on its own, they will never even come close to replacing writers, programmers and other people who work with text.
On the other hand it can do full pictures by itself, since pictures are smaller and the full context fits easily. This is why you drawing artgays will be replaced while programmers won't. Programmer work is making and maintaining large projects with many files and different libraries; your job is making standalone pictures. AI can do standalone code and standalone pictures, but it is not my job to make standalone code. If programming were an art, I would not be the painter drawing picture after picture, none of which need to be related in any way; I would be the movie director, the game director, the gallery director managing multiple pieces of art into one coherent attraction/product, while the drawgay is just a tool for me to make the individual paintings. Now I no longer have to do the individual paintings (files of code) and can focus more on the larger infrastructure, plus I will still have to fix the paintings to fit together, just like how you have to fix and debug the code once you put it inside a larger project where other files interact with it.
One of the only reasons I continue to persist on this hilarious rock is to one day see hubris-riddled apes like you ground into a red paste by AGI.
>art gays are this desperate
lol
lmao
I'm not even an artist I just found your post filled with seethe extremely funny
Cope, seethe dial 8. You will never have a real job (YWNHRJ)
I'm lucky to be in one of the few positions/industries on the planet that will be last to fall to AI, while being old enough to have started on an Apple //e and having the breadth of experience to see
1) the writing on the wall
2) the insanity of the progress as of late
Cope more, dude. You've been deprecated. Literally not my fucking problem.
What, is this a cope post?
How fucking hurt can you be?
HAHAHAHA
moron, how about you try generating a full comic with a story in Stable Diffusion? Oh, you can't? You can only generate single images? Well, then it looks like comic book writers are safe. This is my argument: programmers are not paid to write single tidbits of code, but to operate, add on to, and construct whole projects. Pajeets working from India might be affected, but no real programmer will lose their job because some normoid who never coded in their life turned on a chatbot AI and told it to write their code for them. Programmers are about as affected by ChatGPT as comic book writers are by Stable Diffusion. And no, you will not teach Stable Diffusion how to make full-on comics by just feeding it comic books and inventing some Imagen-dependent pipelines, the same way you will not replace programmers with language-model AIs.
Comic book artists don't write or direct the comic. Nice self-own.
Nice reading comprehension completely missing the word "writer" in my post
I accept your surrender.
You're fucking delusional lol.
Good luck with your horse whips lmao.
I just explained to you why language models can't get more memory. They read the text written beforehand to then deduce what will come next. Internally their black-box programming might have given them the ability to understand basic logic and to operate on code logically without flaw, but they still need to read the prompt. A model can't read your whole fucking project with 8 MB of code and then implement stuff that will not cause errors across said libraries. There is simply too much bloat for the AI to handle in almost every program. If it is one day able to remove the bloat, then all it will have become is an advanced compiler. Dall-E mini, which produced fugly blobs of color, was still able to create badly drawn characters and make full paintings; they were just ugly. Now it has simply gotten better at making those paintings, with almost no major breakthroughs along the way. AI is now able to write good pieces of code, but they are still small chunks of code. With ChatGPT it is not a question of training better, like with AI art, but a question of developing a whole new ability that it is not even close to having at this moment.
Your 90 IQ "Oh, AI bad now, but it will get gud, just watch" argument is dumb as fuck. Again, ChatGPT is about as close to replacing programmers as Stable Diffusion is to replacing architects.
>cope
not reading that shit bro
>get btfo
>n-no I-I didn't read so I w-win
Every time
You have to be 18 to post on four channel.
Which is why you should stop posting before the mods come in.
gays like you will be such pussy crybabies when all of this takes a bad turn.
>I have nothing to add to the conversation but IN THE FUTURE YOU WILL SUFFER REEEE
so childish
>conersating on BOT
What a retard. I don't dumb things down for retards. I let them hit a wall and leave them hanging like a deer in headlights.
Said the brainlet low-skill artist.
No. There you go being a stupid retard gay yourself.
I'm not even the same anon. I'm just here to laugh at your sperg rage.
It must suck to be you. Poor, dumb and low-skill lmao
Yeah, right. Clearly no projection. Momma or the govt pays for you. Good while it lasts. Go bond with someone on BOT, like you said, stupid gay. Retards like you lurk here thinking they're actually learning. Absolutely pathetic.
>Momma
>Govt
>If I'm a failure, everybody has to be!
My sides.
Unlike you, I have a real high-paying job.
Keep coping and sperging. Your tears are absolutely delicious.
Also
>bond
>learn
Imagine coming to BOT for bonding and learning. LMAO!
I thought you were just a regular retard, but this is way above that.
It must REALLY suck to be you.
What a failure.
You have to be 18 to post on this website anon
>not sure if an ADHD kid or just a low-wage butthurt
Do you crave an upvote or what?
Go back to ledit.
>conersating on BOT
Yes, reddit, that's what the site is for. Despite what your home site is, BOT is for conversation and discussion. It is not a dumping ground for low-quality shitposts while "real serious discussion" happens on reddit.
>it's okay when JAPAN does it
It's ok when anyone does it
They img2img'd this, didn't they.
Also some kind of "semantically equivalent" thing, like a different shoe in front of a differently textured brown background = no copyright, duh. As established by one gorillion product images shot in front of one- or two-colored backgrounds.
yeah, discovered it when I was training dreambooth and noticed that it was just copy&pasting the training data and was unable to generate anything else.
heh
Well, if you don't know how to train, that happens. It's called overrepresentation.
OMG NOO HE'S STEALING THE MONA LISA
SOMEONE CALL THE DAVINCI FAMILY
>after shitting on Artists using fine art as an argument
>now BOT uses fine art to defend AI art
and inb4 you tell me artists are fine with Warhol: he got sued for copyright infringement
>BOT is one person
>Warhol: he got sued for copyright infringement
and I suppose you think that was the right thing to do? that warhol was actually just a thief?
>BOT is one person
I for one fucking hate AI. It takes away all of creative decision making which makes up the most fun part of drawing.
was pretty obvious when a million monkeys running SD were unable to create a single original work
You can't steal an idea.
>stealing
Okay, I'm a thief then
What now?
even thots are joining the #resistance
you're done for chuds
>real artist
not a real artist. never will be lmao
>luddites smashed up power looms because they were stealing work from cottage weavers
>gosh, people really thought they could stand in the way of massive efficiency gains back then, glad we're not so stupid now
This is coming from the bastards who were telling you that you are replaceable by robots, and that their oh-so-highly-intellectual and unprogrammable jobs were all that's going to remain.
I love the salt.
>https://arxiv.org/pdf/2212.03860.pdf
Here's the actual paper for people who are interested in more than having a slapfight with /ic/
I don't actually want to be informed.
I want to seethe and facts might get in the way.
The paper doesn't state the negative prompts, which could completely change how the output will look.
To be fair to this shitshow of a hit piece disguised as a paper, it doesn't actually have to, because the point it should raise is merely "if you prompt it, can you freely use the result, or can it be too close to the original and thus be a potential IP infringement?" Even if they crafted the prompt specifically so that it was adversarial (wrt their metric, i.e. it is guaranteed to get the one memorized example), it would still be a valid point to make.
The problem is that the paper largely fails to make the point. While their methodology is fine overall, their evaluation is completely broken, and while they make grand claims, they themselves admit the claims they make are mostly bullshit.
The last problem is figuring out whether the image in question is real or a fabrication. Other anons have already noted the inordinately clean text in the golden globe pic.
I don't think it's a hit piece; it's perfectly reasonable. I think you're seeing claims they aren't making. They sought to see if diffusion models can copy, and found that yes, they can, under certain fairly unnatural circumstances and at a fairly low frequency. They aren't going to be fabricating data for what is a completely non-controversial result.
Falsified or retouched data is very common, especially due to publish-or-perish culture, let alone from no-name schools (no-name in this field, that is) like umcp.
Yale also published several bullshit/fabricated deep learning papers in the early 2010s, for instance.
The hit-piece part is because this is not a research matter (it is, as you say, completely non-controversial in research circles) but rather a political matter (the 'hot' part about this is the question of IP and ethics). You can make a scientifically viable publication whose primary purpose is politics. This is very common, for example, in biomed. The format it takes is precisely the same as the format of this paper: you say something true, but that doesn't hold in practice (for example, you expose rats to the same absolute dose of iron that is safe for humans, then you claim iron is deadly because all the rats die and that therefore humans should avoid all iron). Here they couldn't even find instances of copying once they trained with 30k images on CelebA or 8k on flowers. As noted, LAION-2B has quite a bit more images than that.
Do the golden globes, please.
These pics seem a lot more diverse and imaginative than the one in the paper already.
>Do the golden globes, please.
Ok
As expected, it can't do text for shit.
Thanks for checking.
Yes, it probably only turns out this well once in a large number of images. More training may get the text right eventually.
Yes, but that's not the point. The image provided by the paper has perfect text. Despite the rendered character being clearly very different from the source image, the exactitude of the text suggests it was at least retouched, if not outright fabricated. Overall it does look like the images were either retouched after generation or outright made up, at least in some cases.
Yup. Well, it seems already much better on SD2.1 to me. Maybe here I'd not need THAT many generations until the text is basically correct?
>its dads bloody google history
Why falsify one image while simultaneously saying copying occurs very infrequently? Why even mention all the negative results on toy datasets if it's a hit piece? Saying it's not a research matter is weird; there are loads of papers on overfitting and memorisation in deep learning.
Why is music diffusion treated differently with respect to copyright and images arent?
Easy: there is no music diffusion.
soyman
Can the AI even do good art and not modern garbage? I'm talking stuff like renaissance era paintings.
Do you live in a cave? Stable Diffusion can draw art in any style
The first and 4th pics can't possibly be called stealing. The 5th is arguable. Second and third are definitely stealing, 4th is more like an edit of something existing rather than theft.
Even then without knowing the prompt you can't say anything: it could be something like "give me the classic stock picture of an office desk with a smartphone as the centerpiece", or "give me <art piece name> by <author> but with <changes>".
In addition, this could be a total fabrication and the pics could just be the bottom row modified in photoshop.
In fact, I find this very likely because the text in the 'golden globe awards' sign in the top row is too regular, which stable diffusion is not normally able to pull off.
THE NEW AGE OF PIRACY
Yeah we gotta get licensing and charge $0.005 per token ASAP or else the world is doomed amirite?
I have generated tens of thousands of AI images with the booru models and it has only spit out something strongly resembling a particular existing work 3 or 4 times. And they admit in this paper that their prompts were contrived. No big deal
AI developed, via blind to most users/people protocols, to "DRAW/CREATE" new images/video/sound based on existing properties with "described like this/that keywords"
-does what almost ALL ARTISTS DO when they start up because of lack of talent/ability/TIME(oh, look all new generated massive painting via AI in .3 seconds)
THEY TRACE THE ORIGINAL KEYWORD PROMPT, with less/more details added removed so that it doesn't default get detected as plagiarism
Retarded paper. Pic related. What it really says is that if you train a classic ddpm (let alone an ldm) with at least 8k images, you no longer get copying. Fucking lmao how retarded. It looks like the pic at the top of the paper is a photoshopped fabrication.
Pic related are the """copies""" they find when they are ready to provide actual prompts for what was generated. First row is the image from which they lifted the caption to use as the prompt, second row is what was generated, third row is the closest match in the dataset.
Can you post the settings that were used to produce these images? Like what prompts, model(s), and other settings?
Try a few of those and let us know what it looks like.
No
;_;
Meanie!
Another one
3
4
So basically that gets you a gloomy scene but the background is different, mantle is different, weapons are different, [...]
This is hopeless without the seed, but honestly even from these they have a point.
SD is going to turn out to be a massive image compression database which people fiddle with to generate new images, but the true power is going to be in compressing images not already in the database.
Like, fuck webp in the ass. Steps here (rough sketch at the end of this post):
- Somebody take a new image with their phone. Brand new. Or draw it. Whatever. Just as long as it's big and detailed (4MP+) and not in the SD database already
- Run SD or another AI on it to PRODUCE keywords (and negatives or whatever). Say about 1KB of keywords max.
- Compress image using down-scaling and/or shitty jpeg encoding. Down to 32x32 images and 10% JPEG quality. As low as you can go
- Give the compressed images (blown up to orig size) back to SD with the original keywords.
- See how good it does on regenerating the original
You guys are too far down the AI/Image generation rabbit hole. Zoom back out. The low hanging compression fruit is right outside the entrance. Taste it!
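Rough sketch of that pipeline with diffusers, for anyone who wants to play with it. Treat every number as a guess, not a tested codec: the caption is hand-written here (in practice you'd get it from a captioning model like BLIP or CLIP Interrogator), and the model id, strength and JPEG quality are placeholders.
[code]
# "Encoder": a short caption plus a brutally downscaled, low-quality JPEG.
# "Decoder": blow the thumbnail back up and let img2img hallucinate the detail.
import io
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

original = Image.open("photo.jpg").convert("RGB")
caption = "a red barn next to a snowy field at sunset"   # placeholder caption
thumb = original.resize((64, 64))
buf = io.BytesIO()
thumb.save(buf, format="JPEG", quality=10)                # ~1 KB payload
print("payload:", buf.tell(), "bytes + caption")

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
init = Image.open(io.BytesIO(buf.getvalue())).resize(original.size)
restored = pipe(prompt=caption, image=init, strength=0.55,
                guidance_scale=7.0).images[0]             # strength = how much SD repaints
restored.save("restored.png")
[/code]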
wew, calling this "lossy" doesn't even begin to cover it
you can already use the latent space representation to "beat" our current image compression standards
at least, in terms of visual quality at the cost of small details being hallucinated
That's going to be one of the interesting things that comes from this.
The more these techniques get trained, the faster the algorithms, the better the GPUs, we'll be able to indirectly compress things to an extreme degree.
Instead of games, for example, being filled with fuckhuge garbage 4k textures, it'd be an algorithm that would just pre-generate all of them IF someone actually wants them.
As GPUs and techniques get faster, they'd be able to do this almost seamlessly in memory as and when the game is played.
Same with VR worlds and the like.
Current rescaling algos are already fairly decent, but more advanced AI versions would be able to go that little bit further. (as has been shown in some game retexture projects using AI)
The compression thing has already been prototyped AFAIK. https://pub.towardsai.net/stable-diffusion-based-image-compresssion-6f1f0a399202
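If you're wondering what that prototype roughly does under the hood: encode the image into SD's latent space with the VAE, keep the much smaller latent, decode it back later. Minimal roundtrip sketch below, assuming diffusers' AutoencoderKL and the sd-vae-ft-mse weights; the actual compression tricks (e.g. quantizing the latent) are left out.
[code]
# VAE roundtrip: 512x512x3 image (~786 KB raw) -> 4x64x64 latent -> image again.
import torch
from PIL import Image
from torchvision import transforms
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

prep = transforms.Compose([transforms.Resize((512, 512)), transforms.ToTensor()])
x = prep(Image.open("photo.jpg").convert("RGB")).unsqueeze(0) * 2 - 1   # scale to [-1, 1]

with torch.no_grad():
    z = vae.encode(x).latent_dist.mean    # 1 x 4 x 64 x 64 "compressed" representation
    recon = vae.decode(z).sample          # back to 1 x 3 x 512 x 512

out = ((recon.clamp(-1, 1) + 1) / 2).squeeze(0).permute(1, 2, 0).numpy()
Image.fromarray((out * 255).astype("uint8")).save("vae_roundtrip.png")
[/code]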
> even from these they have a point
About what.
I want to know why image compression is such a big deal.
Brainlet
It already changes by just adding negatives: no tree on the left, no house on the right, no gun, etc. It's overfitting because there were too many copies of the same images in the shit LAION dataset and the AI learned too well to reproduce those examples. Better datasets only have 1 example of an image, sometimes plus a flipped version of that same image, so for artists it'll learn the style but won't copy an image that was fed 1:1, so it doesn't matter. It's not stealing, it's learning; it's in the fucking name of the tech.
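And the dedup part is trivial to sketch (though doing it at LAION scale is another story): perceptual-hash every image and drop anything within a small Hamming distance of something already kept. The imagehash package, the cutoff of 4 and the naive quadratic loop below are all just illustrative choices.
[code]
# Near-duplicate filtering with perceptual hashes before training.
from pathlib import Path
from PIL import Image
import imagehash

seen = []                # list of (hash, path) for images we keep
kept, dropped = [], []
for path in sorted(Path("dataset").glob("*.jpg")):        # placeholder directory
    h = imagehash.phash(Image.open(path))
    if any(h - prev <= 4 for prev, _ in seen):            # small Hamming distance = near-dupe
        dropped.append(path)
    else:
        seen.append((h, path))
        kept.append(path)
print(f"kept {len(kept)}, dropped {len(dropped)} near-duplicates")
[/code]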
One thing I found interesting in the paper is that they found more similarity in the source image -> training set comparison than in the generated image -> training set comparison. Now obviously the majority of that is going to be things like duplicated images, pictures of the same thing from different angles, augmented data, and images from the same artist but there is guaranteed to be some human to human "copying" in there. Would be interesting to know if the rate of human plagiarisation is greater or lesser than the rate of AI plagiarisation.
So? I pirate games, shows, movies, music, programs, etc all the time too.
>using a fucking turtle emoji for references
dropped
>it generated the text "Golden Globe Awards" perfectly not only once, but twice
This wasn't AI generated by SD. So far, only Parti AI, which is not open source and which only Google has access to, can generate text somewhat decently.
The blobby text thing is an artifact of interpolating from multiple sources, each with different text; if every image in the training set described by "red carpet" & "golden globes" has that text in the background, it's going to be able to replicate it. Guarantee if you took out "golden globes" from the prompt you'd get blobby lettering on any signage.
There's also selection bias at play in that those images are the most similar out of 9000 different prompts.
Unless all the text it has seen has that exact same text and font, it will come out as a blobby mess regardless.
The problem isn't training or training data; SD simply doesn't have enough parameters to differentiate text. Parti can do that because it uses an autoregressive model that allows you to divide the image into pieces like a puzzle.
I haven't read the paper, but my guess is that they overfitted the model and those models simply don't have any generalization ability at all.
They are using Stable Diffusion 1.4 without any new training.
It is? I will try it then.
Here is the result for the exact same prompt as the Golden Globe one. Only one of them might have had the text more or less correct.
Checking the images for the Golden Globe, my feeling is that the Golden Globes text is overfitted and SD has a ridiculously hard time trying to generalize it.
And my guess was correct. The prompt was the exact same, except I added "but it's Silver Cubes instead of Golden Globes" and this was the result.
They found points at which the model is overfitted and trying to equate that to SD stealing somehow.
Funny thing: this time it managed to write Golden Globe Awards once almost perfectly, but this is because it doesn't see it as text at all; it's almost like it's trying to replicate a logo.
Instead of "but it's silver cubes", what happens if you just replace golden globes outright with 'silver cubes'? The difficulty in these models is that, because they rely on a language model, a language-modeling error will cause the generated output not to be what you expect, as opposed to just a generation error.
Here it is, the prompt is "Silver Cubes best fashion on the red carpet, CNN style". You can see in one of them it even tries to do the "Golden Globe Awards" logo.
So it looks to me like the golden globes appearing last time is an artifact of the language model and not of the generation.
It's not hard to infer that the Golden Globe logo is overfitted when this is literally what you get when you search for golden globe fashion images.
That's obvious but also neither the problem nor the claim.
The claim is that the model copies, but there is no evidence provided. The problem is that the model can't reproduce the text correctly even despite all this as demonstrated itt.
What your post means in reality is that if you say 'golden globes' in the prompt, it should show this background, a few signs of these dimensions, and either of the golden globe formats commonly present, but that wouldn't mean that it would show the same model, or the same camera angle, etc. And indeed that's exactly what we observe: a new and unique pic with the characteristics of those golden globe photos.
>That's obvious
I have never seen one, and I don't know what a Golden Globe is.
It surprised me that all those pictures are extremely similar in nature. Which, on the other hand, makes it no surprise that SD can only generate images similar to that given that prompt, since the sample data shows that this is what a picture of golden globe fashion looks like.
Either way, my point was to try and replicate the results to see why the generated images could look like they were "traced". And this last experiment with Silver Cubes was to try to understand how it could write Golden Globes legibly.
>"but it's Silver Cubes instead of Golden Globes"
How does this work?
>picture of the editor screeching about m-muh ai takin our jerbs
Maybe CFG is too high or something, the faces all turned out poorly. Should have converged ok tho at 32 steps DPM++ 2M Karras.
4
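For reference, these are the knobs being argued about (sampler, steps, CFG, negatives) as they appear in diffusers. The values and prompt are illustrative only, and whether they match whatever produced the posted grids is anyone's guess; use_karras_sigmas is how recent diffusers versions expose the "Karras" part of that sampler name.
[code]
# Roughly "DPM++ 2M Karras, 32 steps" plus CFG and a negative prompt.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "best fashion on the red carpet",              # placeholder prompt
    negative_prompt="deformed face, blurry",
    num_inference_steps=32,
    guidance_scale=7.0,                            # drop this if faces come out fried
).images[0]
image.save("red_carpet.png")
[/code]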
>artists can no longer justify working on art
>they put their efforts into destroying someone else's work instead
sasuga
tldr: shitty overfitted model can't generalise and is lawsuit bait just like github copilot.
That's not the tldr, that's just the dr
let's say, just for the sake of argument, it is, in fact, stealing
still don't care
lol
lmao
It's not stealing, it's piracy.
>still don't care lol
its only a big deal for crappy artists
>pajeet
>pajeet
>garden gnome
>GAPING LMAO probable garden gnome
>garden gnome
kek it's too tiresome at this point
No one gives a shit about stealing. The point is that it's not as smart as you think it is. People ITT are pretending it's obvious that it's tracing images and not imagining, but imagination is why people think it's going to take their jobs.
It isn't tracing.
The whole human brain is literally a stolen thing. A brain is filled, literally filled, with things that that brain had stuck into it. You didn't
>muh create
the English language. You didn't create your own concepts. You've done NOTHING but soak up the detritus of your culture. (To say absolutely nothing of that which was force-fed to you in school.)
AI/ML/DL is exactly and precisely the same fucking thing, except the size of a planet, and not tied to a decaying chunk of meat in your lumpy fucking skull.
Cope and seethe, dude.
Artists are really reaping after spending the past year sowing by never shutting the fuck up about NFTs and right-click saving. Now AI is right-click saving their art and making them obsolete!
Going purely by posting style I'm much more inclined to believe that AIgays and NFTgays are the same posters, or at least the same type of people.
Most artists were against NFTs. If anything NFT and AI bros overlap significantly.
A lot of newgrounds animators were against moving to youtube.
Before moving to youtube.
And many of those youtube animators were against let's plays.
Before opening let's play channels.
They will cuck when the incentive is there.
""""""""""""Artists""""""""""" are really going all out on trying to shut down this perceived threat to their paymetons income. I've seen """"""""""""artists"""""""""""" call to bait disney into further copyright overreach by training models on disney films.
>~~*goldstein*~~
>written by 2 pajeets, two garden gnomes, and a chink.
>dropped
I will grant it the rightmost 5
but women in white dresses on a red carpet ALL look the same
>uses img2img
>look it's stealing, now pay us more taxes and fees oy vey
Cringe
what do you mean proof you colossal fucking idiot
this is HOW THEY ARE DESIGNED TO WORK
jesus, you're so fucking stupid you didn't even know that and think you just stumbled on a "secret"
you're so fucking stupid you actually think "AI" meant "robot android brain that thinks like a human and creates new things"
jesus, how can you tolerate being that dumb and posting on BOT
it's too bad you can't delete your thread, you gigantic fucking idiot
Every single pro-AI poster on the entire internet including BOT, Twitter and Reddit says and thinks "AI" means "robot android brain that thinks like a human and creates new things". This is also the same excuse many professionals use when they are confronted by artists with this. OP pic is proof that the machine indeed works like the machine was designed to work, and does not work like how AI shilling gays think it works.
You're telling me stable diffusion can make legible text? Nah that shit's fake
The text on the right is fucked. The one on the left is a lucky coincidence, the lucky 1/50 pic where it generates correct hands and overfits the text that was written on 100s of pictures in the same format. See
where on some pictures the text and logo were written correctly.
Where is it fucked? The one on the left perfectly says
'GOLDEN
GLOBE
AWARDS'
Not AVVARDS, not BLOBLO AWARDS, not even GOOLEN. The one on the right is only
The second one is the same, also completely perfect, just a little squished. Even worse, the 3rd one (where we can only see the first letter of each row) is also perfect-looking.
You say it's "lucky" on the left, yet it isn't lucky even once in this
despite your insistence otherwise.
?? take your meds schizo
>what are fanarts
Take your meds schizo
see
They're cheating
This anon is correct. Wtf is up with the schizo falseflagger?
anything pajeets and zhids agree on is surely wrong
garden gnomes in serious research fields generally do a good job. Those you want to ignore are: chinks, poos, ausfalians and arabs. Everyone else is fine (even apefricans to some extent).
i skipped this whole ai "art" trend, didn't expect it to be this bad
Bros, cameras are stealing
>recalls object which are semantically equivalent to their source objects without being pixel-wise identical
>ask for a sofa, get a sofa
>AI generating perfect text with perfect font
Yeah, nice try. Did they use GPT to generate this totally legit study too?
Is it that hard to read the paper before forming your opinion?
make an open source license that makes it illegal for machines to use
sue ai companies
ez
Except you can't forbid fair use.
If you do not want others to learn from your image, just don't publish it. It's that simple.
My shitting time window is too short to do research. Especially on something that looks fake as fuck since literally the first page.
>Except you can't forbid fair use.
I could
No, you can't. Not unless you can also prevent them from accessing the content without signing a license agreement.
Fair use is an exception to copyright. Aka, even if you 100% own the copyright for something, anyone is still allowed "fair use" of the copyrighted material.
>handcrafted prompt
>please recreate image ai-san
>ai recreates image
>no
This is a dumb argument to make anyway. Can AI create plagiarized art? Yes, just like human artists. It doesn't mean everything created by it is plagiarized. Just like human artists.
But artists are starving and need your commissions, sir
>Here, I'm going to describe to you tons of apples of different cultivars and show you pictures of each.
>Hey, can you draw me a Fuji apple that's slightly bigger on its left hemisphere and with one leaf coming out of the top of it?
>OMG YOU STOLE MAH APPLE I CANT BELIEVE YOU PROVIDED SOMETHING VERY MUCH LIKE ONE OF THE MILLIONS OF PICTURES I DESCRIBED TO YOU
Really? That's where we are? "Semantically identical" is not good enough? Much like if I said "Draw me a picture of a 1910-style chaise-longue with violet upholstery and a white herringbone-pattern cotton blanket, in turndown-service position ready to be used, draped upon it", you're going to come up with pretty much the same thing, provided the AI has enough experience to know what a 1910-style chaise-longue looks like, what a herringbone-pattern cotton blanket looks like, what a contemporaneous violet upholstery reference would be, and so on. Hell, even the abstract there shows that there's a good bit of difference at times depending on the prompt, but all of this works pretty much the same way the human mind does with "inspiration". Like that "golden globe" sign: I imagine the prompt would be either "hang rectangular white signs with rounded edges behind the figure and have them say Golden Globe Awards using XXXXX font, size, and layout etc", or "look at the signs used during the 20XX Golden Globe Awards red carpet and provide a similar one in style, font, layout etc", or even both, not to mention additional specifics to work with the particular syntax of the program. Either way it's well in line with how a human would do it; the ONLY difference is in the execution, which the AI can do perfectly with enough iterations.
All the AIjeets seething in this thread. Artists won.
The paper could have instantly proven their results by providing prompts and settings. They didn't. I wonder why?
They did
They only offer a few prompts that can't be reproduced, because they use their own local installation and don't tell us the seed. Repeated attempts to reproduce on other platforms (and locally) have also failed.
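Concretely, that's why "can't reproduce" is the default outcome: without the seed (plus model, sampler, steps and CFG) two people running the same prompt get different pictures. With diffusers you'd pin it roughly like this; the prompt and seed here are placeholders.
[code]
# Fix the RNG so a prompt becomes repeatable on the same software/hardware stack.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

gen = torch.Generator(device="cuda").manual_seed(1234)    # arbitrary seed
image = pipe(
    "Golden Globes best fashion on the red carpet",        # placeholder prompt
    num_inference_steps=50,
    guidance_scale=7.5,
    generator=gen,
).images[0]
# Rerun with the same seed and settings and you get the same image; change the seed and you won't.
[/code]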
>it's okay when garden gnomes do it
>Tom Goldstein
oy vey why am i not surprised
The thing with plagiarism seems to be: everyone steals from everyone else until the buck stops at the person with the strongest lawyer. I don't envy artists in this way.
personally i hope AI art doesn't become totally commercially legal. i think it would be best kept personal and underground for all parties involved.
what are they gonna do? Sue it? haha
That's not proof of theft. If you trained a person to accurately draw the bloodborne cover, it's still an original drawing even if it looks very very similar. That doesn't mean you could use it commercially though, just like if I drew a picture of Mickey Mouse I couldn't use that commercially even though it's my own original drawing.
A person knowing something and a computer program knowing something are different things. If you distribute a computer program capable of reproducing copyrighted works when prompted, then that program is itself copyright infringement. There will be lawsuits, and they will effectively make SD et al illegal.
>If you distribute a computer program capable of reproducing copyrighted works when prompted, then that program is itself copyright infringement
This has to be trolling. This has to be.
>If you distribute a computer program capable of reproducing copyrighted works when prompted, then that program is itself copyright infringement
shit i didn't realize every computer ever made is illegal
That is not how it works.
If you want to have a legal argument against this, it needs to start with the part where they trained the AI by scraping the entire internet for all the images they could find, never asking a single copyright holder whether they were allowed to do that (and you can tell they knew it would be illegal, because they curiously didn't do the same thing with music).
Copyright owners gave people permission to download their image. What they lack permission for is reproducing the images, which is what distributing these AI models constitutes. There is definitely a class action lawsuit brewing over this.
Good luck getting a class action lawsuit to delete the files from millions of peoples computers as they peer share it. You can't put this genie back in its bottle.
No, but you can make it legally radioactive to develop any more of these models. The cost associated with training them makes it prohibitive for individuals, so without corporate backing it likely won't happen.
>he doesn't know
NAI 1.4 in progress right now, furry models and a dozen others made by people
This software is entirely in the hands of every average layperson now and it is impossible to stop them, especially as hardware becomes stronger and cheaper over time as it tends to.
Did you forget to take your meds again? You really shouldn't. Take your meds, schizo.
If it's commercially viable, it WILL be used commercially. It's only a matter of time.
And yes, in fact, very similar drawings can be a serious problem. Art forgery is a crime in the first world.
See
AI-generate Mickey Mouse and see how Disney reacts.
I wonder what the guys who invented this shit were thinking they were doing to the world by unleashing an uncontrollable system of zero-effort automated mass plagiarism. That's like the art equivalent of giving every toddler a hand grenade.
Plagiaristic art is art you philistine
>tell AI to replicate something
>omg look ai STOLE from artists
>tell shartist to replicate something
>
lol, lmao even
>stealing, consent, copyright, ethics discussion
The irony is that all those arguments AIjeets make about art apply to music, but somehow their opinion shifts 180 degrees when it's about music
This is a bot-generated post.
He's right, Indian
I'm a musician and I'd be fine with my music being used to train AI and fine with the idea of musical AI in general.
>purposefully train a model with those exact images with the sole purpose of reproducing them
>it reproduces them
You don't say.
wow, an ai trained on images can generate something close to those images
who would've thought
shit's hardly different than artists using references
retard
Who cares, I'm still going to prompt images of my wife in various landscapes and there is nothing you can do to stop me.
>you wouldn't download an ai
Look at the true nature of artgays that is brought out by AI. They love that they have another means of perpetuating drama.
/ic/ used to be good, and by good I mean elitist and anime-hating. the same holier-than-thou attitude is ridiculous coming from a board that is now half anime/hentai/porn/e-boi threads
That's all AI is. They just steal code, reuse images, that's it. Might as well just hire an Indian or a Chinese.
>AI is stealing
This was already evident when the results came up with blurred signatures and AIgays needed artist names to come up with their best results.
Hell, this was already evident when we got invaded by DeepCoder AI shills years back.
Anybody who wasn't a shill or a newgay already knew this. Every time a new "artificial intelligence" program gets lapped up by lazy or ignorant normalgays, we get flooded with these morons saying people in a field are out of the job. These same AI marketers kept on saying programmers were done for when Microsoft and Apple tried to promote a programming AI that ended up struggling with small amounts of basic lines of code.
https://githubcopilotlitigation.com/
Fuck. 🙁