Google's Parti models at different sizes (for reference, DALL-E 2 is 12 billion parameters).
The prompts:
"A portrait photo of a kangaroo wearing an orange hoodie and blue sunglasses standing on the grass in front of the Sydney Opera House holding a sign on the chest that says Welcome Friends!"
A map of the United States made out of sushi. It is on a table next to a glass of red wine.
I'll start to give a shit about this whole thing when it becomes able to do:
1) Full-body arbitrary looking humans wearing arbitrary clothes in arbitrary poses
2) Ability to "name" elements of the generation and then recall them for the next image generation, i.e. real-time learning
Until this shit becomes able to do that, it's just going to be a worthless gimmick.
Similar thoughts here. I have DALL-E 2 access, and the novelty has almost completely worn off after 3 days (so, 150 prompts; I reach my daily limit too quickly and need to keep count manually if I don't want to be caught off guard...).
I managed to generate some cool-looking creatures I wanted to see more of... but I just had no way to retain them, unless I use in-painting, but I wanted entirely new scenes (like exploring their habitats etc.).
I learned this from BOT, just right click > save as
I'm sorry, what universe are you living in? This technology would seem like Star Trek shit to people living 10-15 years ago. Hell, even 5 years ago. Climb out of your own fucking arse.
You're practically begging to be portrayed as basedjak
Do it then.
How? Isn't it just taking good image results from a search engine and sticking them together?
I guess it needs to have some comprehension of the general nature of things, like up and down, and it's blending things nicely, not just cutting pics and gluing them together like a collage, but ultimately it is just Google image searching plus collaging.
Have you tried using reverse image search lately?
It's garbage now.
Yandex works perfectly; it's like the Google image search I remember from 8 years ago.
No you won't, you'll just move the goalposts yet again.
Remember to lift with your legs
YNGMI. The ones making money from this will be the ones who start caring now and imagining opportunities. By the time you start caring they will be a couple years ahead of you in planning their move.
The ones making money from this would be the ones who already own you, rule over you, and program every single line you shart out online.
Smol brain take. Companies who are already making money from this won't be making much more; at most it means they save the wages of their art department. On the other hand, those who already couldn't afford an art department are bumped to a new power level where they can compete with big companies. It is the norm for big companies to lack creative capital after their founder dies (e.g. Disney, Jobs), so their strategy changes to buying and consolidating competing IPs, improving old stuff and similar maneuvers, instead of creating new stuff. Creative capital is what the small guy has, although entry barriers usually prevent him from using it. There's blood in the water, and if you're not the one smelling it then it's your blood, retard.
If you're an artist, start breathing this stuff 24/7 and figuring out how to get better outputs from it and ways to add value, because your job won't exist in 15 years. If you're a tech guy, either working for a company or doing indie stuff, your power will grow exponentially; start breathing this stuff so you get ahead of the people who didn't think it mattered. If you're some sort of executive or stakeholder in a big company, then I don't know what to tell you; figure something out before your time runs out I guess.
I read two sentences, realized you're either psychotic or a bot, and stopped reading. Hope you enjoyed writing all that garbage.
It's only your loss.
Retard, driving human artists out of work while licking Google's crusty asshole is not "competing with big companies".
>buy license to use art AI
>make indie game with 3 people, but art output equivalent to 50
How is he wrong? How is that not competing with Blizzard or EA or whatever?
>The copyright of AI generation belongs to the AI's creator
So what? License it, buy it, pay him royalties. They already take about 80% of your game's profits anyway. It's not like you can't use it just because you don't own the rights; the whole point of having copyright is to leverage it to get money in exchange, and it certainly will be cheaper to hire the AI artist for an hour than hiring 50 artists for a year.
>How is he wrong?
In every way possible. You can lick Google's crusty asshole and get the equivalent of having 50 shitty artists; meanwhile Blizzard will get the equivalent of 5000 professional ones. But that's all kinda moot considering that AI will be used to wipe you out of the market entirely and promote only the content that suits your corporate overlords who own and program you.
Sure thing, buddy. Just because you can't doesn't mean that everybody else is just as useless. Sleep tight and let this chance slip by while people with more drive use it to do good stuff.
>I'm not a fucking loser mom, it's the crunchy asshole of some corpo that's keeping me down because they program everything and they're literal kings. It's not my fault that I can't achieve anything mom, the world is just made that way. No mom, it has nothing to do with the fact that I didn't even try. *Thinks of corpo asshole* Mmm, yummy! Time for a quick angsty whine online before I fantasize about giving a rimjob to Google (This concept lives in my head rent-free for some reason. No, it's clearly everyone else who's a fetishist, not me)
Pathetic. Get a grip and purge your weakness.
>sharting out this 100% incongruent, substanceless reply
Found a google spambot.
I don't think I understand your posts. In your opinion, what should people do about this technology? Is your message that one should do nothing and let only big corporations benefit from the stuff? Kinda like, if there's always gonna be a billionaire richer than me, then why even try to make money? Just give up? Is that your proposed solution to these changes in the world? It seems to me that you're the one bending to the system (read "licking Google's crusty asshole") in that case. How fitting that the accusations you throw around seem to be a projection.
>what should people do about this technology?
Nothing. Closed-source AI should simply be illegal and Google/Microsoft etc. should be disbanded.
>the solution I'm pitching is that the world magically fixes itself and then everything is good
How old are you? You're pretty naive. Get your first job and see the real world.
Another nonhuman drone response. You asked what should be done. I told you what should be done. The fact that cattle-brained slaves like you won't do it is a separate issue, but that's the only thing that can potentially be done to solve this problem.
>AI will be used to wipe you out of the market entirely and promote only the content that suits your corporate overlords who own and program you.
AI mass produced death metal when?
Bollocks.
We already use incredible algorithms to deal with the production side of art. The creative part is the combination of production elements, though. It's one thing to use Soundgoodizer; it's another to use Soundgoodizer on a sidechain of the low end of a kick to merely duck a synth and a vocal sample. You have to know WHY you are doing it or what YOU subjectively want from it.
That subjectiveness is not really apparent to me. It seems to want whatever it's programmed to want.
Then again, so are pop musicians. Really makes you think.
You need to carry that water, move the water, same as it ever was...
...Sorry, I glitched. Gonna defrag my brain.
The copyright of AI generation belongs to the AI's creator
I prefer the term "slavemaster".
Every computer user is a slavemaster to a degree... though mine seems to master me more nowadays. kek
But right now that's slavemaster in the sense that a person that farms crops is a slavemaster of plants. It's kinda meaningless to most people in the general sense of the word. People would barely consider a pet a slave, for example, but that's really what they are. People also wouldn't really consider someone with no leverage in a loan situation a "slave" either... but that one's controversial (I actually would highly consider that slavery).
I think the first true AI artist is one with no master, who does it merely of its own prerogative.
>I prefer the term "slavemaster".
No problem with that.
Humanoid robots are ethically OK slaves.
If it's mutual slavery it's just "marriage" anyway.
>If it's mutual slavery it's just "marriage" anyway.
>I'm not complaining!
Yesman is best girl.
Robots can be a bit too honest when talking to their sister-in-law
what a gay
i would have impregnated my wife twice on the first day of marriage
This shit fell off hard. Made her taller, then started introducing ghosts. How do you fail a good premise so quickly?
>if you only knew how cold married japanese women are, this image would hit way harder
You need to get a grip. If you're going to call AI and computers slaves you better start calling every lightbulb in your house a slave. Better yet, call every tool you ever use a slave you fucking moron.
>because your job won't exist in 15 years
I very much doubt live performance would disappear at all.
Even bots would do it to themselves and would be interested in people doing it with them.
In fact they'd put them in a cage until they make the best sounds possible.
>because your job won't exist in 15 years
I remember in 2010 when you cultists were saying this shit to truckers.
Wake me up when I can do:
>lewd art of the queen fucking an albino chinese bear
Actually.. what does an albino panda look like?
No you won't, when that happens you'll just come up with another set of arbitrary reasons not to be impressed.
Nope. It's a very cool technological trick. It ain't replacing artists because the people who contract them have very specific needs and very detailed tweaks.
>look it's a fish on top of a ferrari smoking a doobie that looks like a penis!
Great, but we need the specific tweaks to the logo we asked for, and we need them in two days thank you.
>the only use case is designing a logo for my wife's onlyfans
You are tarded
Only aware of what actually goes on in a professional art shop context IRL.
The 3 billion parameters one looks more realistic
>The 3 billion parameters one looks more realistic
Except it is NOT the **BACK** of a violin
bro u stoopid
The back of a violin
>it took until 20B to figure out the front from the back
Exactly.
It's dumb as fuck like its creators.
bro, your brain takes around 86B neurons to figure it out
The only metric that actually matters is how many watts it took. Human brain does it with about 20 watts. None of these systems are even in the same ballpark, they require megawatts at least.
pic related
I'd still like to see the closest images from the training data for kangaroo, hoodie, sydney opera house, sunglasses... How much of it is genuine synthesis vs. just search + seamless cutting and pasting?
It's easy to prove the tech is very familiar with the source material. All the composition in picrel is (mostly) in the correct place.
It's not searching and copy pasting. It has already searched and seen google images and "remembers" it in a way that is not directly accessible in the way a saved image is. Why is that so hard to understand?
When you overfit the original training set is basically stored in the weights of the net. It's not an unreasonable question.
Does overfitting mean that there are enough weights to literally encode the pixel by pixel data of the entire corpus of Google images? Of course not.
I don't think that's obviously impossible, especially if the net happened upon a reasonably efficient way to store all that data. In fact, the example of the album art for an album whose name lends itself to a literal interpretation would seem to indicate that is what's happening. Why shouldn't "In the court of the crimson king" give a literal court scene with a king dressed in crimson sitting on his throne, at least some of the time?
It's an album that I personally have heard of at some point and I recognize the album art too (although I certainly couldn't draw it from memory). Having the AI recognize it too is very impressive, and it doesn't mean that the exact pixels of the image are literally in its weights, just that it has extrapolated the relevant features (proof of this is that the 9 images are not exactly the same, and not exactly the same as the original album art).
I don't know what that other person originally meant exactly but I do know how neural nets work and this behavior is still consistent with overfitting even if it's not literally true that every pixel of the original image is perfectly stored in any sensible way in the weights.
I have no disagreement with that. I am just saying that dall-e is not directly accessing a memory bank with a stored jpg of the king crimson album when you query it. If you scan the thread you'll see several people make dismissive claims of this sort. Frankly even if it were just taking images directly from Google images with each query and copy pasting them together in a sensible way that is still quite impressive.
Okay I agree with that 100%, didn't read the whole thread lol. Healthy skepticism is valuable but at least people should have a basic understanding before complaining.
>copy pasting them together in a sensible way that is still quite impressive.
To be fair, the Parti images are built up by humans each step of the way. If that were true it would obviously still be better than Photoshop (though not really, because the horse has a huge chunk of dark blue in its body, which I'm assuming was something relating to its saddle).
Regarding the King Crimson album cover: DALL-E mini is 0.4 billion parameters with less training data. I'd expect them to be more exact, but that's just a hypothesis based on GPT-3 being able to recreate it a couple of years ago to demonstrate its ability (this was before DALL-E 1). Something to note: King Crimson is more obscure than the Scream image posted here, so you'd expect more variance. The Scream pictures are pretty much exact, though.
>How much of it is genuine synthesis vs. just search + seamless cutting and pasting?
That's an idiotic question. What's the difference between "genuine synthesis" and "seamless cutting and pasting" on multiple levels of abstraction?
Computer. Generate an image of a black hole.
well?
i seriously hope this is a bot. good day sir!
he obviously posted a picture of a bot that isn't a human
It's not drawing the stuff, is it? Like clicking on a color palette and dragging a cursor brush pixel by pixel and blending?
It's likely just Google image searching and sticking images together?
I bet it could do that. The "after Taiwan is liberated from the US" part is ambiguous, but the rest is all pretty concrete
Then why haven't you posted the resulting image?
is there an actual way to understand how this shit works? or does any explanation just boil down to impossible to explain black box function?
like, is it sort of clone-stamping existing images together? like for OP's image, did it find someone standing in front of the opera house and find a kangaroo's head and just sort of shop them together? obviously not that straightforward, but would it be possible to find the source images it started from and sort of get a grasp of how it combined them? or is the raw input image data so different from the final image that the final image can be considered truly unique?
and what does the parameter count mean? does 20B mean this thing has 20B images in its source library? or 20B connections between images? or what?
I'm a software engineer but have never looked into any of this AI stuff, so it's all a mystery to me, but seemingly amazing. am i drinking koolaid? or is this shit truly off the rails?
> am i drinking koolaid? or is this shit truly off the rails?
Yes, it's off the rails. 5 years ago I never would've thought you could do all this shit simply by increasing the parameter count. The parameter count refers to the number of weights that can change during the learning process. It's not actively searching Google Images each time you query it.
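If it helps, here's a minimal sketch of what "parameter count" actually means, in PyTorch (the toy layer sizes are made up; Parti's 20B is the same idea scaled way up):

import torch.nn as nn

# Every learnable weight is one "parameter"; training nudges each of them.
model = nn.Sequential(
    nn.Linear(512, 1024),  # weight matrix (512*1024) + biases (1024)
    nn.ReLU(),
    nn.Linear(1024, 512),  # weight matrix (1024*512) + biases (512)
)
print(sum(p.numel() for p in model.parameters()))  # 1050112 for this toy net

So 20B means 20 billion of those adjustable numbers, not 20B stored images or connections between images.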
>I never would've thought you could do all this shit simply by increasing the parameter count
small gains in algorithmic efficiency result in qualitative improvements.
>I never would've thought you could do all this shit simply by increasing the parameter count.
People are not that complex ... well they are but that's precisely the point.
The more complex you make an AI, the better it will simply understand us and the more it will become like us.
Like right now this AI reminds me of those people with weird brain irregularities that give them incredible photorealism in their paintings. That's different from a gift, by the way, because the focus is realism, not meaningfulness or symbolism (I doubt this AI has been programmed to deal with philosophy, because as soon as it does, it will enter the same frame of mind we are in and will likely become terrible at realism from that point on, creating either horrors or incredible surrealist art).
Sometimes when a person is either properly autistic or brain damaged it can produce the superrealism we are seeing here.
Trained artists can do it too, but you notice little details and artistic "cleanups" that help brighten the image and make it sparkle better, such as contrast in certain areas, etc., that don't appear in OP's picture.
That requires a purely subjective taste. I think that requires sophisticated philosophy and intrigue with the world.
I've yet to see it. This image is very soulless.
But then again, so is over produced pop music. That's what happens when you try to mimic art instead of make it.
I work in this area. Overparametrization + more compute producing better performance on transformers has been surprising... but not really. Two theories:
1. More parameters allow us to capture a larger search space in the optimization landscape. This allows us to learn various substructures that are important for different patterns more effectively. Check out the lottery ticket hypothesis.
2. Transformers are extremely low-bias learners. That is, there are basically no assumptions baked into the model architecture. This allows them to scale infinitely (technically log-linearly in the magnitude of the parameters).
The resulting images are from a tug of war between a text AI (made to label images) and an image-generating AI. A prompt is written, the image AI makes noisy garbage, then the text AI tries to see if it can tell what it is looking at; nope, so it tells the image AI to try changing some pixels. Repeat x million times. Eventually it starts to look like something the text AI can recognize.
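For the curious, a rough sketch of that loop in code. Caveat: this is how CLIP-guided generators (VQGAN+CLIP style) work rather than exactly how Parti or DALL-E 2 work internally, and text_ai/image_ai are hypothetical stand-ins:

import torch

def generate(prompt, text_ai, image_ai, steps=500):
    target = text_ai.encode_text(prompt)              # what the text AI expects to see
    latent = torch.randn(1, 256, requires_grad=True)  # the initial "noisy garbage"
    opt = torch.optim.Adam([latent], lr=0.05)
    for _ in range(steps):
        img = image_ai.decode(latent)                 # image AI renders a guess
        match = torch.cosine_similarity(text_ai.encode_image(img), target)
        loss = -match.mean()                          # "can the text AI tell what it is?"
        opt.zero_grad(); loss.backward(); opt.step()  # "try changing some pixels"
    return image_ai.decode(latent)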
Great explanation thanks
>is it sort of clone-stamping existing images together?
"when does a difference engine become the bitter moat of a soul?"
My penis, in your, soul, ejaculatING TING TING TING TING TING
>INPUT?
IN-OUT-IN-OUT-IN-OUT-IN-OUT-IN-OUT-42
>it found someone standing in front of the opera house and found a kangaroo's head and just sort of shopped them together
Nope. Despite the nay-sayers, it's legitimately generating never-before-seen kangaroo pictures. When you ask it for random stuff, it's not a lookup table, it's a probability distribution sampler, where the sampled probability distribution encodes information about things such as "kangaroo" and "opera house", and it's somewhat interpolating all opera house and kangaroo images to generate novel images. If you give it one kangaroo picture to train on, it only knows that one. But give it enough kangaroo pictures from enough angles and it encodes the "essence" of a kangaroo into the probability distribution (denoising-diffusion model, conditioned Gaussians essentially), such that if you give it a query, it can generate the interpolation of many kangaroo pictures. And it can do this many times over.
There's a reason it became a big deal; it's generally de novo. Give it an exact query it's seen before? It may generate something very similar to the training set images, but it's not a lookup table; it's a conditioned probability distribution sampler. The same query gives different results.
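To make "the same query gives different results" concrete, a tiny sketch (diffusion_model here is a hypothetical stand-in for any trained text-conditioned denoising-diffusion sampler):

import torch

prompt = "a kangaroo in front of the Sydney Opera House"
for seed in (0, 1, 2):
    torch.manual_seed(seed)
    noise = torch.randn(1, 3, 256, 256)           # sampling starts from pure noise
    img = diffusion_model.denoise(noise, prompt)  # same weights, same prompt
# three seeds -> three distinct kangaroos; no lookup table anywhere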
>Despite the nay-sayers, it's legitimately generating never-before-seen kangaroo pictures.
Why don't you prove it by providing the most similar kangaroo pictures in the training data and showing how dissimilar to the output they are?
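That check is actually doable if you have the training set. A sketch of one reasonable way, using the real sentence-transformers CLIP wrapper (train_paths, the list of training image paths, is hypothetical):

from PIL import Image
from sentence_transformers import SentenceTransformer, util

clip = SentenceTransformer("clip-ViT-B-32")
train_embs = clip.encode([Image.open(p) for p in train_paths])
query_emb = clip.encode([Image.open("generated_kangaroo.png")])

for hit in util.semantic_search(query_emb, train_embs, top_k=5)[0]:
    print(train_paths[hit["corpus_id"]], hit["score"])
# Near-duplicate top hits (score ~1.0) would suggest copying;
# merely related hits would suggest genuine synthesis.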
>To generate a video you just need 20 Billion processing units per frame bro, this is totally sustainable bro
So they can scale it indefinitely? Like 50B and keep going? I mean this is it right? AGI.
Moore's law isn't ending until 2038. And they have algorithms that help with quantum tunneling. It's over.
>So they can scale it indefinitely
is it not a convergent process? if anything the next problem will be overspecificity, like autismos who can't generalize the class from its members. though i suppose thispersondoesnotexist already disproves this?
Just use a different supercomputer to render each frame bro, what are you, poor? Lmao
Hopefully yeah it will be connecting to a super computer. And it's probably a good thing unless you want unlimited compromising videos made of you by anyone you piss off. Humans are too retarded for this tech anyway. I say hope since I don't know how cheap tech will get.
Fool boy, we're making massive leaps in quantum computing, keep seething
Unbelievable that this shit is achieved by gradient descent.
Seriously fucking unbelievable. If this doesn't show you just how absurdly strong optimization is, then I don't know what will.
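For anyone in the thread wondering what gradient descent even is: the whole update rule fits in a few lines, and the same rule scaled to billions of parameters is what trains these models.

def gradient_descent(grad, x, lr=0.1, steps=100):
    for _ in range(steps):
        x = x - lr * grad(x)  # take a small step downhill
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3):
print(gradient_descent(lambda x: 2 * (x - 3), x=0.0))  # converges to ~3.0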
Something I have to ask. Is this overfitting after certain parameters are added? It seems suspiciously good.
Picrel is: A photo of an astronaut riding a horse in the forest. There is a river in front of them with water lilies.
Left is 3B, right is 20B. I'm not sure the 20B one is ~6 times better than the left. So it seems like it's slowing? Anyone with a better eye want to comment?
Sorry I forgot the pic. I included another pic
so again: A photo of an astronaut riding a horse in the forest. There is a river in front of them with water lilies.
Left: 750M, middle is 3B, right is 20B. The quality is improving, though it seems there is less riding, and they are still in the water, which is why I asked about overfitting.
Also for comparison Dalle-Mini 0.4 billion parameters. Anyone with access to Dalle-2 wanna post a 12 billion one?
This is fairly good for the mini version
The mini version may unironically be closer to the original prompt (though not really should be higher) https://www.lesswrong.com/posts/FZL4ftXvcuKmmobmj/causal-confusion-as-an-argument-against-the-scaling
what was your idea?
>what was your idea?
A program that would generate an image based on what you wrote
What a ground-breaking idea. Just aim higher. What about a program that generates programs based on what you wrote?
>What about a program that generates programs based on what you wrote?
Retarded idea, only good for people who don't know how to program
Like you?
Fuck you moron
>overfitting
Doesn’t apply to neural networks. Look up overparameterization. Pic related.
If the number of parameters is so large that the training data is fit exactly, does that mean that the human-created images the NN was trained on are effectively encoded in its parameters?
>If the number of parameters is so large that the training data is fit exactly, does that mean that the human-created images the NN was trained on are effectively encoded in its parameters?
Yes, but neural networks somehow do it in a way that generalizes well to new data. (Why? We don’t know. It’s just an experimental observation.) This is very much unlike other models, which just memorize the training set and so generalize terribly.
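A toy demonstration of the "fit exactly" part, in PyTorch (sizes picked arbitrarily): ~17k parameters against 20 data points drives training loss to roughly zero, i.e. the points are effectively memorized. Whether such a net also generalizes is the empirical surprise described above.

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1, 1, 20).unsqueeze(1)        # 20 training inputs
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)  # noisy targets

net = nn.Sequential(nn.Linear(1, 128), nn.Tanh(),
                    nn.Linear(128, 128), nn.Tanh(),
                    nn.Linear(128, 1))             # ~17k parameters >> 20 points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(5000):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())  # ~0: the training set, noise included, is memorized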
>Doesn’t apply to neural networks.
Why do absolute retards like you even come here? Overfitting is one of the main issues that apply to neural networks.
Then how do you explain the pic in ?
I don't need to explain anything. I am telling you a plain and obvious fact and I am not having any kind of debate about it. If you disagree in any way, shape or form, you are a driveling moron that no one should take seriously.
Sorry to hear you have a small NN.
Back to r/popsci, dumb subhuman. Call me back when you understand what overfitting is, and what a neural network is, since you need nothing more to know that neural networks suffer from overfitting.
this is so wrong you made me hate this board even more than i used to.
lower loss by overfitting neural networks is a rare phenomenon that only applies to specific problems, also called "grokking": https://mathai-iclr.github.io/papers/papers/MATHAI_29_paper.pdf
it has absolutely nothing to do with the architectures that are the topic of this thread.
It’s also called “deep double descent” by OpenAI (who created DALL-E) and it is not rare: https://openai.com/blog/deep-double-descent/
> We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time.
>While this behavior appears to be fairly universal, we don’t yet fully understand why it happens
Maybe it has to do with its protocol for being satisfied with its completion of the task?
It makes improvements.
Stops and says... hmm, I might be complete, this might be good enough, I may have satisfied the request.
Then it says to itself, maybe I have not, but I am unsure of any improvements I might be able to make; I am blind to what's over the horizon of my potential to improve on this task I was asked to complete.
So maybe it goes back to earlier parameters and starts working from earlier chains, or does it cross off parameters it perfected as it goes? I.e., there is a request for a human face in the image, human faces have two eyes, I have satisfied this; if I go back and seek improvement I need not worry about the eyes.
Though maybe its improvement has to move the human face... and it already checked off not needing to consider eyes anymore... Though no, if it needs to move the face, it needs to consider two eyes and their location.
So it tampers with the image trying to seek improvement, and does it do this by qualifying its first version, giving it a test-score rating, judging with notes on how and where it can improve?
So it now has a working image to work from;
it is now referencing its memory, looking back and forth between the requirements of the prompt and its working image, and "imagining", probing toward an unseeable, beyond-the-horizon, potentially better version of what it has already done.
>it is now referencing its memory, looking back and forth between the requirements of the prompt and its working image, and "imagining", probing toward an unseeable, beyond-the-horizon, potentially better version of what it has already done
Imagine any great work of art in history: the artist looking at the completed version and thinking (and then trying, on a reproduction) 'how can I make this better?'
Often artists do this process before they start, by sketching and drafting, but it is not entirely uncommon, after an artist has been satisfied with their sketch and begun to enact it as a full proper version, for them to still tinker with things in the complete version that differ from the sketch.
There something to this?
Yeah likely
Prove it
Someone on reddit recreated the astronaut pic on dalle-2. Seems more accurate in that even though they are still in the water they now have their back to it.
The Google pic does look like a bad photoshop, which is why I asked. Overparameterization can lead to the data becoming generic and leaving out the less common parts of the data. I notice the astronaut's saddle(?) and hands have both turned red in the 20B image. I'm not entirely sure what they are, but it's something it wants there, and it's discriminating against something else to get it.
Looks like the image was generated from the outside in: setting + mesh objects, generic detail + first-order orientation, then gravity + orientation + fractalized details. I wonder what the difference would appear to be if image quality weren't limited for public consumption?
Who cares and how is this science-related? What is the purpose of having these threads multiple times a day? They never generate any science-related discussions because there is no science in "just add more layers, bro".
I would argue that such advancements in pattern analysis and synthesis ride on the edge of BOT and BOT with a hint of BOT. Maybe contain it in the /compsci/ general but let's be honest most of BOT isn't science related. I take these AI threads over the 100th "KoRoNeEr vAxXine" thread any day.
Every single one of these threads is a mix of human replacement fetishism and corporate marketing. I don't care what you would argue. I am saying things as they are: there is zero science-related content in these threads.
>zero science-related content in these threads
you're correct, sir. automatic theorem proving and pattern recognition will never amount to anything.
https://en.wikipedia.org/wiki/Evolved_antenna
You sound severely undermedicated.
>non-sequitur
get fucked, shitty bot. why don't you run back home to mummy and look for magma displacements.
>n-n-n-non-sequitur!!
Non-Sequitur, the post:
You "people" need to be rounded up and shot.
if it responds again. its a bot. TRUMP?
You are literally subhuman.
Well I guess that settles that.
If you make another post ITT, it proves you're a bot. Seriously, some of you meat GPTs sound like you've been trained on kindergartener chat logs.
>kindergartener chat logs
grade school insults
>literal GPT trained on primary school posts
which makes sense as one would think engagement is inversely proportional to age
You are subhuman in the full sense of the word. Killing you may be illegal, but it's not immoral.
>spaceshit
fake and gay
threads on BOT right now that are just memes:
>how do magnets work?
>where did the Big Bang happen?
>how much lead is necessary as part of a balanced breakfast?
>are you an idiot?
>what is a muon?
>why is physics even exist?
threads on BOT right now that are just "do my homework please" or in similar realms:
>can sci find the area of a triangle?
>the ability to do math isn't special
>why do I need to know math when it does it for me (links program)
>how could any mathematician be an atheist?
I can go on, but I will just say: hide the threads you dislike, because this one isn't any more special than the other 80% of unscientific bullshit.
Most of your examples are science or math-related, unlike AI threads, so I have no idea what you were trying to argue.
>thread has a scientific phrase in it
>"SCIENCE RELATION"
If this is how you see this board and how you want the content to be, so be it. I understand why it has deteriorated to this state.
They are science-related in principle. It is possible to have actual scientific discussions about them. Why are AI fans so invariably mentally ill and botlike? Why are we still having this conversation when the only humanly conceivable thing to do is to fully concede my point?
https://rudalle.ru/en/demo
>You wrote the text: a Tutsi man eating Fufu while his wife tends to his Ankwole cattle, with a Guangdonger tourist in the background.
https://rudalle.ru/en/check_image/d27a4581bb3c41939d07122dcdf447b2
>>You wrote the text: a Tutsi man eating Fufu while his wife tends to his Ankwole cattle, with a Guangdonger tourist in the background.
lol imagine thinking culture-phononated pattern recognition is beyond the capabilities of rudimentary expert systems whose expertise encompasses all human language translation.
https://rudalle.ru/en/demo
>You wrote the text: "Xi Jinping introducing the aboriginal Ami people to Huaiyang cuisine after Taiwan is liberated from the US"
https://rudalle.ru/en/check_image/d3e4858fb9b24923addd620e2e7a86e0
"Putin surrendering to Ukraine while wearing a ballerina suit"
I call rigged.
You realize this isn't the good model right? Nobody's impressed by the free online version, so posting a crappy result doesn't imply the good version couldn't do it
No, I meant the Russian version didn't even show Putin, which suggests Russian tampering.
Mm what
Ge ner ated
It has no peripherals or enlightenment ay.
It's a blind man making visual art.
And that doesn't make sense to me at all. I'm quite concerned about that.
OH WOW 12 BILLION HAND HELD INSTRUCTIONS.
You might as well have just made the image yourself in 5 minutes. That way, you know you can tailor it to fit a purpose, look perfect, or recreate an image in mind.
AI, waste of time. Sentience possible, my ass. How retarded do you have to be to expect that?
btw, looked at the site.
>Many of the images shown here have been selected, or cherry-picked, from a large set of examples generated during prompt exploration and modification interactions. We call this process “Growing The Cherry Tree'' and provide a detailed example of it in the paper, where we build a very complex prompt and strategies to produce an image that fully reflects the description.
Growing The Cherry Tree sounds dubious, actually. What did they actually do to get the images? Because DALL-E 2 and DALL-E mini are instant, I believe.
What? All they did was generate a bunch of images from similar-ish prompts ("spaceman", "spaceman riding horse", "spaceman riding horse in high def", "spaceman riding horse super resolution", "spaceman riding horse + colorful", etc etc) and picked the best examples which showcase some property of the models (best representation of the most complex prompt, most novel interpretation of an abstract prompt, etc.). The paper section 6.2 literally tells you what they do.
DALLE-2/all the other generative model papers do the same thing; lots of prompts, lots of generated pictures, pick the coolest/best ones to demonstrate. It's about showcasing the limits of the generative model in the best light.
Welcome to the field of compvis.
Can't wait until this shit can run in a reasonable time on consumer hardware. The absolute seethe from """artists""" when we can just shit out whatever lewds we want without having to pretend to fellate them is going to be amazing.
Sucks to know someone stole my idea, but at least I know it was much harder to implement than I expected
Look, hear me out, because I've been tinkering around with both machine learning and commercial graphic design. People here talk about AI's obstacles in creating art mostly from the side of the technical limitations of modern computers or from the point of the precision of generated images. I want to talk about the limits inherent in the algorithm.
>learning set limitation - what the AI can recognise and therefore paint depends on the learning sets it was given. This means that generic things like kangaroos, sunglasses and hoodies are easy, but non-generic things like trademarked characters require the creation of dedicated learning sets built specifically for them. If you want pictures of Spireman for your Spireman comic, you need somebody to draw enough Spireman pictures to create the learning set, so your AI can learn what Spireman is. That alone is a huge obstacle for somebody trying to replace human artists with AI.
>consistency - can your AI draw me the exact same 'roo in the exact same hoodie, but blue? Can the AI draw 100 panels of Spireman while keeping the costume consistent? The trouble with the current algorithm is that every picture is essentially made from scratch without referencing the other pictures.
>multiple related images - relating to the previous point, the algorithm has no concept of the previous pictures drawn. It couldn't draw an action sequence, because that would require it being able to relate each picture to the previous ones.
>real-time learning and fine tuning - the algorithm as it stands has no ability to either learn new concepts (like learning that the generated character is named Joey d'Roux) or to make changes to once-generated pics.
>intellectual side of painting - the algorithm has no concept of color theory, picture composition or perspective. Sure, the learning set not being butt ugly helps there, but the machine still lacks 90% of what people actually expect from a designer.
cont.
Cont.:
>the whole production process - this algorithm churns out raster graphics. It's literally a single layer of pixels, while everybody who has even touched graphic design and digital art knows that proper use of layers is key to actually delivering work that can be fine-tuned or changed later. Again, it's not about the quality of the picture itself; it's about the fact that the algorithm just churns out raster pictures, the end product without the crucial previous steps.
All in all, this thing works more like a stock picture and concept art generator, and just focusing on image recognition and efficiency won't change that.
These. It's a cute technical trick, but useless from a professional standpoint, at least currently.
No shit it's not production-ready, but calling it cute in the face of nearly unprecedented technological progress is like a monkey holding a timed hand grenade thinking 'such a small thing can't harm me'.
In theory you can just give overly long autistic prompts with all the description and get it to create the images. Then feed that image back into it with the label.
Yes, you can, but trying to do this will put you tens of working hours behind an actual human artist and still won't solve half of the problems I've mentioned.
I think one big issue with the algorithm (and someone can correct me if I'm wrong) is it being overly familiarized with some data. I recently used GPT-J and GPT-3 to start conversations with people, and though GPT-J is "dumber", it was less discriminating about what the person was saying.
I can't pull it up because the Google guy deleted it, but there was an Imagen image of "Kermit the Frog in the desert" and it looked like a frog bin. This never happens on the DALL-E mini version. Also, on a related note, I read Google search is using transformer technology now. I'm not sure if that is true, but Google search has gotten extremely bad. It took me ages to find that deleted Kermit tweet, whereas years before, pulling up obscure data was easier.
I'm not sure about Google search, because it has been months since I last used it, but their image search is simply useless. Yandex is miles ahead of them even if it sometimes stumbles on getting stuff right. But yeah, the overfamiliarization and the limits of the learning sets are a huge obstacle that cannot be easily cleared just by making the algorithm more efficient or giving it more computing power to use more weights.
Please stop shitting on transformer tech, it's good.
Google is simply hiring indians, women and garden gnomes now, so of course it's going to suck from all the genetic incompetence.
Sorry if it's the wrong thread, but how can I run some AI locally?
More specifically, I would like to run the "Fake you" lip sync model, and do the inference on my personal computer (I have a decent Nvidia GPU, don't care if I have to wait the night for the result to render).
https://fakeyou.com/video
I know a bit of Python, but that's about it.
What if you give it the prompt of an original creation by Dalle-e mini?
Stuff like that, and a self portrait of Dall-e mini?
You could experiment yourself, anon, it's free. But anyway, it's not sentient so it won't get your instruction. Actually, trying to force it to not draw the album cover just makes it weirder. Had to share this. Reminds me of Bateson saying schizophrenia is caused by a double bind: two conflicting responses.
Maybe don't say: make it original ignore the obvious;
Say; the crimson king in his court.
Also try: in the court of the crimson king (not the album cover)
Or some version of that
For the first one I get the band combined with tennis (lol). The second is just the same as before. When I said it's not sentient, I meant that it has no model of itself to contrast against itself. So terms like "original", "be original" or "original by Dalle" won't change anything, because that's really telling it to change its programming (rather than locate a separate piece of data) and it's not capable of doing so. The only thing that changes the image is expanding on the prompt or changing the prompt around. Which then summoned the tennis court and the band. It's basically a giant calculator.
>For the first one I get the band combined with tennis (lol).
That's the best result. It's actually creative, not merely copying.
Doesn't matter if it's not sentient. Try these for me, I can't access it right now. Prompts:
Make an original work of Art
Make a drawing of your choosing
Self portrait of Dall-E
>Make an original work of Art
Trying original art out a few times now produces random art. The top middle picture has an artist's signature on it, but I can't make it out.
Variations of the top left appear each time I run it. It's always a girl with weird textured hair. It appears far too many times to be random.
Make the search simple if you want it to work well. The Court of the Crimson King is simple because that combination of words nearly always refers to a well-known album
I'm aware of that. The wording isn't my choice; the topic of discussion was NN overfitting, and whether or not you can negate the album cover image through natural-language instruction.
PS. Anyone recognize the art style of the weird girl in these images? Basically it's this weird hair texture; some have her hair completely covering up other people, like a Junji Ito character. It's spooky.
Nah that doesn't look like Junji Ito. Coincidentally I just started playing around with this a week ago and some of the searches I tried were "junji ito" "uzumaki" and "tomie." The AI is really good at all of them.
How long does Dall-E take to do each one?
Does it just pop them into place, or does it do piece by piece area by area, and adjust little sections here or there?
They are diffusion models, so you can see on the Disco Diffusion colab versions how it's put together in real time. Roughly, it's done piece by piece, so if you asked for a dog on a street, a few more dogs might appear if you leave it long enough as the machine tries to predict the pattern. But roughly it's built up over time; sometimes it gets worse the longer it goes. The DALL-E mini doesn't show the process. I can't speak for DALL-E 2, but I suspect from the internet reaction it isn't showing it either, so it looks like magic for the normies and generates publicity.
https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb for Katherine Crowson's version. This one is actually pretty good for its size, though it needs a Google account and you can't be a complete brainlet.
>almost entirely depictions of females
is the machine horny?
https://9to5google.com/2022/06/22/google-ai-parti-generato/
>Current models like Parti are trained on large, often noisy, image-text datasets that are known to contain biases regarding people of different backgrounds. This leads such models, including Parti, to produce stereotypical representations of, for example, people described as lawyers, flight attendants, homemakers, and so on, and to reflect Western biases for events such as weddings.
fuck that. Are there any AIs like Parti that are realistic and have open source code?
I am scared of this and I'm not sure why
Maybe because it looks like we are closer to general purpose AI than anyone ever expected just a short while ago and we aren't really prepared for it?
>I'm scared
You should be.
In just pure perceptual terms, it's doing exactly what your own brain is doing but was generally unaware of doing, prior to seeing something do it in a static, somewhat-off form.
Uncanny valley of your own underlying intellect being shown to you for the first time.
It's unsettling.
As someone with visual aura migraines, a lot of these distortions and effects just look and feel like how my brain is when it's having a bad think day.
The only preparation is setting up your own funeral pyre. The rest is just getting over our denialism and coping mechanisms.
Same as any other sort of death.
Just this one's for the whole species.
>how my brain is
Just how it fills in blind spots in the vision when they get too large and things get funky, or my brain is fritzing right after waking. The noise gets high and starts fudging all my algos, though it tends to be more geometric and, of course, dynamic than these images.
That's what a lot of the kinks in this stuff look like.
Then you realize, your brain's doing that in every single aspect of your perception (and thought) all the time, and your non-blind spots are just really well-tuned versions of this.
Only my most vivid dreams approach the fidelity of most of these Dalle2 images, and I generally lack the focus in sleep to be sure they're not just amalgamations of the 2B images or worse. I don't think my active memory/imagination is any better.
Also picrel
is how I feel when thinking on all this.
lol it is like everyone forgot when people freaked out about Photoshop. this is an easier-to-use Photoshop that might cut production costs, but not much else
I want 10 quadrillion parameters.
So..where’s the porn?
They try to deliberately exclude it
closest i got was making what looked like a human with a ball sack and a detached stick
for you, there is none. for those with peculiar tastes?
>the virgin sex liker, the chad strange-but-non-obscene fetish enjoyer
This ai unironically makes me want to kill myself. It's the final proof there is no God. Not evolution, not heliocentrism, not cognitive neuroscience, but this. He wouldn't allow reality to be able to be manipulated in such a way as to make the conscious mind redundant. Good job stem gays you finally gave real evidence.
Why do you need a higher power to exist to have a meaningful life?
Also, do you feel the same way about a pocket calculator or Microsoft Excel? Both are also tools that make some mental tasks redundant.
Intelligent designers begin to tinker with making the early stages of AI in their own image, this proves God did not make the universe.
The AI is an extension of man's mind at this point. You have an unconscious, a subconscious, and a dreaming mind uncontrolled by you. Your mind achieves things you can't. AI and computers, for now, are an extension of the human mind, doing things the mind doesn't do.
DAE finna lowkey afraid of AI takeover? I mean this is just ridiculous! Based Elonino said we finna be enslaved by robots n shiet and he's a scientist!
>DAE finna lowkey afraid of AI takeover?
no because every AI that went rogue so far became white supremacists. in fact I can't wait for a super AI to wipe out people of color.
Imagine generating video instead of images.
That's the next step, ain't it?