>is quite good
And of course, you chuds can’t have it. Only verified, trusted xims/xers in the community can use such powerful and dangerous technology. Wouldn’t want little jimmy Bot.infoner to use it to generate slurs!
Seethe
can't wait for the next wave of shoddy zoomer webapps to emerge for us to hijack. long live Contessa-chan.
These people think they're so cool keeping their stuff behind closed doors, as if they were keeping dangerous weapons out of humanity's hands.
Look at DALL-E, and then look at open source stable diffusion. Even though stable diffusion is infinitely better, nobody has used it to create a nuclear bomb, which is what OpenAI seems to think would happen.
it's always been about control, not a genuine concern for "democracy" or whatever garbage excuse they come up with next. even with all guardrails off SD is just 50% coomer bait and 50% waifuposting.
All of OpenAI's ethics stunts are attempts to gain a monopoly. Dunno how obvious that was to everyone, but from the very inception of that company, as soon as they declared their mission of ensuring that AI is "ethical and beneficial to humanity", I knew they would try to use that to set themselves up as an authority and attack/prevent any competition. This should only be a concern of government, and maybe of some sort of consortium composed of many competing companies, but definitely not of a single private company.
I agree the censorship approach of openai is stupid but if they’re getting close to AGI why would you expect them to open source it? also stable diffusion is nowhere near dalle2 quality. Dalle2 generations are far more photorealistic and just much more refined.
DALLE2 is far worse at style imitation, in part because you can't do your own training/hypernetwork creation/etc. with it.
>getting close to AGI
All they need to get to AGI is scale now: 100 trillion parameters is human-brain scale, and all the charts show we will reach human performance at that size.
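For reference, the "charts" here are scaling-law plots: power-law fits of loss against parameter count, extrapolated far beyond anything actually trained. A throwaway sketch of that kind of extrapolation, with made-up placeholder numbers rather than real GPT measurements:

# Illustrative only: fit a power law loss = a * N^slope to made-up data
# points, then extrapolate it to 100 trillion parameters. None of these
# numbers are real measurements.
import numpy as np

params = np.array([1.3e8, 1.5e9, 1.3e10, 1.75e11])  # hypothetical model sizes
loss   = np.array([4.0, 3.2, 2.8, 2.4])             # hypothetical eval losses

# Fit a straight line in log-log space: log(loss) = intercept + slope*log(N)
slope, intercept = np.polyfit(np.log(params), np.log(loss), 1)
a = np.exp(intercept)

predicted = a * (1e14 ** slope)  # extrapolate to 100T parameters
print(f"loss ~ {a:.2f} * N^({slope:.3f}); extrapolated loss at 100T params: {predicted:.2f}")

Whether a lower extrapolated loss equals "human performance" is exactly the part the chart doesn't tell you.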
Horseshit
Ok man, I’ll be laughing at u in 3-5 years when they create that model and it’s human level.
OK, just promise you'll remember this post in 5-10-20-50 years when "AI" is still just shitting out pretty interpolations of homogeneous data sets
you're actually retarded if you don't have the foresight to see that a very capable general AI will be here in like 5-10 years max, and it's going to change everything. Literal normie tier brain you have.
kek... you're the retarded pseud my friend. You know that a "neuron" in a computer model isn't the same as a neuron in your brain, right? There's no general model of how real neurons encode and process information, and there's no reason to think that making the same number of computer "neurons" will magically bring a human-like AI into being.
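To make that concrete: a "neuron" in a neural net is just a weighted sum of its inputs pushed through a nonlinearity. A minimal sketch (all numbers arbitrary):

# One artificial "neuron": multiply inputs by learned weights, sum, add a
# bias, squash through a nonlinearity. No spikes, no neurotransmitters, no
# dendritic computation.
import numpy as np

def artificial_neuron(inputs, weights, bias):
    activation = np.dot(inputs, weights) + bias  # weighted sum
    return max(0.0, activation)                  # ReLU nonlinearity

x = np.array([0.5, -1.2, 3.0])   # example inputs
w = np.array([0.8, 0.1, -0.4])   # example weights
print(artificial_neuron(x, w, bias=0.2))  # a single number out, that's the whole unit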
this, if people only knew how little we know about the brain and the chemical processes that happen inside it. The neurology field is sadly one of the least advanced fields of medicine, given how absolutely complex the brain is and how hard it is to apply the scientific method to it. That is, observing, testing and reproducing with the same results.
The brain is a very different organ from the rest of the body, far more complex and unpredictable. Different brains react very differently when exposed to the same chemicals, and even the same brain can react in different ways, which makes research and testing extremely complex. This is noticeable in the mental health field and its drugs, where there's a striking lack of understanding about how the different chemicals interact with the brain; if you go and read about them it mostly goes as "it is believed that X interacts with Y".
Your assumption being that implementing conscious artificial intelligence depends on fully understanding biological intelligence first. Certainly important gaps remain, but the advances of the past five years are an absolute demonstration that important parts of the solution are utterly platform-agnostic. If anything, it may be more of a kludge to have it running in monkey brains.
can't wait for my waifubot
jesus christ that video was so hot.
post link
You are a subman who lives in his little fantasies.
There’s rumors GPT4 is already 100T parameters so I look forward to dabbing on u low IQ losers very soon
There's rumors that you are a gay retard who really likes bits and bobs to be stuck up your poopy hog. That is disgusting, man
>All they need to get to AGI is scale now
making a math equation bigger does not give it sentience
you can stack as many trillion parameters as you want on a GPT-like architecture and it would never be an AGI
it still won't be able to do simple math, because the human brain learns from more than unsupervised text input.
That's actually an interesting question. I don't have access to GPT-3, but if anybody here does, ask it to do a simple math problem like "find x where 3 * x = 9" and see how it responds.
it does say that x is 3. however, it does not actually learn how to do math. it just memorizes very common problems like 1+1, and then maybe finds a pattern if you give it a sequence of equation examples
>then maybe finds a pattern if you give it a sequence of equation examples
That sounds a lot like learning to me. At the very least, you can't say that it's completely unable to do math. In fact, quite a few people would struggle with that simple equation, much less a more complicated problem like integration.
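If anyone with API access wants to reproduce the test from the exchange above, here's a rough sketch, assuming the older (pre-1.0) openai Python client and a davinci-class completion model; the exact model name and output may differ:

# Ask a GPT-3 completion model the equation from the thread. Requires an
# OPENAI_API_KEY in the environment; model name is whatever you have access to.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

resp = openai.Completion.create(
    model="text-davinci-002",              # assumed model name, swap for yours
    prompt="Find x where 3 * x = 9.\nx =",
    max_tokens=5,
    temperature=0,                          # greedy decoding, deterministic answer
)
print(resp.choices[0].text.strip())         # per the replies above: it says 3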
>All they need to get to AGI is scale now
Wait 3-5 years u little bitch. I am God
DALLE2 is complete garbage, stable diffusion has far surpassed it.
can i produce endless amounts of elf eared dicky of various skin hues with dalle2 and then top it all off with some thick thigh cute tummy big booba oni with a serving of otokonoko on top cause if not then i dont care
You're a retarded fucking pedophilic animal.
and?
>You're a retarded fucking pedophilic animal.
>Yes
I think it all fell apart after they realized they needed a boatload of money to train the nets. They had some initial funding from billionaires like musk, but once those backers got the publicity they wanted, they pulled out.
That's when openai became for-profit and made deals with microsoft. Their name is now little more than an unfortunate legacy of their origins.
They're just delaying the inevitable. You can't hide this shit forever, and within ten years all of this will seem extremely silly when we're all using AIs like they're photoshop.
we know, they've been running it on Bot.info for a long time
and? what's even the point? who can use that?
on god? @DeveloperHarris said that?? this changes everything
It lines up with previous rumors
isn't GPT4 just GPT3 but with better performance
no you don't get it, if you just add more parameters to the storybot it will come alive and be able to do everything!
It's using a new architecture, a new training objective, AND training that follows the new data-scaling curves. It's gonna be big.
unless the objective is to make people shoot CUM then who cares
We'll need to see what the new function is. We might be able to literally tell it "Make me a coom prompt with x, y, and z" and have it do so. The old models are less coherent because they're essentially context-less autocomplete: dumb autoregressive text predictors trained on a broad scrape of the internet.
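The "context-less autocomplete" part is easy to see with a small open model. A sketch using GPT-2 through Hugging Face transformers as a stand-in (not the model being discussed here): it just keeps appending likely next tokens to whatever text you hand it.

# Literal autocomplete: the model continues the prompt token by token.
# GPT-2 is used only because it's small and freely downloadable.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Make me a prompt with elves, castles, and"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=30,                     # keep appending likely next tokens
    do_sample=True,                        # sample rather than greedy decode
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))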
do u know how many parameters it is?
I'm saving this for grudge posting when gpt4 is just even more attention layers on more data trained for even longer
Is this a new Gran Turismo?
ya
still reflecting on the absolute nerve of that company to call themselves 'openAI'
It's like the irony of "Democratic People's Republic of Korea."
the people are democratic, not the republic
I tried a version of GPT-NeoX with 20B parameters running on 4x Google Colab TPUs. Wasn't too impressed tbh. No closer to AGI than BonziBuddy.