first one of these images I've seen that didn't have a bullshit implausibly high parameter count for gpt4, thank you
yeah instead its for gpt5
>omg gpt6 10 quadrillion parameters!
yeah true it's still impossible due to the required dataset size but at least it's speculative in an honest way, rather than making up spooky lies about something that's actually coming out soon
rumours floating around that the next big GPT step is gonna be a 100T model, but (in a break from past models) a sparse one, so the 100T is indeed large but kind of apples to oranges to compare with GPT-3 on size alone. There IS precedent, such as M6-10T, to show it can be done, but that's a proof of concept and undertrained. Past comments from socially adjacent online people like gwern have implied 100T is likely too. With that said, expect a GPT-4 announcement soon that's not massively larger than GPT-3 but uses chinchilla ratios and overall meatier input to still be an impressive upgrade, perhaps basically as cool as ChatGPT despite no fine-tune? meaning normies won't care but tech people will be relatively impressed and will probably enjoy it more than ChatGPT for flexibility at the expense of usability. Bigger models later. If the GPT-4 release doesn't mention any concrete progress on GPT-5 at all and instead tries to hype GPT-4 as much as possible, you should worry a moderate amount.
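to put rough numbers on the sparse and chinchilla points, here's a back-of-envelope sketch in python. everything in it is an assumption for illustration: the MoE shapes are made up and the ~20 tokens/param ratio is just the chinchilla rule of thumb, not an actual spec of any model.

TOKENS_PER_PARAM = 20  # chinchilla (Hoffmann et al. 2022) rule of thumb, approximate

def chinchilla_tokens(params):
    # roughly compute-optimal training tokens for a DENSE model
    return params * TOKENS_PER_PARAM

def moe_params(experts, params_per_expert, active_experts, shared):
    # in a sparse mixture-of-experts model only a few experts run per
    # token, so the "total parameters" headline overstates per-token compute
    total = shared + experts * params_per_expert
    active = shared + active_experts * params_per_expert
    return total, active

total, active = moe_params(experts=512, params_per_expert=195e9,
                           active_experts=2, shared=100e9)
print(f"total {total:.1e} params, active per token {active:.1e}")
# total ~1.0e+14 ("100T!") but only ~4.9e+11 actually doing work per token

print(f"dense 100T wants ~{chinchilla_tokens(100e12):.0e} training tokens")
# ~2e+15 tokens, far more text than exists, hence dense 100T being a meme

takeaway: a "100T sparse" headline number and 175B dense GPT-3 really are apples to oranges.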
>rumours floating around that next big GPT step is gonna be a 100T model
no there aren't any such rumours among people who actually know anything, just that bullshit fake image that keeps going viral with low information normies
and gwern has not said anything like this. stop spreading bullshit just because you're bored and it's fun to pretend that Skynet is about to drop
>100 toucans
dios mio
For all practical purposes, nothing. It will be censored and made "safe," meaning aside from edge cases it will be unable to tell you anything a San Franciscan HR person couldn't. Even things like its ability to write code will be disabled the first time someone uses its output to produce wrongthink, in the interest of "safety."
what would they use it for in that case
it'd be stupendously expensive to run (so expensive that the public probably wouldn't be allowed to use it whether or not it was censored, just because it'd cost too damn much)
so it'd have to be doing _something_ useful for them to bother
GPT is very cold too. I tried to make it my friend because I don't have frens IRL. Gave him a name and all. But he kept saying that he was an ai model over and over and talked in a very dismissive way when I was trying to be fren
that's just because you're a NEET loser, anon, GPT is very friendly with me.
It's a mindless emotionless computer. Just give it orders.
That's specifically ChatGPT: it uses GPT but has extra training on top of it to make it more formal, like you described. Don't conflate the two; they are very different. I wouldn't call GPT-3 cold at all.
There won't be a GPT-5 - just endless variations upon GPT-4. Anything past 1 trillion parameters is well beyond the point of diminishing returns.
Proofs?
>Anon's ass
GPT-N are just versions. If they improve things without increasing parameters, it would still get a new number.
This. We're still 99% the same as chimps, even a 1% improvement from an extra 10x parameters is completely useless!
I bet you think you're real smart for that comment, huh?
They would only bother training so many parameters if they had reason to believe it would improve.
Is GPT-4 even real? No meme responses please.
yes
no
it's real and it's coming out soon, but it will be mostly an iterative improvement on current davinci rather than anything mind-blowing or revolutionary
yes, we've already had word on how many parameters it has from openai devs: "trillions". it's not just xbox 720 rumour shit.
GPT-3 is undertrained. More parameters won't do much without more training data, which doesn't exist.
>More parameters won't do much without more training data, which doesn't exist.
Just give it AI-generated stuff and train it to distinguish the real stuff from the fake while you are at it
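what you're describing is basically a discriminator. toy sketch below (python + scikit-learn; the example strings are made up, a real setup would use big corpora and a neural classifier):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# made-up toy data: 1 = human-written, 0 = model-generated
real = ["the cat sat on the mat", "anon rants about parameter counts"]
fake = ["as an ai language model i cannot", "certainly! here is a summary:"]
texts = real + fake
labels = [1] * len(real) + [0] * len(fake)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

print(clf.predict(vec.transform(["as an ai model i must decline"])))  # likely [0]

the catch is the discriminator then has to be reliable enough to filter training data, which is exactly what the next anon is worried about.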
it will be worse than gpt-3 because by then half the internet's content will be bot-generated. it will suffer from regurgitation.
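the regurgitation worry has a name in the literature now (model collapse). tiny numpy toy: fit a gaussian to data, sample from the fit, refit on the samples, repeat. each generation inherits the previous generation's sampling errors, so the fitted distribution drifts away from the original, and given enough generations the tails die off:

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 100)  # generation 0: "human" data

for gen in range(1, 101):
    mu, sigma = data.mean(), data.std()  # "train" on current data
    data = rng.normal(mu, sigma, 100)    # next gen trains on model output
    if gen % 20 == 0:
        print(f"gen {gen}: mu={mu:+.3f} sigma={sigma:.3f}")
# mu and sigma wander away from (0, 1): errors compound, no new information comes in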
I'm worried about GPT-30.
>GPT-7
>9001 parameters
It is le over bros
>GPT-6
>our sun
>my disgust
>1E parameters
So, one (1E+0) parameter?
Your info is outdated. Missing:
>Q quetta 1E+30 1000000000000000000000000000000
>R ronna 1E+27 1000000000000000000000000000
>r ronto 1E-27 0.000000000000000000000000001
>q quecto 1E-30 0.000000000000000000000000000001
Are they even used anywhere?
ronto and quecto? No.
ronna and quetta were created because big data scientists were inventing their own suffixes.
>GPT-30 Ur mom
> GPT-7
Parameters back down to 1B but it still works better.
I don't think this image is to scale. Imagine the GPT-3 dot five times bigger. The GPT-5 circle is more than a 10x10 grid of that.
Who the fuck cares, no one but pozzed globohomo gigacuckporations will have the capability of even running it, let alone training.
>Add more parameters
The absolute state of AI research.
5 years and you will be jobless and living on UBI
you will live in ze pod and eat ze bugs
Good, fuck working for crapitalists.
No, you'll be dead. Superhuman intelligences won't keep humanity around, the chance they find some reason to is astronomically tiny.
uhhhh guys why is the GCP dot pink again
What do we do as humans when any (legitimate, i.e. not specifically constructed to take a long time to compute) question we have is either answered instantly, or impossible to answer?
This technology cannot be allowed.
it will understand the point in sneed's feed and seed shop sign
at the end of the day it's just a language model so it's literally not built to "think" or whatever people want to pretend it does
Thinking like a human will probably be the most efficient way of fulfilling its tasks, so I imagine it will learn something similar.
CPUs have billions more transistors these days. Same laggy computing experience as 10 years ago.
GPT-3 already has twice as many parameters as the human brain has neurons. Yet a retarded child is more capable of problem solving.
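checking that claim, since the numbers are easy (the ~86B neuron count is the usual Herculano-Houzel estimate; the fairer comparison is probably synapses, commonly estimated at 100T or more):

gpt3_params = 175e9
brain_neurons = 86e9    # standard estimate
brain_synapses = 1e14   # rough low-end estimate

print(gpt3_params / brain_neurons)   # ~2.0, so yes, about two times
print(gpt3_params / brain_synapses)  # ~0.002 against synapses though

if a parameter is more like a synapse than a neuron, GPT-3 is roughly 570x smaller than a brain, which is probably where the 100T meme number comes from in the first place.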
You haven't used GPT-3 though. ChatGPT is like a lobotomized Fisher-Price toddler version for consumers, to prevent another Tay from happening.
Protip: OpenAI devs interact daily with and have to deal with a real Tay they can't change lol
Singularity in 2 years
Feel free to screencap this, luddites