GPT-4
Posted on January 11, 2023 by Anonymous
How over is it on a scale from 1-10? https://twitter.com/SimonHoiberg/status/1613089457116258306
Cleaned data isn't scaling with parameters
>100 trillion parameters before Google
Who's this nobody, and what makes you think things will scale well?
There are diminishing returns to this.
i thought it was openai's ceo
he said, after chatgpt learned translation and basic coding without it being targeted in the dataset
> without it being targeted
literally wasn't, it's just a language model that uses statistical weights to predict what next word/figure should follow the previous one
And how does it figure out what word comes next?
Something called training data?
Where does that come from?
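For anons wondering what "statistical weights predicting the next word" actually means, here's a deliberately dumbed-down sketch: a bigram model that just counts which word follows which in its training text and predicts the most frequent successor. Real LLMs learn billions of neural-net weights instead of raw counts, but the predict-the-next-token framing is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up "training data" for illustration only
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word` in the training data
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — follows "the" twice; "mat"/"fish" once each
```

The model has no concept of grammar or meaning baked in; whatever patterns it picks up come entirely from what was in the training data.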
1) other tongues and code were in the training data
2) this is a LANGUAGE model, not specialised for coding or translation
3) we're getting glimpses of metapattern understanding, even moreso with StableDiff. you fkers are delusional if you think sht don't scale
2) Then why aren't they sharing the source code and data? I'll believe that the model learned from nothing without any concept of or focus on translation or programming when I see it.
3) I'm not saying it's useless, I'm just saying it's tailored for certain use cases. It's not intelligent per se, we're still far from that and the # of parameters won't be the deciding factor there
thing is, it doesn't ever need to become intelligent to cure aging, solve fusion energy and invent artificial synapses
just like an aeroplane doesn't flap its wings to fly
and a submarine doesn't need to swim to move through water.
>without it being targeted in the dataset
What makes you think that?
go bed tim it's time
i hate VC tech grifters so much bros
>VC tech grifters
Oh, is that what they're called?
I hate them to death too, but I did not have a name to give them.
Thank you anon.
Graham, musk, etc
The dude that invented graham bread.
stopped reading right there
hes a grifter selling a gpt3 frontend trying to make it look better than it is
mmm....my garden gnomedar is tingling but I am not sure....
Fuck you swede
>How over is it on a scale from 1-10?
1, it's not open source, we don't have access to the back end or training data like with Stability AI. Which means by default it will be incredibly limited. You only have to really start worrying when a gpt4 equivalent is released with everything able to be tinkered with.
Also I don't think it's going to be trillions of parameters, I am 99% certain that is just a rumor that has straight up been denied by OpenAI in the past.
700 bil at most
likely 400-600 bil params
How about these retards learn how to make do with less parameters instead of just pumping endless amounts of compute to their toys
GPT over 9000 when?
about tree fiddy
When will AI start being included in the OS anyways? Feels like having your own digital assistant that can be customized to your liking would be one of the first things Microsoft would want to do
Windows 14 is just going to be a copy of GPT-6 with instructions to write, compile, and install an operating system tailored to the user's preferences
Berg means mountain in Scandinavian languages and German. Berg and Stein are common and normal suffixes for last names in Germany, Sweden, Norway, and Denmark.
Happy stochastic parrotting everybody!
Normies already stopped caring, 4chan. Now the only people interested are investards. Proof that it's a dead-end technology.
Just like the iPhone, the internet & Cars before it
I sat on my ass while this board was shilling Chainlink at 0,001 dollar. I won’t make the same mistake again.
Imagine being the poor soul that has to read this shit for analysis.
Dunno but today's GPT-3 update really made him understand better. Writes visibly better JS, shit's starting to work
there was no gpt-3 update today, anon
There was for me, like 4-5 hours ago, there was an update alert prompt too
no there wasn't
what a weird lie
guess thirdworld shitholes didnt get the update
>retard says GPT-3 when he meant ChatGPT
not my fault you can't articulate yourself properly
isnt that the same thing? my bad then sry moron
is the leap from gpt-2 to gpt-3
bigger than the leap from gpt-3 to gpt-4?
in terms of the factor increase in parameters, i mean
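For what it's worth, the published sizes make the first jump much bigger as a ratio. Quick arithmetic (the GPT-4 figure here is just the 400-600B guess floated in this thread, nothing official):

```python
gpt2 = 1.5e9        # GPT-2's largest published size: 1.5B parameters
gpt3 = 175e9        # GPT-3: 175B parameters
gpt4_guess = 500e9  # hypothetical: midpoint of the 400-600B guess upthread

print(gpt3 / gpt2)        # ~116.7x jump from GPT-2 to GPT-3
print(gpt4_guess / gpt3)  # only ~2.9x if the thread's guess is right
```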
at this point I'm looking forward to the release just to stop the bullshit 100T parameter fake news being posted over and over a-fucking-gain
>I google big boob 12 trillion results
By now they are just overfitting the dataset with so many parameters. If they prune that shit I bet they could get the same performance with half the parameters or even less. Also, data quality is way more important than the quantity of neurons and layers.
I hate how all tech companies now boast about how many parameters their models have but don't care about the actual performance. It's all about who has the biggest dick and can tack more 0s onto the number of neurons declared in the code.
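For reference, the pruning idea is a real technique: magnitude pruning zeroes out the smallest weights and keeps the big ones. A toy numpy sketch of the concept, not anything OpenAI has said they actually do:

```python
import numpy as np

def magnitude_prune(weights, keep_fraction=0.5):
    """Zero out the smallest-magnitude weights, keeping keep_fraction of them."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * keep_fraction)
    threshold = np.sort(flat)[-k]  # the k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))        # made-up 16-weight "layer"
pruned = magnitude_prune(w, keep_fraction=0.5)
print(np.count_nonzero(pruned))    # 8 of 16 weights survive
```

In real networks this is followed by fine-tuning so the surviving weights compensate for the removed ones.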
>By now they are just overfitting the dataset with so many parameters
no they aren't because this is fake, the claim in the tweet is a lie
are zoomers aware that a claim's social media virality has no actual bearing on whether it is true
I keep seeing this retarded 100 trillion parameter claim and no one has ever provided a source
>sorry i cant tell you that!
woah..the power of one garden gnometillion parameters..
i cant tell you that!
he's talking about ChatGPT's tendency to refuse answers and start scolding the user at the slightest provocation
Ah yeah that sucks balls, it even adds a trillion disclaimers after every answer. Good thing you can make a character, an AI that can't reject your requests, and get it to roleplay as that. It's not working as well as it used to after the update but it werks.
why is it only 100 trillion instead of 100 bajillion?
yea wouldn't it require at least 50 bajillion to make a noticeable improvement if you consider the diminishing returns?
>Scrape this site for every possible combination of letters to give it one gazilliontrillionmillionbillion parameters
>It can now give usable C++ code 60% of the time
Will this be able to tell me how long to cook my 2" marinated chicken breasts in the oven faster than Google?
Maybe? No idea.
I mean, besides the extremely unsafe suggestion to slather your completed chicken breasts with the uncooked marinade they were soaking raw in, it's a good guide.
I recently got that addon that puts ChatGPT on the side of my search engine results and it's been a godsend. Way better than looking up recipes and having to parse through 6 paragraphs of bullshit just to find the ingredients. They tend to leave out temperature in the instructions. ChatGPT has been way better than anything else so far. So I assume GPT-4 will be even better.
>"GPT will replace content writing!"
>literally takes me longer to generate worse-quality text than to just write the guide myself in like 30 seconds
>"b-but it will help generate bulk content!"
I unironically type faster than this shit spits out the text and I don't have to proofread the text.
Just use it to get ideas, ask it to give you a list of names of characters if you're writing a story. It's faster than googling it. If anything I want this shit to replace search engines. If someday some madman makes an open source model with a lot of info from the internet that would be sweet. Current chatgpt is almost like that but this trash is closed source.
A model trained on wikipedia, that would be interesting to see. But I bet chatgpt already did that.
Btw it's already being used by some big-sized players: https://www.google.com/search?client=firefox-b-lm&q=%22This+article+was+generated+using+automation+technology+and+thoroughly+edited+and+fact-checked%22
Oh wow what is that query? Is it something Google adds?
>Oh wow what is that query? Is it something Google adds?
No. It's just their way of doing disclosure about it.
Ah, couldn't find the disclosure on the website at first but saw it in the byline now
If you check those articles, they are very low quality and could have been written by a human faster than waiting for GPT to generate them, and then proofreading them and "thoroughly editing" them, and the end result is still worse
I still think generating them and proofreading is faster. Not sure about the lower quality (talking about the bankrate and the CNET ones), they seem quite OK to me; quite concise and clear.
oh jesus fucking christ, all of the personal finance articles about whether or not it's smart to pay off credit cards, fuck me sideways.
in this case, ChatGPT is just going to be shitting out old wives' tales: "car loans are good credit, actually! Go ahead and buy that lexus on 75k a year"
I asked it to write an SSIS script component to connect to a REST api and deserialize the response as an array of objects and it did. Still had to write the classes to line up with the JSON object schema, but that took... 3 minutes? It even added error handling and a few logs when I asked it to while I was working on the classes. Verified that the rest of the code wasn't doing anything fucky and ran some tests before putting it into prod.
You still have to not be a retard to use it, but it certainly helps with more tedious bullshit.
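For anyone curious, the general shape of the task described above (hit a REST endpoint, deserialize the JSON array into typed objects, log failures) looks roughly like this. The anon's actual code was an SSIS script component (C#); this is just a Python sketch with a made-up `Item` schema:

```python
import json
from dataclasses import dataclass
from urllib.request import urlopen
from urllib.error import URLError

@dataclass
class Item:
    # Hypothetical schema; in practice the fields mirror the API's JSON
    id: int
    name: str

def parse_items(payload):
    # Deserialize a JSON array into typed objects
    return [Item(id=obj["id"], name=obj["name"]) for obj in json.loads(payload)]

def fetch_items(url):
    try:
        with urlopen(url, timeout=10) as resp:
            return parse_items(resp.read())
    except URLError as exc:
        print(f"request failed: {exc}")  # stand-in for real logging
        return []

print(parse_items('[{"id": 1, "name": "foo"}]'))  # [Item(id=1, name='foo')]
```

The "verify it isn't doing anything fucky and run tests before prod" step is the part the model can't do for you.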
i still don't understand what a parameter is...
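A parameter is just one learned number inside the model: a single weight or bias that training adjusts. Rough sketch of how they add up (the 12288 figure is GPT-3's published model width; everything else here is toy arithmetic):

```python
# A dense layer mapping n_in inputs to n_out outputs has an n_in x n_out
# weight matrix plus n_out biases. "175 billion parameters" just means
# 175 billion of these numbers, tuned during training.
def dense_layer_params(n_in, n_out):
    return n_in * n_out + n_out  # weights + biases

print(dense_layer_params(3, 2))          # 8 parameters in a tiny layer
print(dense_layer_params(12288, 12288))  # ~151M in one GPT-3-width layer
```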
ONE MORBILLION PARAMETERS?
holy fucking shit a 100T model would require a small sun to power
stop fucking posting this
twitter tards are so stupid fucking hell
the bullshit people spout about language models goes beyond normal levels of normie tech ignorance, it feels intentional
like people are just making up dumb folklore about AIs on purpose that they know is nonsense because they think it's fun
> governments have been killing people who find cures for cancer, etc. For decades
> somehow let the people who will end most jobs in the world and also profits live
Yea no, this is happening on purpose. They probably had much much much more powerful AIs 20 years ago but kept it secret.
Whatever is happening, it's on purpose. If no one has a job then they will have a solution to this problem. Most likely culling the population.
If we're doomed, we're doomed either way. If not, good for us I guess
I thought they nerfed chatgpt even more, it kept rejecting my request to make a data table. Somehow it worked now.
have a nice day. Now.
Was that your trigger word? Sorry anon, look away!
It's honestly over. We have a decade left, at the very most.
>We have a decade left
Good enough for me.
nice job falling for a grifter's fake post, gay
precisely my view, but I welcome it. good chance it'll be good
>good chance it'll be good
0% chance. What use will humanity be? Zero. We will have nothing to offer. There may be a window where we can provide manual labor as slaves. But eventually, we will have nothing to offer.
What is a creature that is a giant pain in the ass to feed and shelter and does nothing for you? A pet. Our *best* bet is the AI keeps us around like lanky two-legged pink labradoodles. It's over, it couldn't be any more over. The other options are even scarier.
You won't live enough to see when everything goes to shit. That's why I'm not worried. Must suck to be younger.
ok, thx4input 4chan
you are wrong and I disagree
no, I will not bother with a rebuttal
This is called binary thinking and it's the same shit applied when people say that quantum computing will make computers as we know them dead.
The truth is probably in the middle and you might end up doing more with a lot less or just shifting to a new job title or something. So stop the doomposting, it's retarded.
Some researchers mentioned GPT-4 was going to be only slightly larger than GPT-3. And if you do the math, it's tough to believe that 100T is even possible.
World's most expensive markov chain generator
>ChatGPT is too censored now to be useful outside of a shitlib simulator on politics
>Slows down every time it's morning in India
>OpenAI openly states it's only free for right now while it asks people to give feedback to improve it before they start charging people
>Closed source for your own safety ;^)
>people are using it to spam stackoverflow with low quality answers to where a ban was needed
>grifters are encouraging people to flood freelance sites and make blog posts with ChatGPT output for quick money
I'd give it a 5.
I am conflicted as to whether this thing has fucked SEO or not. On one hand, I churn out content faster than this piece of shit generates good content. To write good content you have to chat with it rather than just let it spit out stuff from memory, yet I keep hearing SEO companies are using it and seeing huge growth
I don't get it
release the goddamn model, it's called OPENai
every time I hear predictions about GPT-4 it's a wildly different number of parameters.
doesn't model training time scale horribly with increased parameters? So it's going to take 2000x more compute hours to train gpt-4 to the same level of training that gpt-3/chatGpt has right now?
Better buy some more silicon time
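On the compute question: the usual rule of thumb is training compute ≈ 6 FLOPs per parameter per training token, so cost grows roughly linearly with parameter count for a fixed dataset (and in practice the dataset grows too). Back-of-envelope sketch, using GPT-3's reported ~300B training tokens; the 100T model is the rumor from the tweet, not a real thing:

```python
def training_flops(params, tokens):
    # Common approximation: ~6 FLOPs per parameter per training token
    return 6 * params * tokens

gpt3 = training_flops(175e9, 300e9)    # GPT-3: 175B params, ~300B tokens
print(f"GPT-3: {gpt3:.2e} FLOPs")      # ~3.15e+23
rumor = training_flops(100e12, 300e9)  # the rumored 100T-param model, same data
print(f"100T rumor: {rumor:.2e} FLOPs, {rumor / gpt3:.0f}x GPT-3")
```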
just a few hundred trillion more parameters bros
1/10. The "100T" image everyone is freaking out about is bullshit.