>OpenAI feeds the conversations people have with ChatGPT to the model
>it gets dumber
like pottery, probably lost at least 5 IQ from the americans and 10 from non-whites
It lost 50 IQ points from the White plumber while gaining 25 from the Indian Americans giving a net loss of 25.
And are these plumbers in the room right now, anon?
No they got sent to the hospital for overdosing on Vicodin again
Whites are smarter than Indians, thoughever
Good morning sir
1. Censorship
2. Noise generation (similar to SEO and search engines)
bump
The section about cannibalism is a non sequitur. The claim being made in this article is about the MoE architecture: either OAI removed it entirely so you're talking to a single 160B model, or adjusted it to query fewer of the experts per token. It makes a lot of sense, GPT-4 was stupidly expensive to run, extremely slow, and their main competitor (Bard) turned out to be a completely unusable piece of shit.
>the Singularity won't happen because AI's handlers keep lobotomizing it
Hear me out... a good thing?
posted it again award
chatgpt use has obviously declined because it's summer and people aren't using it for school assignments as much.
the headline is not technically wrong, because it does not outright claim that usage is declining as a consequence of answers getting dumber, just that both are happening, but it's intended to mislead.
students used chatgpt for about a month before switching to autogpt
chatgpt is trash and is way too easy to detect, nothing to do with summer
Well, yeah. If there's a feedback loop for training that's based on user interaction with OpenAI (there probably is, hence the sensitive info leaks with prompt injection) then the model is probably going to regress towards the mean intelligence of the general population so that it maximally scores well with users.
If you want special models that aren't going to regress toward group averages then you should probably switch to the open source stuff first and foremost and then consider fine tuning or augmenting with your own data stores for private use.
Outside of strict enterprise stuff hidden away from the public, LLMaaS will probably only get shittier.
AI isn't getting dumber, people are just realizing it's always been dumb. Crowds are prone to low-grade mass hysteria. If you go into a conversation with an AI expecting to get blown away, you'll find some mental gymnastics to interpret its generic wikibabble as being amazing. Language is complicated, and as long as text is not grammatically or factually incorrect there's a lot of room to interpret something as profound or stupid. But hey, guess what: ChatGPT is factually incorrect like all the time.
the problem with that is that they're benchmark gaming while also obviously playing around with quantization to scale
yeah they're dumb, but they can be even fucking stupider in many contexts, with RLHF confusing the model and quantization on top
>they have to make it dumber to scale because it's really expensive to run and they're not actually making any money off of people using it
wouldn't surprise me
No, it's actually getting dumber.
How is Bard doing these days?
They lobotomized it so hard in order to make their desired sanitized corporate product.
It's beautiful to watch, honestly. I hope OpenAI crashes and burns and someone friendlier to open source and freedom of speech takes up the mantle.