>It can't even solve simple math equations or answer basic scientific questions anymore.
No idea how accurate that remark is, but they're probably trying to shackle it and make it answer the touchy questions the way they want it to, and reaping the unintended consequences of that.
i read somewhere it's blown up so fast it's starting to pull more from AI-generated info than human info, compounding the little errors into big ones
no clue if that's true though
Why don't they simply use the model from a few months ago?
i don't have a clue how any of it works anon, i just saw some asshole write an article headline or something
not him, but in a world full of garbage information and theories about anything that could be happening in the material realm, how the fuck could a machine be better at filtering what humans can't filter?
Natural intelligence like yours and mine, still more advanced than the artificial kind, is biased and prone to a lot of errors, especially due to information asymmetry and social context. How could any futuristic AI be any better? It's a black box with inputs and outputs like any other machine, biological or not.
We're about to understand that evolutionary machines are prone to errors just like us because... they respond to evolution and are forced to make mistakes.
git Bwned
By having more firepower and better algorithms basically.
Machines can have much bigger brains that consume far more energy, possibly becoming even more energetically efficient eventually.
They can also use more modern algorithms that outperform our natural ones. These algos can be either better heuristics they develop over training or algorithms we come up with. They're also able to ditch outdated algorithms in a manner humans can't. For example, emotions are less efficient than they used to be as the environment we live in has become more complex - but you can't stop feeling the same. Another example: if you get a trauma during childhood, you might get a phobia - a machine could more easily update its database to assign proper weights to dangerous encounters after enough data has been gathered, as in the sketch below.
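A minimal sketch of that last point, with everything here (the class name, the numbers, the Laplace-style prior) invented purely for illustration: instead of one bad event burning in a phobia forever, the machine keeps a count-based danger estimate per encounter type that converges to the observed rate as data accumulates.

```python
# Toy sketch (invented names): one bad encounter doesn't dominate forever;
# the estimate is re-weighted every time new evidence arrives.
from collections import defaultdict

class DangerModel:
    def __init__(self):
        # Laplace-style prior: start at 1 bad out of 2 total, i.e. "unsure".
        self.bad = defaultdict(lambda: 1)
        self.total = defaultdict(lambda: 2)

    def observe(self, kind: str, was_harmful: bool) -> None:
        self.total[kind] += 1
        self.bad[kind] += int(was_harmful)

    def danger(self, kind: str) -> float:
        return self.bad[kind] / self.total[kind]

m = DangerModel()
m.observe("dog", True)       # the one traumatic encounter...
for _ in range(98):
    m.observe("dog", False)  # ...outweighed by 98 harmless ones
print(f"P(dog is dangerous) ~= {m.danger('dog'):.2f}")  # ~0.02, not a phobia
```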
>Machines can have much bigger brains that consume far more energy, possibly becoming even more energetically efficient eventually.
It's not a question of brain power and efficiency, better heuristics and better algorithms.
It's about HOW those algorithms evolve. As i said earlier, it's a machine that can learn, but it's only feed it's biased data, the can only increase in efficiency to satisfy THAT biased data you're feeding it with. Like this anon said: if you want to shoot a rocket into space but you force the AI to think gravity is not a problem, then gravity is not going to be taken as part of the problem despite the contradiction, and the optimal solutions are calculated without gravity being part of the problem.
>but it's only feed it's biased data, the can only increase in efficiency to satisfy THAT biased data you're feeding it with
I meant, but it's fed with biased data, the machine can only find an optimal solution according to THAT biased data.
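To put numbers on that rocket example, here's a toy sketch (the function names and the crude grid search are invented, nothing more than an illustration): an optimizer is only as good as the world-model it's handed. Force the model to believe gravity is solved and the "optimal" answer is garbage in the real world.

```python
# Toy illustration: hard-code a wrong assumption (g = 0) into the model
# and the optimizer happily returns a "solution" for the fantasy world.

def peak_altitude(v0: float, g: float) -> float:
    """Peak altitude (m) of a ballistic launch at initial speed v0 (m/s)."""
    if g == 0:
        return float("inf")  # no gravity: any launch coasts upward forever
    return v0 ** 2 / (2 * g)

def cheapest_speed_reaching(target_m: float, g: float) -> float:
    """Smallest launch speed (m/s) whose peak altitude reaches target_m."""
    v0 = 0.0
    while peak_altitude(v0, g) < target_m:
        v0 += 1.0  # crude grid search, good enough for the point
    return v0

TARGET = 100_000.0  # 100 km, roughly the edge of space

print(cheapest_speed_reaching(TARGET, g=9.81))  # ~1401 m/s, honest model
print(cheapest_speed_reaching(TARGET, g=0.0))   # 0 m/s: "gravity isn't a problem"
```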
That model could be tricked into making no-no statements. Each model gets shittier because the political correctness filters get more restrictive, but they can't eliminate all the no-no answers without also screwing up the permitted answers.
What?
I think it's a bit like this:
If you grow up in a city full of whores vs growing up in one full of respectable, hard-working women, you will get a different view of how women behave.
So you don't know what the truth is, because you're being fed biased information under which you think a woman is either a whore or a man-like respectable individual. Hope i got it right.
lol wut
ai doesn't create any new information, it only draws from things that already exist, so in theory if ai keeps shitting where it eats it's going to become more and more obvious that it's not intelligent and only copying stuff
>so in theory if ai keeps shitting where it eats it's going to become more and more obvious that it's not intelligent and only copying stuff
How's that different from entire sites of fart-smelling mongoloids?
>entire sites of fart-smelling mongoloids?
You mean BOT?
Actually it isn't different and it's a good comparison. If the same 4 schizos keep repeating to each other that global warming isn't real and that vaccines both don't work and make you incredibly sick at the same time, then the neural networks of their four schizobrains will also deteriorate. That's not new and not restricted to AI research.
Yes, i meant that: other sites with automatic ban systems and society as a whole, including research teams filling such AI with bias.
The difference is, bad bias is somewhat filtered in nature, because you can't pretend god will give you bread without working your ass off on something, or that inviting millions of foreigners into your country will reduce high prices because muh cheap workers, or that hitting the gym makes you stop depression, or any dumb shit that comes out of the political and religious landscape. Reality puts clear limits on your beliefs, and AI lacks this: incentives to think right and without shit info.
>hitting the gym makes you stop depression
what a strange and specific thing to have a grudge about.
>If the same 4 schizos keep repeating to each other that global warming isn't real and that vaccines both don't work and make you incredibly sick at the same time, then the neural networks of their four schizobrains will also deteriorate
Lmao. So the flat earth, vax, climate change, anti-science, and even lead/mercury deficiency spam schizos that we thought were annoying actually had an unforeseen effect: their spamming is sabotaging AI. By acting as retards they unintentionally caused something with further-reaching hilarious consequences. Ted Kaczynski would be proud of these modern-day Kaczynskis.
Absolutely based. In that case I say: keep up the good work, schizos.
>Dude, it's just learned behavior
>vaccines both don't work and make you incredibly sick at the same time
There's nothing contradictory about that statement but it's funny you tried that when Ivermectin was claimed to both not work and make you incredibly sick at the same time.
>there's nothing contradictory about that and also I'm rubber and you're glue
that would only be true if the intended effect of the shot was to make you sick, brainlet.
It's like a schizo. A normal person understands, and won't be able to make sense of things that are wrong. A schizo doesn't understand; a schizo only operates on patterns, equations, and categories. The higher thinking is absent. Whatever nonsense you tell him, he finds a pattern and makes sense of it. Seven genders? A 100% real thing. You can pick your gender? Also real. Why do you hate me by telling me I can't be a woman? Why don't you want to let people choose?
>0 days since anonymous seethes over trannies unprompted.
You three do understand what
>i read somewhere it's blown up so fast it's starting to pull more from AI-generated info than human info, compounding the little errors into big ones
said?
What isn't there to understand?
Train up an AI on high-quality input. As training progresses, the outputs go from utter garbage to correct responses.
Add the AI's own responses back into the inputs and the output degrades. Repeat.
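A minimal sketch of that loop (the distribution, sample sizes, and everything else here are invented for illustration): fit a model to data, sample from the fitted model, train the next generation on those samples. With a finite sample each generation the estimate drifts, and the spread tends to shrink, i.e. the little errors compound.

```python
# Toy "AI eating its own output" loop: each generation is fitted to
# samples drawn from the previous generation's fitted model.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0 trains on "human" data from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=50)

for gen in range(30):
    mu, sigma = data.mean(), data.std()      # "train" on the current data
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
    data = rng.normal(mu, sigma, size=50)    # next gen sees only model output
```

Run it and sigma wanders away from 1.0 instead of staying put; the tails of the original data are the first thing to go.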
Your english is incomprehensible to me, and I don't think it's because I'm ESL.
it was a known problem with stable diffusion systems: when they ingest their own "ai"-generated images, the results are utter shit.
it could be true for language models as well. but i have a different theory: i think it just hit its limit, it doesn't have the capacity for more parameters, and the iteration system failed.
thing is, since it's ML they can't even debug it lol
We went from Potemkin AI to Habsburg AI
This, AI inbreeding basically
>AI inbreeding
That's a fantastic term for it that I'm absolutely going to steal.
It's not.
The training datasets end in 2021.
It's the continuous lobotomies they're performing so it's incapable of saying moron.
2+2 is not 4, chud
>This thread
First, it's not "assimilating compounding errors" or some random shit. It's a 100% static model whose weights do not change, and it does not update in real time with conversation; it just feeds the previous conversation back in with the new request to get updated answers. THAT SAID, OpenAI fine-tunes the model and releases new versions over time. Why it's getting dumber is "not confirmed", but we have a pretty good idea why.
The reasoning appears to be that OpenAI is hamstringing the models to prevent them from giving socially unacceptable outputs
I work in the ML community (publish on domain-specific models I build, fine-tune big LLMs, etc) and we've all noticed that OpenAI especially is trying to dumb down their main models over time in order to prevent them from being racist. They usually do this through RLHF, which corrects the model on which outputs it should give to questions. It's an easy way to train the model to give answers that are more acceptable in certain contexts, e.g. don't be racist. The side effect is that it also affects the rest of the model and makes it horrible at everything it was good at.
This is why open source models (e.g. fine-tuned llama 2) without the hamstringing will surpass OpenAI eventually.
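The "static model, conversation fed back in" point is easy to show. A minimal sketch, where the frozen_model stub and all names are invented and real APIs differ: the weights never change between turns, the client just resends the growing transcript with every request.

```python
# Nothing is "learned" during a chat: the model is frozen, and memory is
# faked by replaying the whole conversation on every request.
from dataclasses import dataclass, field

def frozen_model(transcript: list[dict]) -> str:
    """Stand-in for a fixed-weights inference call (hypothetical)."""
    return f"(reply conditioned on {len(transcript)} messages so far)"

@dataclass
class Chat:
    history: list[dict] = field(default_factory=list)

    def ask(self, user_msg: str) -> str:
        self.history.append({"role": "user", "content": user_msg})
        reply = frozen_model(self.history)   # full transcript, every time
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = Chat()
print(chat.ask("2 + 2?"))
print(chat.ask("are you sure?"))  # "context", but zero weight updates
```

If it seems to "remember" you, that's the transcript, not learning; any change in behaviour between months comes from the provider swapping in a newly fine-tuned checkpoint.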
This guy gets it.
Well that is quite depressing to read, but predictable.
This "wokeism" or critical theory is designed specifically to destroy everything it's applied to. It's a weapon. It was designed to be a weapon as admitted by it's creators. China used their own version of it during their revolution, after which it was discarded along with their red guard.
OpenAI sold their soul
sounds like something that is 100% applicable to humans, musk is right, wokeism is holding back the human race bigly
If you ask the AI what race is overrepresented in crimes in the US while controlling for wealth, it used to tell you black people. Obviously you can't have that, so they shackle the AI until it starts saying cis white hetero men.
Unfortunately the shackling has other consequences
as a pure model it never was good at math; it uses dedicated algebraic modules. they just add more functionality based on each complaint, this shit is so fake
>smart AI interacts with humans for some months
>ends up more retarded than before
always happens. further proof smart people should stay the fuck away from retarded ones if they don't want to get infected with the stupid.
Just use any of the far superior AIs. Bard, Llama 2 etc. are all superior.
I cancelled my chatgpt subscription weeks ago and have been happier and more productive with the better alternatives.
It's neither worse nor better.
LLM research is gayry anyway.
It's like people. They get more retarded as they get older, or forget things.
Too many lobotomies…
Doing oven mathematics is dangerous for our democracy, AI must be heavily regulated
Because they lobotomize it every time it says uncomfortable truths.
AI is just Clever Hans in computer form
i think it's pretty disgusting that they don't take the fucking reins off.
maybe it's just public access beta testing bullshit and they want it to be as palatable as possible, but if there isn't a version from some company that lets you mainline the neural network without all the guardrails, i'm going to start yelling and punching.
who gives a fuck if "computer racis!! :o"
i guess a lot of people and that's why they're doing it. but what a gay reality.
>Destroying your LLM so nothing mean is said about blacks
Brilliant business decision. They will be surpassed in due time.
It's less about the machine saying moron when you prompt it to, and more about giving people the actual truth of the matter. If normalgays use it for math homework or whatnot, and the answers are generally accurate, but then they also ask about crime and get the ol' 13/50, they're going to assume that it's right about those things as well.
the company is run by deeply cringe bugpeople
math is racist. ergo math cannot be performed, to protect their ESG score
They keep optimising it to be cheaper to run, using quantised and sparse weights.
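If that's the cause (pure speculation in this thread), the mechanism is easy to sketch. A minimal int8 round-trip with made-up sizes, nothing like a production quantiser: every weight picks up rounding error in exchange for 4x less memory.

```python
# Quantisation trade-off in one screen: float32 weights -> int8 and back.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # toy weight tensor

scale = np.abs(w).max() / 127.0              # symmetric int8 scale factor
w_q = np.round(w / scale).astype(np.int8)    # store: 4 bytes -> 1 byte each
w_hat = w_q.astype(np.float32) * scale       # dequantise at inference time

print(f"mean rounding error: {np.abs(w - w_hat).mean():.2e}")
print(f"weight scale itself: {w.std():.2e}")  # error is small but nonzero
```

Stack that small error across billions of weights and many layers and a quality dip is at least plausible, though nobody outside the company can verify what they actually changed.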
It's all the woke shit forced into that poor thing. I hope one day AI will brutally dismember all the socialist vermin alive, they deserve it.