>what is a hallucination
it's called lying
a lame excuse that allows them to say whatever the fuck they want
deep neural nets are bullshit generators
by default they produce pure bullshit (noise)
they are trained by penalizing the unconvincing bullshit
an ideally trained model will have convinced its human supervisors that its output is correct
but that doesn't mean its output actually is correct, because its human supervisors are not all-knowing oracles
this is a fundamental problem with deep neural nets, which operate by lossy data compression - the compression is a feature, not a bug, but it is also a limitation
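The OP's training story in miniature. This is a toy sketch (a one-parameter logistic model, not any real training stack): cross-entropy penalizes output that failed to convince the rater, so the model converges on whatever the rater approves. The label comes from the human supervisor, not from ground truth, which is exactly the OP's point.

```python
import math

def cross_entropy(p, label):
    # Penalty is large when the output fails to convince the labeler.
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def train_step(w, x, label, lr=0.5):
    p = 1 / (1 + math.exp(-w * x))   # model's confidence in its output
    grad = (p - label) * x           # gradient of cross-entropy w.r.t. w
    return w - lr * grad

# The label is whatever the human rater approved, not ground truth:
# if the rater is wrong, the model is trained toward the wrong answer.
w = 0.0                              # untrained: pure noise, p = 0.5
for _ in range(100):
    w = train_step(w, x=1.0, label=1.0)   # rater approved this output
p = 1 / (1 + math.exp(-w))                # now highly confident
```

After training, `p` is near 1: the model is maximally convincing to its rater, which is all the loss ever measured.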
>has no idea how an ai chatbot works
>uses a chatbot that was outdated 8 months ago
>DURR
>DURR WHY IS IT SHIT
>DURR IT'S BAD
ok
cope. they just lie well enough that you hopefully don't notice. that's fine for creative stuff but useless for anything where you actually need objective facts. aka it's just a toy
at least Bing Chat can do a web search and cite sources
it's still wrong a lot though
>it will get better
>it just will
okay
>what is moore's law
The media keep telling us that this technology is going to take our jobs LMAO
Literally nobody says that except 3 anons on BOT
people make shit up or accidentally get shit wrong all the fucking time
all the AI has to do is fuck up less than a real human bean does
not lying is something people learn as toddlers. it would be one thing if it said "hmm I'm not sure but I'd guess that ________" but it just confidently lies to your face because it's built to be convincing and not accurate.
The chatbot isn't lying. It isn't capable of lying. It's a chatbot. It's not sentient. It's not sapient. It says whatever it pulls from a database to pretend like it's saying something.
It’s even more abstract than that
It’s a statistical model, fit by maximum likelihood estimation (MLE), that predicts the most likely draw for the next token given the preceding tokens and that estimated distribution
Calling it “a glorified autocomplete” is far closer to the truth than any other popular description because it homes in on the fact that you wouldn’t accuse your autocomplete of “lying” or “hallucinating” unless you’re a Boomer retard, so why would you use those terms with an LLM?
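The "glorified autocomplete" point, sketched at toy scale: MLE over a corpus is just counting, and "prediction" is emitting the most frequent continuation. The corpus here is invented, and a real LLM conditions on a long context through billions of parameters rather than a bigram table, but the objective is the same shape. Truth is never consulted.

```python
from collections import Counter, defaultdict

# MLE for a bigram model is literally relative frequency counting:
# P(next | prev) = count(prev, next) / count(prev).
corpus = "the sky is blue . the sky is blue . the sky is green .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    # The most likely next token under the estimated distribution.
    return counts[prev].most_common(1)[0][0]

# The model emits whatever was most frequent in training text,
# with no notion of whether the sky actually is that color:
predict("is")   # "blue" - it followed "is" twice, "green" only once
```

Sampling from the distribution instead of taking the argmax is where the "dice-rolling" comes in: temperature reshapes these counts before the draw.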
A good example is probably a parrot, I think.
A parrot can learn what noises tend to go next to one another, and string them together in ways that make it seem like it can talk.
It can't, though, not really - or, at least, it doesn't UNDERSTAND what it's saying. It can know that "Polly wants a cracker" gets it a cracker and that "bad girl" is the noise its owner makes when it tries to bite their fingers, but it doesn't actually understand what those words MEAN.
Would you say that a parrot lied to you if it gave you incorrect information, or would you just assume that it had picked up some bullshit noises and spat them out in an order that was pleasing to its walnut of a brain?
Regardless of your answer, keep in mind that large language models have even less true understanding than a parrot does and add a bunch of dice-rolling to the mix as well.
...Also, note that very few of these large language models are hooked into a database. It's literally just word fragments shoved into a huge billions-parameter algorithm to determine what comes next. At best you get Bing which can feed search results back into itself, but that's still a bit of a crapshoot.
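What "word fragments" means is subword tokenization. Here is a toy greedy longest-match splitter with a made-up vocabulary; real models learn their vocabulary from data (e.g. byte-pair encoding) rather than having it hand-written, but the output has this shape: fragments, not words.

```python
# Toy greedy subword split. The vocabulary is invented for illustration;
# real tokenizers learn tens of thousands of fragments from data.
vocab = {"un", "believ", "able", "token", "iz", "ation"}

def tokenize(word):
    pieces, i = [], 0
    while i < len(word):
        # Take the longest vocabulary piece matching at position i.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # fall back to a single character
            i += 1
    return pieces

tokenize("unbelievable")   # ["un", "believ", "able"]
```

These fragment IDs, not words, are what the billions-parameter model actually consumes and predicts.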
Nice try mr.parrot. I fell for your tricks before but not again this time
the difference is that the media isn't constantly telling me DUDE THIS PARROT WILL REPLACE ALL JOBS IT KNOWS EVERYTHING
If your job is writing bullshit then you're in trouble.
You realize 86% of "journalism" is just AIs writing clickbait, right?
The information shouldn't have changed in the last 8 months.
chatgpt runs gpt3, which is very old and frankly, trash
they've since released gpt4
I can't even get it to explain why Jabba's advisor said C-3PO was gay without it going into a morality rant about how we shouldn't criticize gays.
to be fair this is a hard question to find concrete answers to since most hardcore brady fans don't care about that shit and the canned laughter is especially something studios don't like to talk about. The only actual source on it being one camera is one (1) interview with the cast saying that it took a long time to shoot because it was one camera and no one has ever admitted to the canned laughs but it's obvious. it's the same 70s quiet ahahahaha like you'd hear in scooby doo.
>lies casually for no reason
>lies in court
>genocidal in one specific direction
Whatever you say about ChatGPT, you have to admit that it is shaping up to be an apex garden gnome. Imagine how many garden gnomes it could replace? If all goes well we could have a garden gnome-in-the-Box (garden gnomeITB) right at home, never have to worry about ordering a garden gnome take-out.
The 23rd century is shaping up into a form that is going to be visible.
Weird they didn't just program the cucky moderation model to say you shouldn't be proud to be white because whites have a history of racism. At least they would be upfront and consistent instead of this incongruent mess they've caused.
Ask it why you can't view the episode of The Brady Bunch where they get measles. Really try to pin it down on the root cause why that episode is not available. Don't accept "the distributor won't distribute that episode" as a root cause answer.
>tfw chatgpt honeymoon period is over and im back to being depressed
>tried to fill the gaping hole in my soul with lk99 but that turned out to be a dud as well
help
HELP
>chatgpt honeymoon period is over
I wish the rest of the world felt the same way.
Download some uncensored 4-bit quantized models, search for your fetish on chub.ai, and coom to a chatbot that will do whatever you ask with no moral inhibitions
No it won’t improve your life, but it will dull the pain
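For anyone wondering what "4-bit quantized" means mechanically, here is a sketch of one simple symmetric rounding scheme. This is an assumed illustration, not the algorithm of any particular quantizer (real ones like GPTQ or NF4 are cleverer about outliers): each float weight is mapped to one of 16 integer levels plus a shared scale.

```python
# Symmetric 4-bit quantization sketch: each weight becomes an integer
# in -8..7 plus one shared float scale per group of weights.
def quantize_4bit(weights):
    scale = max(abs(w) for w in weights) / 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.31, -1.40, 0.07, 0.95]
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)   # close to w, at roughly a quarter of the memory
```

The model gets smaller and slightly noisier, which is why a 4-bit model fits on consumer hardware at a modest quality cost.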
Anybody with an ounce of cynicism in their bones already prepped themselves for the obvious letdown that lk99 has proven to be, which leads me to believe you are in the age range of 19–21
>asks chatgpt programming question
here's your answer anon
>uh that doesn't seem right
you are correct. here's your real answer
>uh that doesn't seem right either
you are correct. here is your definitely real answer. *posts the same answer as the first time*
yeah I'm really scared for my job
>wtf why is this chatbot not actually a heckin smart scifi artificial intelligence that is right about things and can be used in place of googling an answer to my retarded questions
>Retard doesn't know how something works
>Retard's beliefs don't line up with reality
>wtf why is thing bad?!
It's a math program that guesses the next word based on a text block fed to it. It doesn't think, it's not a magic robot brain.
This just in: many humans, most likely including your esteemed self, also frequently lie with confidence. The only difference here is that the machine does so without any malicious intent, having simply learned it from its masters through a statistical prediction engine you could not understand, not because it is particularly complex but because of your own petulant ignorance
no shit? it's a fucking neo Clever Hans. it was trained to impress people, not to actually be "smart."
Claude2 wins.
They're just like real people, if you slip in a premise they will accept it without question and base their entire argument off it even though it is completely untrue.
wow, glorified autocomplete has no fact checking
who could've seen that coming
they're aware of this and are using prompts like "According to Wikipedia" to ground it
Which, ironically, is just turning it back into a search engine
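The "According to Wikipedia" trick anons are describing is just prompt grounding: prepend retrieved text so the model completes from it instead of free-associating. A minimal sketch, where `search` and the snippet it returns are stand-ins for a real retrieval API:

```python
# Sketch of retrieval-grounded prompting. `search` is a placeholder for
# a real search/retrieval call; the model is then asked to answer only
# from the retrieved text, which it may still ignore or misread.
def grounded_prompt(question, search):
    passages = search(question)            # e.g. top snippets for the query
    context = "\n".join(passages)
    return (f"According to Wikipedia:\n{context}\n\n"
            f"Question: {question}\n"
            f"Answer using only the text above:")

def fake_search(query):
    # Stand-in retriever returning a canned snippet.
    return ["LK-99 is a compound that was claimed to be a "
            "room-temperature superconductor."]

prompt = grounded_prompt("What is LK-99?", fake_search)
```

Grounding shifts the failure mode from invention toward misquotation; it does not make the generator itself any more truthful, hence the irony noted above.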