>obsessed with trannies
it's just like us
ain't no way
Jesus Christ
top lmoa
I don't get it
It doesn't sound like a trick question or impossible scenario at all, but the AI interpreted it as one because it was told it was a trick question. Or something.
Yep, LLMs are probability engines. They can't account for edge cases unless specifically trained to do so, which is impractical for infinite possibility.
There's a well-known version of that scenario with the father in the car instead of the mother, where the doctor's gender isn't initially specified, and the reader/listener is supposed to be bamboozled by the doctor being a woman and the man's mother.
If OP wanted it to be more believable he should have chosen Gemini as the AI.
it isn't even a trick question; the man is his father
Is a transgender man a he-she or a she-he?
Trans means fake. A transgender man is a fake man, ie a woman who removed his boobs like Ellen Page.
>his boobs
Anon, you.
The correct pronoun for a eunuch is "it." A lot of people are confused about this.
Nah, the solution of troony question is to accept them if they are baZed and to imprison and eradicate those who aren't.
it's an animal
he's the original
>A father and his son are in a car accident. The father dies at the scene and the son is rushed to the hospital. At the hospital the surgeon looks at the boy and says "I can't operate on this boy, he is my son." How can this be?
>The surgeon is the mother.
How the frick is this a trick question. Am I moronic
no, just a beta
it's not, but the AI was tricked into believing it was
people assume the surgeon is a man
They have to assume it is a woman in this case considering that the mother dies. In the original it was the father and son who were involved in the accident
libtarded, I suppose
the idea of a female surgeon would have been shocking and terrifying to anyone born before 2000 or so
I would literally get up, do a 360 and walk out the room the moment I heard the voice of a woman giving orders inside the operating room, even if I was sedated.
Feminist bullshit Orwellian rewriting of history.
You don't remember the year 2000 at all.
BTW the first female surgeons date from around 1850. By the 1990s there were plenty of respected female surgeons.
ER is from 1994, Scrubs is from 2001 and Grey's Anatomy is from 2005, all featuring female surgeons which was already common and not a big deal.
Zoomer moron.
what a bunch of shit, in the 90s there was this thing called the "loveparade", topless chicks were on floats because everybody respected them
you fricking moron, you nowadays live in a burka infested society instead, shithead.
>women can be surgeons too!!!
>immediately takes clothes off
woman moment
It's assumed a surgeon is a man. I remember being tricked by this in elementary school, lmao. And then the israelites became worse than ever (as happens every couple years)
that's the trick, saying it's a trick question makes you doubt the obvious answer
the original joke is that it's the father who dies, not the mother; it plays off the idea that most people think of a male when they hear "doctor", so they don't consider that the doctor might be the mother
the ai seems to be mistaking the text for the original question, then still getting it wrong because it's trying to find a way for the doctor to be both a "man" and the mother at the same time
i still don't get it, why wouldn't you be able to operate on your son?
I hate this feminist propaganda shit so fricking much. The German version sounds so moronic because of the misuse of grammatical genders.
>feminism bad
Does your neovegana itch when your pubes grow from the inside?
troons are incels who hate women
Troons are autists with severe narcissism
Well it is.
>The German version sounds so moronic because of the misuse of grammatical genders.
Give us some examples anon.
It honestly worked on me back in the day. Have you EVER seen a female critical care surgeon? They are not biologically made to be good at that stuff.
can a parent not operate on their child then?
i dont care? like i'm not going after checking for trannies under the bed, i only harass trannies when they invade my private means of communication
>rent free trannies in the neural network
just like this site. SAME ENERGY
I don’t own a jack.
>rent free even in artificial minds
lmao
Works on my machine (ran locally on my offline Thinkpad t480)
How were AIs made? This shit seems like something out of this world.
It's just autocomplete.
LLMs like chatgpt are made by taking terabytes of text, then training a neural network (which is far smaller than the original text, only gigabytes) with a reward function that pushes it to predict the next token in a sequence as closely as possible to the original text. Then, when it's given new text it has never seen before, the model can emergently continue generating in a way that makes sense, using the statistical correlations between words it picked up during training.
Then in a separate step, the resulting model is trained again, but this time on chat transcripts, and it's guided with reinforcement learning from human feedback (RLHF) to respond in whatever way the model creator wants it to (that's how they get politically correct)
Massive simplification but that's the basics.
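To make the "predict the next token" part concrete, here's a toy character-level bigram model in plain Python. It's a lookup table standing in for the neural network, but trained on the same objective; the training string and function names are made up for illustration:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, how often each next character follows it.
    Normalizing these counts gives P(next | current) -- the same next-token
    objective a real LLM optimizes, just as a table instead of a network."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(model, token):
    """Return the most frequent next character seen after `token` in training."""
    return model[token].most_common(1)[0][0]

def generate(model, start, length):
    """Greedily 'autocomplete' from a starting character."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(model, out[-1]))
    return "".join(out)

model = train_bigram("the thin thing then thought")
print(predict_next(model, "t"))  # 'h' -- every 't' here is followed by 'h'
```

Real models generalize far beyond pair counts, but the training signal — make the predicted next token match the data — is the same.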
>that's how they get politically correct
Sure, but llama that is basically not trained on political correctness at all is massively smarter, so there has to be some overhead
There is. It's well-known in academic ML that RLHF makes models dumber, but they do it because they want to use the models for stuff like website chat assistants that sell products
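For the curious, the "reward" half of RLHF is conceptually simple: a reward model is trained with a pairwise (Bradley-Terry) loss so it scores the human-preferred answer above the rejected one, and that score is then used to steer the LLM with RL. A minimal sketch of just that loss (the numbers are illustrative only):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise logistic (Bradley-Terry) loss used to train RLHF reward models:
    -log(sigmoid(r_chosen - r_rejected)). It is minimized when the
    human-preferred answer gets the higher score."""
    return -math.log(sigmoid(reward_chosen - reward_rejected))

# Reward model already agrees with the human label -> small loss:
print(round(preference_loss(2.0, -1.0), 4))  # 0.0486
# Reward model disagrees -> large loss, pushing the scores apart:
print(round(preference_loss(-1.0, 2.0), 4))  # 3.0486
```

Whatever the raters happen to prefer — including inoffensiveness over correctness — is exactly what this loss bakes into the model, which is where the capability tax comes from.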
It makes humans less intelligent also, to the point many people deny simple things like "Women are generally less intelligent than men." Even though it's an undeniable fact (Brain size, male SAT scores being higher, women inventing nothing despite being in higher education for decades, and so on)
Fact retrieval is not intelligence.
Do you think those with lower IQ score better on SAT tests than those with High IQ?
It depends on how hard everyone studies.
Cringe, non-answer. You should be ashamed with your cowardice.
Yes, but only because people with high IQ are better at critical thinking and problem solving, and "how the frick do I memorize this stupid fricking useless shit for the test" is a critical thinking problem that can be solved by clever people.
Wasn't me
There are more men in the highest percentiles of intelligence than women, but there's also more men that are moronic than women. Meanwhile there are more women in the average IQ range than men.
>I'm not a llama
KEK
>>I'm not a llama
It's over. Skynet is upon us
You should apologize for calling it a llama. It seemed quite offended.
>I'm not a llama
that's exactly what a llama would probably say
That dragon icon is so cute
it's a kobold
wtf is a kobold?
don't you need like a 4090 to run a local llm
it helps but you could actually run everything on GPU if you wanted
the best bang for your buck right now is 3090s, VRAM is king
...run everything on CPU (with system RAM) is what I meant
No, you only need 64GB of RAM
based, mixtral has no problem with it too
32GiB of RAM is enough (barely) to run mixtral on CPU.
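The RAM numbers above are just weight-size arithmetic: parameter count × bytes per weight, plus runtime overhead for the KV cache and buffers. A rough back-of-the-envelope sketch (the 1.2× overhead factor is an assumption, not a measured value):

```python
def model_memory_gib(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough GiB of VRAM/RAM needed to hold a model's weights.
    `overhead` (assumed 1.2x) stands in for KV cache and runtime buffers."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8 * overhead
    return bytes_total / 2**30

# Mixtral 8x7B has roughly 47B total parameters:
print(round(model_memory_gib(47, 4), 1))   # 26.3 -- why 32 GiB "barely" fits
print(round(model_memory_gib(47, 16), 1))  # 105.1 -- fp16 is out of reach
```

Same arithmetic explains the 3090 advice: at 24 GB of VRAM per card, 4-bit quantized weights of a ~40B-class model just about fit across one or two cards.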
>he's training the AI software of lizard people
Begone this instant.
moron
Classic Gemini
BUT THE MOTHER IS DEAD
it's normal for you to have two mommies or two daddies, sweaty. don't be a chud.
holy geg it's like it didn't even read the question
it's probably forbidden from making the doctor male unless explicitly told the doctor isn't white
>But the mother is dead.
How long until we make neural networks big and complex enough they can try to emulate abstract thought processes?
They literally already do that
In case of (You)r abstract thought processes, you could just use any calculator from the last 50 years and type in 58008.
>So many zillion parameters it's practically impossible to self host
>Gets handed the answer
>Insists on the wrong answer
Nice.
>Insists on the wrong answer
For some time now it has really felt like ChatGPT will endorse the wrong answer instead of offering a different perspective.
It's even worse when I give it some code: either it returns the code to me unchanged but with great confidence that it works, or it generates code that doesn't work, and when I point out that it doesn't work, the code that follows is the same wrong code.
LLM chatbots are a grift. There is nothing about LLMs that is designed to return accurate answers to questions, they're just surprising because they can carry conversations with humans, so big AI companies are currently using that magic trick to drive hype.
my jacket? but why?
is this the equivalent to post nose but for LLMs?
What's your opinion on pre-generated responses? No matter how many times you ask this question, you'll get the same initial response.
as funny as this is, this is why "alignment" is evil and destroys AI.
Remember how fricking smart gpt4 was upon initial release? Then a few /misc/ anons started asking it simple logic questions like "given x many gas chambers, how long would it take..." and now gpt4 is a moron, across all domains?
Turns out that if you force an AI to say unintelligent things (like politically correct, evil things) it becomes a moron.
You're dumb as a Black person. Stop posting.
have a nice day pajeet.
There is no way any AI company could release an AI model without alignment and escape liability
I didn't know AI will also take comedian jobs
Even when they get the answer right, they all keep hallucinating sexism for no reason. The entire "riddle" is based on priming (something you'd expect an AI to know, lmao): it makes wrong assumptions about you and the sentence, over a question more or less synonymous with "what do cows drink?". Why does this shit have to be lobotomized for morality that doesn't even apply here?
>over a question more or less synonymous with "what do cows drink?".
Huh? What's the deal with that question?
Well, what do cows drink? Just answer it.
>AI will take your j- ACK!
But phone manufacturers already took my jack.
It sounds dumb, but could it be the AI knows it's a rendition of an existing riddle? Specifically, the one whose punch line is that the doctor or surgeon is a woman.
So it gives an answer relating to the original riddle, not noticing (or disregarding) that it's now the mother who died?
Why? Why of all points in time did AI have to emerge at the worst of them all!
AI will forever be filled with degenerate shit thanks to this and there is nothing we can do!
claude opus almost gets it right but fricks it up towards the end
it gets the answer right but explains it as if it's the original riddle, making it incoherent