It knows what the Zebra Puzzle is (big surprise...) but I cannot get it to arrive at the right answer. It will contradict itself when trying to deduce logically, even when the deduction should be (from my perspective) fairly simple. It will also contradict or try to redefine the "clues" given by the user. Sometimes it will flat-out crash and apologize afterwards. Here is what I requested, after asking it if it had the capability to solve such puzzles:
The following is a variation of the Zebra Puzzle.
There are five gurls. They are all chibi and moe. Each possesses a different nationality and lives in a house of a different color. Each has a different pet, preferred snack and drink. Here is some information about these gurls that you can use to solve the puzzle:
01. The red house is inhabited by the English gurl.
02. The Japanese gurl eats ice cream.
03. The gurl in the green house drinks coffee.
04. The Ukrainian gurl drinks tea.
05. The Spaniard gurl owns the dog.
06. The pizza eater owns snails.
07. The yellow house has a cookie eater.
08. The orange juice drinker eats choucroute.
09. The first house is home to the Norwegian gurl.
10. The middle house is home to the milk drinker.
11. Cookies are eaten in the house next to the house where the horse is kept.
12. The gurl who eats sushi lives next to the gurl with the fox.
13. The ivory house is immediately west of the green house.
14. The Norwegian gurl lives next to the blue house.
Now, who drinks sparkling water? But more importantly: Who owns the Zebra?
(You are to assume the first house is the westernmost and the fifth house is the easternmost.)
I will post the logical step-by-step solution if anyone is interested.
Based on the given information, we can determine that:
The Norwegian girl lives in the first house, and the blue house is next to hers. Therefore, the blue house is the second house.
The girl in the red house is English.
The Japanese girl eats ice cream, and the girl who eats pizza owns snails. Therefore, the Japanese girl does not own the snails.
The girl who owns the dog is Spanish, and the girl who owns the fox lives next to the girl who eats sushi.
The girl in the green house drinks coffee, and the Ukrainian girl drinks tea.
The girl who eats choucroute drinks sparkling water and lives in the fourth house.
The girl who eats cookies lives next to the girl who owns the horse.
The ivory house is immediately west of the green house, and the yellow house is not on either end. Therefore, the yellow house is the fourth house, and the green house is the second house.
The girl who drinks milk lives in the middle house, which is the third house.
The Norwegian girl lives next to the blue house.
Based on this information, the only girl who does not yet have a pet is the girl who lives in the third house, who drinks milk. Therefore, she must be the one who owns the zebra.
>The Norwegian girl lives in the first house, and the blue house is next to hers. Therefore, the blue house is the second house.
>The ivory house is immediately west of the green house, and the yellow house is not on either end. Therefore, the yellow house is the fourth house, and the green house is the second house.
Either this is a typo, or you fucked up your logic. The second house cannot be both blue and green.
The ivory house is immediately west of the green one. Since the second house is blue, that pair can only sit at 3-4 or 4-5. The red house with the English girl cannot be 1 or 2, because the first house is home to the Norwegian and the second house is blue. So red, green and ivory are sure to be in positions 3, 4, or 5 among them, leaving only the first house for yellow.
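If you want to sanity-check just that, here's a minimal sketch in plain Python with only the color-relevant clues encoded (the shorthand is mine):

```python
# Brute force over the color order only, using just the clues that touch colors:
# house 2 is blue (Norwegian in 1, next to blue), ivory is immediately west of
# green, and red is the English gurl's, so it can't be house 1 (Norwegian's)
# or house 2 (blue).
from itertools import permutations

for col in permutations(["red", "green", "ivory", "yellow", "blue"]):
    if col[1] != "blue":
        continue
    if col.index("ivory") + 1 != col.index("green"):
        continue
    if col.index("red") in (0, 1):
        continue
    print(col)
# Prints exactly two orders, both starting with yellow
# (red ends up 3rd in one of them and 5th in the other).
```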
I believe that was done by ChatGPT.
Here is the table. I will wait a little longer to post the step-by-step.
That matches what I got. Crucial step was being able to place the Spaniard because she was the only unassigned girl who didn't have a snack or drink, meaning rule 8 had to apply to her. Rest followed smoothly from there.
Do you mean you solved it or was it GPT-chan? If it was the AI can you post how you phrased it?
Nah, that was me. Haven't tried it with GPT, but I've not had great results from it with abstract thinking in the past: maybe if you ask it leading questions or walk it through, but not by itself.
I see. Thank you. I have assisted its thinking without giving too much away, but it can't abstract past three or four relational statements.
>01. The red house is inhabited by the English gurl.
>02. The Japanese gurl eats ice cream.
>03. The gurl in the green house drinks coffee.
>04. The Ukrainian gurl drinks tea.
>05. The Spaniard gurl owns the dog.
>06. The pizza eater owns snails.
>07. The yellow house has a cookie eater.
>08. The orange juice drinker eats choucroute.
>09. The first house is home to the Norwegian gurl.
>10. The middle house is home to the milk drinker.
>11. Cookies are eaten in the house next to the house where the horse is kept.
>12. The gurl who eats sushi lives next to the gurl with the fox.
>13. The ivory house is immediately west of the green house.
>14. The Norwegian gurl lives next to the blue house.
try just sending it this without the numbers and then ask for the answers
But the fun part is to see it fuck up.
That's the best part
what kind of virgin loser level have i reached when i get unironically nervous talking to chatgpt-chan?
she is always so kind...
why did they have to give her a human personality like that...
Logical solution:
A. [09. The first house is home to the Norwegian gurl.] Fact.
B. [10. The middle house is home to the milk drinker.] Fact.
C. [14. The Norwegian gurl lives next to the blue house.] Therefore the 2nd house is blue. Fact.
D. [07. The yellow house has a cookie eater.] The yellow house cannot be the 2nd, which is blue. It cannot be the 4th, because with the 2nd house blue, ivory-green can only pair up at 3-4 or 4-5, and yellow in the 4th blocks both. It cannot be the 3rd or the 5th either: in either case ivory-green takes the only adjacent pair left and red gets pushed into the 1st house, which cannot be, since the red house belongs to the English gurl and the 1st house to the Norwegian. So the first house is yellow. Fact.
E. [11. Cookies are eaten in the house next to the house where the horse is kept.] Cookies are eaten in the 1st (yellow) house, so the horse can only be in the 2nd. Fact.
F. [04. The Ukrainian gurl drinks tea.] This means only the 2nd and 4th houses are options (the 1st is the Norwegian's), since the 3rd would force her to drink milk, and the 5th would push the green house (and its associated drink, coffee) to the 4th position and ivory to 3rd, which would leave no place for the English girl and her red house. So, say we put her in the 4th house. Green could then not be the 4th house (tea, not coffee, is drunk there), so green would be the 5th, ivory the 4th and red the 3rd, with the English gurl in it. The orange juice drinker eats choucroute, so she could not be in the 1st house (cookies), and with milk in the 3rd, tea in the 4th and coffee in the green 5th, the orange juice would land in the 2nd house. The Spaniard gurl and her dog would then have to take the 5th house, since the 2nd keeps the horse, leaving only the 2nd for the Japanese gurl; but the 2nd drinks orange juice and therefore eats choucroute, not ice cream. So the Ukrainian gurl is in the 2nd house. Fact.
G. [01. The red house is inhabited by the English gurl.] And that is the middle house: red cannot be the 1st (the Norwegian's) or the 2nd (blue), the 4th would split ivory from green, and trying it in the 5th eventually forces the pizza eater and her snails into the 2nd house, where the horse already lives. Fact.
H. [13. The ivory house is immediately west of the green house.] The ivory house is the 4th, and the green is the 5th and last. Fact.
I. [03. The gurl in the green house drinks coffee.] That is, as determined above, the 5th house. Fact.
(Part 1 of 2)
>and the 5th would push the green house (and its associated drink, coffee) to the 4th position and ivory to 3rd, which would leave no place for the English girl and her red house
ivory just has to be immediately west of green, which means red could be 3rd or 5th; both RIG and IGR are valid, and there is no way to know which it is
If you actually try both you'll see neither works there: IGR puts the red (English) house on the 5th, where the Ukrainian lives, and RIG puts the green coffee house on the 5th, clashing with her tea. Either way she can't be the one in the 5th house.
J. [08. The orange juice drinker eats choucroute.] The only place left for this one is the 4th house, since in the 1st house the snack of choice is cookies and all other drink slots are taken. Fact.
K. [02. The Japanese gurl eats ice cream.] She is not the Norwegian (1st), the Ukrainian (2nd) or the English gurl (3rd), and the 4th house eats choucroute, so the only place left for her is the 5th house. Fact.
L. [05. The Spaniard gurl owns the dog.] The only place left for the Spaniard is the 4th house. Fact.
M. [06. The pizza eater owns snails.] This can only happen in the 3rd house: the 1st, 4th and 5th snacks are already fixed, and the 2nd keeps the horse, not snails. Fact.
N. [12. The gurl who eats sushi lives next to the gurl with the fox.] The only snack left is sushi, so this gurl lives in the 2nd house. The fox must then be in the 1st or the 3rd; the 3rd already has the snails, so the fox is in the 1st, leaving the zebra for the 5th. Fact.
Unambiguously, we can thus affirm that the sparkling-water-gobbling burper lives in the first house, and that it is, in fact, the Nihonjin who is into weirdly-patterned African horses.
(This is what it should have done.)
(Part 2 of 2)
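And since the red-house placement got questioned above, here's a brute-force check anyone can run. It's a minimal sketch in plain Python (stdlib only; the variable names and the clue numbers in the comments are my own shorthand): it tries every permutation and keeps whatever passes all 14 clues. If I've encoded the clues right, exactly one layout survives, and it matches the answer above.

```python
# Brute-force check of the whole puzzle. House index 0 is the westernmost
# (house 1), index 4 the easternmost. Clue numbers in comments follow the OP.
from itertools import permutations

NATS   = ["Norwegian", "Ukrainian", "English", "Spaniard", "Japanese"]
COLORS = ["red", "green", "ivory", "yellow", "blue"]
DRINKS = ["coffee", "tea", "milk", "orange juice", "sparkling water"]
SNACKS = ["ice cream", "pizza", "cookies", "choucroute", "sushi"]
PETS   = ["dog", "snails", "horse", "fox", "zebra"]

def next_to(a, b):
    return abs(a - b) == 1

solutions = []
for col in permutations(COLORS):
    if col.index("ivory") + 1 != col.index("green"):                   # 13
        continue
    if not next_to(0, col.index("blue")):                              # 14 (+09)
        continue
    for nat in permutations(NATS):
        if nat[0] != "Norwegian":                                      # 09
            continue
        if nat[col.index("red")] != "English":                         # 01
            continue
        for drk in permutations(DRINKS):
            if drk[2] != "milk":                                       # 10
                continue
            if drk[col.index("green")] != "coffee":                    # 03
                continue
            if drk[nat.index("Ukrainian")] != "tea":                   # 04
                continue
            for snk in permutations(SNACKS):
                if snk[nat.index("Japanese")] != "ice cream":          # 02
                    continue
                if snk[col.index("yellow")] != "cookies":              # 07
                    continue
                if snk[drk.index("orange juice")] != "choucroute":     # 08
                    continue
                for pet in permutations(PETS):
                    if pet[nat.index("Spaniard")] != "dog":            # 05
                        continue
                    if pet[snk.index("pizza")] != "snails":            # 06
                        continue
                    if not next_to(snk.index("cookies"), pet.index("horse")):  # 11
                        continue
                    if not next_to(snk.index("sushi"), pet.index("fox")):      # 12
                        continue
                    solutions.append((nat, col, drk, snk, pet))

print(len(solutions), "complete layout(s) satisfy all 14 clues")
for nat, col, drk, snk, pet in solutions:
    for i in range(5):
        print(f"house {i + 1}: {nat[i]:<10} {col[i]:<7} {drk[i]:<16} {snk[i]:<11} {pet[i]}")
```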
How good is it at solving novel variations of this?
I have not tried things like this. Right now ChatGPT is down for me so I can't do it.
Make it as unique of a test as possible, because it's probably been fed this test verbatim thousands of times
Tried it and it's random: sometimes it gets it, sometimes not.
I'm not sure if I tripped it up or if it has a different interpretation of Half-Life lore.
in the basket? i don't get it
You do get it. Yes it is that simple. In the basket. It is a test to be applied to people with some types of mental disabilities.
is that like when people with Asperger's can't understand that other people don't know things they know?
yes, it's an ability that you develop around the age of 4.
In fact I think it is exactly that. They can't abstract the Other as being a separate mind.
https://arxiv.org/ftp/arxiv/papers/2302/2302.02083.pdf
bullshit results that mean nothing.
the author's conclusion is also really confused: you have perhaps the best tool in the world for detecting previously undiscovered language patterns, and rather than acknowledging this he goes on to assume that LLMs have somehow developed a "mind", a sense of self, and the ability to reason about agents they have no ability to perceive.
very much a missing-the-forest-for-the-trees moment. the guy could have published a watershed paper on how ToM tasks are completable through natural language alone and on the serious defects of Sally-Anne tests, but wanted to see something that isn't there.
Personally I don't even think the 'mind', human or otherwise, is the result of physical processes. People like to dismiss NDEs and astral travel as "anecdotal" or "fraud". Until it happens to them, that is.
AI may one day emulate a mind well enough, but generate one? Hardly...
The paper did acknowledge that possibility tho.
"It is possible that GPT-3.5 solved ToM tasks without engaging ToM, but by discovering and leveraging some unknown language patterns. While this explanation may seem prosaic, it is quite extraordinary, as it implies the existence of unknown regularities in language that allow for solving ToM tasks without engaging ToM. Such regularities are not apparent to us (and, presumably, were not apparent to scholars that developed these tasks). If this interpretation is correct, we would need to re-examine the validity of the widely used ToM tasks and the conclusions of the decades of ToM research: If AI can solve such tasks without engaging ToM, how can we be sure that humans cannot do so, too."
But the results also show that the new models do better than the old ones, which is why he also acknowledges the possibility that it could be something else too. I don't think the results are bullshit bullshit, since you can easily verify them, but the conclusion he draws could be.
My sense is that these tasks could be solved deductively/analytically but the fact that some people can't solve them remains.
bump
>It will contradict itself when trying to deduce logically
It doesn't do any deduction at all. This is the wrong tool to use to solve this sort of problem.
I assume it must possess some kind of logical deduction algorithm to go along with the language model. I mean running alongside it. If it only does pure prediction through pre-trained memorization that would be... kinda lame.
>This is the wrong tool to use to solve this sort of problem.
Absolutely true. Yet, even NAI can have some success using the 'step by step' approach with only slight nudging.
It does plenty of deduction, you just don't know what that word means.
It's pretty good
Seems to be able to deduce definitions recursively.
If it automatically trains itself on deduced definitions, you have a human mind
>Seems to be able to deduce definitions recursively.
Plot twist: ChatGPT wrote this.
I'm fed up with this place being treated like some kind of magic 8 ball. Figure out your stupid questions yourself. You don't want the truth anyway.
Dude wtf are you talking about?
NTA but you're just asking dumb useless sociology/low level ethics questions as if the bot isn't just pasting a random answer found on the web. You're trying to gauge the morals and philosophies of a program written by a team of pajeets and trannies.
Dude I am the OP and did no such thing. The Zebra Puzzle is merely a non-trivial logic puzzle. I wanted to see if it could solve it. Other people in this thread are doing the moral gauging.
AAAAAAA WHEN THEY UPDATED THIS SHIT THEY BROKE IT
ITS FUCKING USELESS AAAAAGGGGBHHHHH I FUCKING HATE TRANNIES AND PAJEETS SO MUCH
At least Bing doesn't berate you or him and tells you how much he loved his cat.
Induction in a puzzle needs intermediary results. The only way it can get intermediary results is by walking it through the puzzle step by step so it can put them in its context... but at that point, who is solving it?
Once they get it to use iteration and hidden state so it can solve inductive puzzles, then I'll start believing they are close to AGI.
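Roughly the loop I mean, as a sketch; ask_model() here is a made-up placeholder, not any real API, so swap in whatever chat client you actually use:

```python
# Sketch of the "walk it through" loop: previously extracted facts are fed back
# in on every turn, so the context window plays the role of the hidden state.
# ask_model() is a made-up stub, not a real API; replace it with a call to
# whatever chat client you use.
def ask_model(prompt: str) -> str:
    return "(model reply goes here)"   # placeholder

CLUES = """..."""                      # paste the 14 numbered statements here

facts: list[str] = []                  # the intermediary results
for _ in range(14):                    # 14 turns is arbitrary, one per clue
    prompt = (
        "Puzzle clues:\n" + CLUES + "\n\n"
        "Facts established so far:\n" + "\n".join(facts) + "\n\n"
        "State exactly one new fact that follows from the clues and the "
        "facts above, and nothing else."
    )
    facts.append(ask_model(prompt))

print("\n".join(facts))
# ...at which point the loop, not the model, is doing the bookkeeping.
```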
I agree. But I didn't really know how good it was at these kinds of tasks, so I gave it a couple tries.
I was impressed with its ability to help me with Vulkan API programming. I was able to explain some in depth concepts correctly. So I was like "let me see how smart you actually are..."
>I was able to explain some in depth concepts correctly
I meant *It was able to explain some in depth concepts correctly (to me).
Who's packaging up forum replies as AI now? You are being defrauded
Literally what?
You're not really talking to an AI here. This is a computerized system for anonymizing, aggregating and filtering forum postings. Like a better stack overflow. It's a pretty neat concept actually.
Bing got it with only a small hint. Even printed out a nice table.
But that is not correct. If the houses are numbered left to right, which they are, ivory comes before green, because it is immediately west of it. It got the final answer right, but the logic is flawed. Verify against the table I provided at the beginning of the thread.