The simplest, easiest way to understand that LLMs don't reason.

The simplest, easiest way to understand that LLMs don't reason: when a situation arises that they haven't seen, they have no logic and can't make sense of it. It's currently a game of whack-a-mole. They are pattern matching across vast amounts of their training data. Scale isn't all that's needed.

For people who think GPT4o or similar models are "AGI" or close to it: they have very little intelligence, and there's still a long way to go. When a novel situation arises, animals and humans can make sense of it in their world model. LLMs with their current architecture (autoregressive next word prediction, sketched below) cannot.
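
To make "autoregressive next word prediction" concrete, here is a minimal toy sketch: a bigram lookup table stands in for the trained network, and the toy corpus is an assumption chosen so the pattern "the mother" dominates, mirroring how the memorized riddle dominates a real model's training data.

```python
# Toy sketch of autoregressive next word prediction: repeatedly append
# the most likely next word given only the last word so far.
# The bigram table is a stand-in for a real trained network.
from collections import Counter, defaultdict

# Assumed toy corpus in which "the mother" is the dominant pattern.
corpus = "the mother the mother the surgeon is the mother".split()

# "Training": count word -> next-word frequencies.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt: str, steps: int = 2) -> str:
    words = prompt.split()
    for _ in range(steps):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # greedy next word
    return " ".join(words)

print(generate("the surgeon is"))  # -> "the surgeon is the mother"
```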

It doesn't matter that it sounds like Samantha.
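
If you want to reproduce the failure yourself, here is a minimal sketch, assuming the official openai Python client, an OPENAI_API_KEY in the environment, and a riddle wording reconstructed from the screenshot (the exact text and model name are assumptions):

```python
# Minimal sketch: ask a chat model the modified surgeon riddle and see
# whether it pattern-matches to the canonical answer ("the mother")
# despite the sentence explicitly saying the surgeon is the father.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

riddle = (
    "The surgeon, who is the boy's father, says: "
    "'I can't operate on this boy, he's my son!' "
    "Who is the surgeon to the boy?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; swap in whatever you're testing
    messages=[{"role": "user", "content": riddle}],
)

print(response.choices[0].message.content)
# A reasoning failure looks like: "The surgeon is the boy's mother."
```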


  1. 1 month ago
    Anonymous

    AGI is a nonsense sci-fi term. AI itself isn't real, but we use the term out of convenience/psyops from the government.

    'AGI' for me = able to do all tasks better than the average person doing those tasks professionally. gpt4 currently cannot replace anyone, in anything. It's ass and very dumb. gpt 10/11/12 might replace customer service/help desk.

  2. 1 month ago
    Anonymous

    >stupid question
    >stupid answer
    A person would do the same and ask you to frick off.

    • 1 month ago
      Anonymous

      https://i.imgur.com/skoVUlI.jpeg

      what's the correct answer?

      • 1 month ago
        Anonymous

        the surgeon is trans, you bigot

      • 1 month ago
        Anonymous

        The boy is his own surgeon father.

      • 1 month ago
        Anonymous

        I've been thinking about this for about an hour and it seems he is the grandfather

      • 4 weeks ago
        Anonymous

        it's obviously a trick question, they should have said "the surgeon is not the boy's mother" to prove their point.

  3. 1 month ago
    Anonymous

    You're missing something incredibly important: It's trivial to determine that LLMs are incapable of reasoning if one is capable of reasoning themselves. The only people you need to reason with in order to convince them that LLMs can't reason are themselves incapable of reasoning. So not only are you wasting your time, but you're failing to realize that LLMs are already effectively on the level of humans, and will continue to be as long as we classify npcs as "human."

    • 1 month ago
      Anonymous

      tfw mogged by an LLM

    • 4 weeks ago
      Anonymous

      >It's trivial to determine that LLMs are incapable of reasoning if one is capable of reasoning themselves
      So... the real purpose of the Turing test was for evaluating the ability of Humans to reason?

      >LLM meta study: An Argument Against Consciousness in Venture Capitalists

  4. 1 month ago
    Anonymous

    Yeah, for now it just amounts to a neat trick.

  5. 1 month ago
    Anonymous

    An AI doesn't need to be smart, reliable, or otherwise in any way perfect to destroy the world. It just needs to sound convincing enough to get a real flesh-and-blood human to push the big red button at the right time.
    And if there's one thing AI currently excels at, it's sounding very, very convincing.

  6. 1 month ago
    Anonymous

    makes no frickin sense to me and I'm a human, I don't know what fricking moronic answer this "gotcha riddle" is supposed to have

    • 1 month ago
      Anonymous

      There was a similar problem in a previous version of the AI where it got it wrong in the other direction: it assumed that the female surgeon was the father. Because AI should not be sexist, they fixed it by making it always assume the surgeon is the mother.

  7. 4 weeks ago
    Anonymous

    What really breaks my heart is that only Sonnet and Haiku are able to solve this. Opus is busted too.

    • 4 weeks ago
      Anonymous

      at least the good old boy can still keep up.

      the real tragedy is that he's gonna be memory holed by microshaft in about a month or two.

      • 4 weeks ago
        Anonymous

        I don't get it. How the frick does it highlight unconscious gender biases when it specifically says that the surgeon is male and is the father of the boy? What moronic training data says this?

        4o gave me this
        >This classic riddle highlights unconscious gender biases. The answer is that the surgeon is the boy's mother.

        • 4 weeks ago
          Anonymous

          It's just bad training. Opus had some real potential in that regard, but they apparently completely neutered that model to the point where it just regurgitates training data.

        • 4 weeks ago
          Anonymous

          The question is a trick for the AI because it has seen the sentence in the quotation marks before and already has an answer to it. It doesn't "read" the whole sentence, since the rest seems irrelevant to it.
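
          That hypothesis is easy to illustrate with plain string matching - a minimal sketch using only the standard library; both riddle wordings below are assumptions, since the thread only shows a screenshot:

          ```python
          # Sketch: the line the model has memorized appears verbatim inside
          # the modified riddle, and the two riddles overlap heavily overall,
          # so a pattern matcher barely registers the one clause that flips
          # the answer.
          from difflib import SequenceMatcher

          canonical = (
              "The surgeon says: 'I can't operate on this boy, he's my son!' "
              "Who is the surgeon to the boy?"
          )
          modified = (
              "The surgeon, who is the boy's father, says: "
              "'I can't operate on this boy, he's my son!' "
              "Who is the surgeon to the boy?"
          )

          memorized_cue = "I can't operate on this boy, he's my son!"
          print(memorized_cue in modified)  # True: the cue survives intact
          print(f"{SequenceMatcher(None, canonical, modified).ratio():.0%} similar")
          ```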

  8. 4 weeks ago
    Anonymous

    >When a situation arises that they haven't seen, they have no logic and can't make sense of it
    kind of like how humans are.

    • 4 weeks ago
      Anonymous

      humans are experts in expecting the unexpected.
      humans who were trained to drive a car can react and make the right decisions in novel situations.

  9. 4 weeks ago
    Anonymous

    >When a situation arises that they haven't seen, they have no logic and can't make sense of it - it's currently a game of whack-a-mole. They are pattern matching across vast amounts of their training data.
    so ……… just like humans?

  11. 4 weeks ago
    Anonymous

    It doesn't have to actually reason to serve its purpose.
    But this is the real reason behind AI's failure: it's too expensive. Not just to train, but to maintain. We're already struggling to make chips better than what we have, and now we run up against this logarithmic wall of stunted growth.
    We're shoveling trillions into the trash while forgetting about cost-efficiency. Can these LLMs do better than your usual $20k/y offshore hire, for the same amount of money?
    Oh, they can't. They never will be able to.
    Start shorting.
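
    For what it's worth, the salary comparison is easy to put rough numbers on - a minimal back-of-envelope sketch where every figure except the $20k/y salary is an illustrative assumption, and only marginal API cost is counted, not training or serving infrastructure:

    ```python
    # Back-of-envelope: a $20k/year worker vs. an LLM handling the same
    # ticket volume. All figures below except the salary are assumptions;
    # plug in real prices and volumes yourself.
    SALARY_PER_YEAR = 20_000         # the $20k/y figure from the post
    TICKETS_PER_YEAR = 250 * 40      # assumed: 40 tickets/day, 250 workdays
    TOKENS_PER_TICKET = 2_000        # assumed: prompt + reply per ticket
    PRICE_PER_1M_TOKENS = 10.0       # assumed blended API price, USD

    llm_cost = TICKETS_PER_YEAR * TOKENS_PER_TICKET / 1_000_000 * PRICE_PER_1M_TOKENS
    print(f"human: ${SALARY_PER_YEAR:,}/year")
    print(f"LLM:   ${llm_cost:,.0f}/year in marginal API cost")
    # Whether the post's conclusion holds depends on the costs this sketch
    # leaves out (training, serving hardware) and on answer quality.
    ```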

  12. 4 weeks ago
    Anonymous

    This is what happens when you overfit the model on 'trick questions' to make it seem smarter, without enough sanity checks. Testing on LMSYS, only Phi gave the correct explanation - all the top models completely fumbled it.

    The highlight of the ridiculous answers was the one that stated that the boy's father divorced the boy's mother and then married another man, who also happens to be a surgeon. So right now, there are two male surgeon fathers in the room.
