ChatGPT bros.. how do we respond?

https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq

  1. 2 months ago
    Anonymous

    still got 11 points on that b***h

    • 2 months ago
      Anonymous

      But it has more knowledge, by a vast order of magnitude.

      • 2 months ago
        Anonymous

        >more knowledge
        any moron can google

      • 2 months ago
        Anonymous

        Knowledge isn't intelligence. These things are just glorified search engines and translators.

        • 2 months ago
          Anonymous

          I mean, they're literally not search engines, because they don't search through a database of content looking for relevant matches to key words or phrases. They work entirely differently from that. In fact I don't think it's 100% clear exactly how they DO work right now. The engineers are a little ahead of the theoreticians.
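
          Rough toy sketch of the difference, if it helps (everything below is made up for illustration, not how any real engine or model is implemented):

            # a search engine keeps its corpus around and scans it at query time
            corpus = {
                "doc1": "beef wellington recipe: wrap the fillet in duxelles and puff pastry",
                "doc2": "chess openings for beginners: e4, d4, and how to respond to each",
            }

            def keyword_search(query):
                terms = query.lower().split()
                # score each stored document by how many query terms appear in it
                return max(corpus, key=lambda d: sum(t in corpus[d] for t in terms))

            # an LLM keeps no corpus at inference time; only learned weights remain,
            # and output is sampled one token at a time from a learned distribution
            def generate(prompt, model, tokenizer, sample, max_tokens=50):
                tokens = tokenizer.encode(prompt)       # hypothetical tokenizer
                for _ in range(max_tokens):
                    probs = model(tokens)               # weights -> next-token probabilities
                    tokens.append(sample(probs))        # nothing is looked up anywhere
                return tokenizer.decode(tokens)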

          • 2 months ago
            Anonymous

            It just searches through bodies of text it has absorbed. It's a search engine.

            • 2 months ago
              Anonymous

              More like making a moron read through all the text and then asking him to remember it

            • 2 months ago
              Anonymous

              I mean it literally does not do that, but whatever.

            • 2 months ago
              Anonymous

              Gonna need you to have a nice day buddy

          • 2 months ago
            Anonymous

            They may not work exactly like search engines under the hood, but they are search engines in practice. They just give you information they have "learned" when prompted. So it's like if you want the recipe for Beef Wellington without having to read through someone's life story first on a website infested with trash. But then again, you may have to do that anyway to confirm the veracity of whatever the bot spat out, because they are so unreliable. They are not capable of reasoning (they are very bad at it, in fact) or of generating new knowledge. They utterly lack the ability to think in abstract ways. It's a curious new technology, but there's nothing "AI" about it. It may even help automate some low-level white collar jobs, but nothing important.

            • 2 months ago
              Anonymous

              >they are search engines in practice.
              >They are not capable of reasoning (they are very bad at it, in fact) or of generating new knowledge.
              >They utterly lack the ability to think in abstract ways.
              Those are some bold statements, buddy.
              If LLMs are just search engines, with no ability to handle situations they haven't seen before, then how well do you think they could play chess?
              Maybe they could parrot a few opening moves that they've seen before, but after that their moves should be completely random, right?
              They should even start hallucinating the existence of pieces on certain squares, because they lack a coherent world model.
              https://arxiv.org/abs/2402.04494
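
              You can even test this yourself with python-chess if you don't believe the paper. A minimal sketch, assuming ask_model() is whatever wrapper you use to get a move in SAN notation out of the LLM:

                import chess  # pip install python-chess

                def count_hallucinated_moves(ask_model, n_plies=40):
                    """Play out a game and count how often the model proposes a move
                    that is illegal in the current position (e.g. a piece that isn't there)."""
                    board = chess.Board()
                    illegal = 0
                    for _ in range(n_plies):
                        if board.is_game_over():
                            break
                        move_san = ask_model(board.fen())  # hypothetical: returns e.g. "Nf3"
                        try:
                            board.push_san(move_san)       # raises ValueError if the move is illegal
                        except ValueError:
                            illegal += 1
                            board.push(next(iter(board.legal_moves)))  # substitute any legal move
                    return illegal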

              • 2 months ago
                Anonymous

                he’s trolling you

              • 2 months ago
                Anonymous

                A significant part of chess is memorization and pattern recognition, and these models are excellent at those things. It's their bread and butter. And the paper says they fed it 10 million chess games, amounting to 15 billion data points. For something with such capabilities, that would be enough to reach a high Elo. In fact, that's what they seem to be going for - performance arising from scale. But by that point, I'd argue the model is closer to a "Narrow AI" than anything else. Impressive at what it does, but something we've already seen before with systems like Deep Blue; it's just significantly easier and cheaper to train. That is not sufficient proof of even a rudimentary level of reasoning. I would have been significantly more impressed, and scared, if they gave it just enough games for it to grasp the rules and interactions, then let it figure everything else on its own, and it still reached a high Elo. That would be proof of actual, high-level reasoning and abstract thinking.

              • 2 months ago
                Anonymous

                >they fed it 10 million chess games
                After 8 moves (i.e. 4 moves per side) there are 84,998,978,956 possible chess positions.
                That's nearly 85 billion, with a B, positions before the game is even out of the opening.
                10 million games is basically nothing in terms of memorizing sequences or board states.
                >I would have been significantly more impressed, and scared, if they gave it just enough games for it to grasp the rules
                You know that's what they did with AlphaZero, right?
                https://en.wikipedia.org/wiki/AlphaZero
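
                Back-of-the-envelope, if you want a sense of how little 10 million games is (the average game length here is a rough assumption on my part):

                  positions_after_8_plies = 84_998_978_956  # the figure above: 4 moves per side
                  games = 10_000_000
                  avg_plies_per_game = 80                   # rough assumption, ~40 moves per side

                  positions_seen = games * avg_plies_per_game       # ~800 million, with lots of duplicates
                  print(positions_seen / positions_after_8_plies)   # ~0.0094, i.e. under 1% of just the early game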

              • 2 months ago
                Anonymous

                I skimmed over it, so I misread it as saying their model outperformed AZ, which I concede would be very impressive. I may dive into the paper more thoroughly when I have time, but for the present I remain unconvinced, nonetheless.

              • 2 months ago
                Anonymous

                >I misread it as saying their model outperformed AZ,
                I might have misled you, or I might be misunderstanding your comment, but the LLM chessbot didn't beat AlphaZero.
                My point was just that AlphaZero was only given the rules and was able to "figure everything else on its own" which you admit is "proof of actual, high-level reasoning and abstract thinking".
                So, while I admit that the LLM chessbot wasn't as impressive (both in terms of chess ability and minimality of training data), I think that 15 billion data points is sufficiently small (fewer than the number of positions reachable within 4 moves per side) that the LLM chessbot must still be doing some high-level reasoning and abstract thinking.

  2. 2 months ago
    Anonymous

    impressive

  3. 2 months ago
    Anonymous

    If you only need 100 IQ to beat random chance, what are the extra IQ points in humans allocated to?

    • 2 months ago
      Anonymous

      please look at the chart again, even a moron with an IQ below 80 can beat random chance almost every time

    • 2 months ago
      Anonymous

      These LLMs are good at spitting out long sentences and talking in circles, but they aren't great at any sort of general problem solving.

      • 2 months ago
        Anonymous

        The article literally proves that they are good at problem solving.

      • 2 months ago
        Anonymous

        they are if you know how to prompt

      • 2 months ago
        Anonymous

        >they aren't great at any sort of general problem solving.

        [...]

  4. 2 months ago
    Anonymous

    >brand new model faster than year old model
    WOA WTF. GYAT!!!!

  5. 2 months ago
    Anonymous

    >grok
    >grok fun
    >64iq

  6. 2 months ago
    Anonymous

    >random guesser smarter than some people

  7. 2 months ago
    Anonymous

    gpt4 is so 2013

  8. 2 months ago
    Anonymous

    >cherry picked bullshit
    >maximumtruth.org
    what the frick is this moronic bullshit?

  9. 2 months ago
    Anonymous

    IQ is a measure of the ability to solve abstract problems. LLMs literally cannot even think in abstract terms; they just see text tokens. That image is fricking moronic even compared to the average AI propaganda.

  10. 2 months ago
    Anonymous

    That's a bit hard to believe given my own experience with GPT4 and how unreliable it sometimes gets when I'm talking to it. But having said that, if it were true, GPT5 would genuinely start to become scary, especially if you take into account the jump from 3.5 to 4.

    • 2 months ago
      Anonymous

      >GPT5 would genuinely start to become scary
      not only will it have more training compute and data behind it, it will probably also use techniques like Retrieval Augmented Generation, Graph of Thoughts, and Q* for searching through the token space, so it should basically never hallucinate. the benchmark to watch is the GPQA https://manifold.markets/MatthewBarnett/will-ais-beat-human-experts-in-ques
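
      for reference, RAG in its most basic form is just retrieve-then-prompt, something like this (toy sketch; the bag-of-words "embedding" stands in for whatever real embedding model would actually be used):

        import numpy as np

        docs = [
            "GPT-4 was launched on March 14, 2023.",
            "AlphaZero was given only the rules of chess and learned by self-play.",
        ]

        # toy "embedding": bag-of-words counts over a shared vocabulary,
        # standing in for a real embedding model
        vocab = sorted({w.lower().strip(".,!?:") for d in docs for w in d.split()})

        def embed(text):
            words = [w.lower().strip(".,!?:") for w in text.split()]
            return np.array([words.count(w) for w in vocab], dtype=float)

        doc_vecs = [embed(d) for d in docs]

        def rag_prompt(question, k=1):
            q = embed(question)
            # retrieve the k documents most similar to the question (cosine similarity)
            sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9) for v in doc_vecs]
            top = [docs[i] for i in np.argsort(sims)[-k:]]
            # ground the model by pasting the retrieved text into the prompt it sees
            return "Answer using only this context:\n" + "\n".join(top) + "\n\nQ: " + question + "\nA:"

        print(rag_prompt("when was GPT-4 launched?"))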

      • 2 months ago
        Anonymous

        >Retrieval Augmented Generation, Graph of Thoughts, and Q* for searching through the token space
        I read some explanation from some "supposedly smart research dude" that the real magic is going to happen when we explore something involving latent space generalization, which could give LLMs the capacity to transfer stuff from one knowledge field into another. He said that right now they struggle a lot and have to rely on specific examples, more often than not resulting in gibberish.
        Pretty exciting stuff.

        • 2 months ago
          Anonymous

          i've been wondering for a while if there isn't some way to do a kind of transfer learning within the neural network itself, without it needing to be trained on any external data. maybe it would be a completely different type of training process which does RL over the neural weights in some principled way to find more compact representations of the abstract representations within them. that might need a completely different kind of structure compared to LLMs, though, one which allows for miniature reusable thought circuits. if such a thing was possible, it might lead to AIs which are much closer in energy efficiency to the brain, so it would be dangerous to reveal that it's possible.

  11. 2 months ago
    Anonymous

    ah, it's another amazing research paper, known as a personal blog post. i am very impressed with all the scientific methodologies that were employed in this highly intellectual endeavor in order to obtain such deep findings. thank you, op-sama, for posting them.

    • 2 months ago
      Anonymous

      bro, what do we do when GPT5 reaches 125 IQ? I have 107 or so and I'm barely surviving here as a frontend dev. I'm toast and already looking up tutorials on how to rob banks.

      • 2 months ago
        Anonymous

        just get more iq, duh. it's really that simple.

  12. 2 months ago
    Anonymous

    gpt 5 in 7 days
    gap will go from 1 year to a full decade
    we've won
    t. sam

    • 2 months ago
      Anonymous

      >gpt 5 in 7 days

      Source????????

      • 2 months ago
        Anonymous

        GPT-4 was launched on March 14. That's probably what he's basing that number on.

        • 2 months ago
          Anonymous

          when was gpt 3.5 launched

  13. 2 months ago
    Anonymous

    ChatGPT is filtered, so it's automatically bad. Claude is even more filtered, so it's automatically worse.
    Don't support filterdevs.

    • 2 months ago
      Anonymous

      what are you doing that's getting filtered

  14. 2 months ago
    Anonymous

    >literally trained on IQ test data
    >most get below 100
    Well, it looks like humans are safe.

  15. 2 months ago
    Anonymous

    >one closed-source proprietary garbage beats another closed-source proprietary garbage

    Not interested in either.

  16. 2 months ago
    Anonymous

    Disingenuous framing, because Sonnet is by no means smarter than GPT4; only Opus probably is.
