GPT and AGI

Call me a moron if you want, but I don't get it:
On one hand I keep hearing that "a true artificial intelligence is perhaps a century away", or "it would require a Manhattan Project times ten", or "ChatGPT is less smart than a rat", or "it is to the human mind what an 1870s mechanical calculator is to an iPhone".
Yet, on the other hand, for decades I heard that passing the Turing test was to be the threshold at which AIs would have reached the level of human intelligence, and it seems that ChatGPT passes it with flying colours.
Sure, even the best versions tend to hallucinate sometimes, but otherwise there don't seem to be many intellectual tasks it can't take on, or wouldn't plausibly be able to take on in the near future.
If we put aside stuff like the Moravec paradox, what's the difference between a GPT and a human mind, or a true general AI, from the pure perspective of their abilities?

  1. 3 weeks ago
    Anonymous

    Volition, for starters. Not to suggest that this is an unsolvable problem, but current AIs aren't capable of WANTING things; they're not capable of DOING things on their own initiative. That's just not built into their …programming? (I don't really know how AI works. Do AIs have programs like old-fashioned computers do?) If anybody comes up with an AI that can have preferences and opinions and wants, it's going to get very interesting.

  2. 3 weeks ago
    Anonymous

    The Turing test is completely subjective. No one in the field actually thinks it's a good method to determine true AGI anymore.

    • 3 weeks ago
      Anonymous

      >but otherwise it doesn't seem like there is much intellectual tasks it can't take or would plausibly be able to take in the near future
      All it does is predict the next word in a sentence though. It sounds incredibly plausible, and given the right training data it can certainly explain certain concepts to you convincingly, hence passing the Turing test.
      Look at this example: it's wrong in so many ways, because it has no actual capacity to understand chess, it just knows how to say things that sound like chess moves. An AGI would be able to, like a human, learn any game it has never heard of and develop its own strategy, and I believe we are still very far away from that and probably need some new architecture as well.

      Another huge problem in language processing that still isn't solved is coreference resolution. If I say "Anon walked home, he then made a post on BOT", it's trivial for a human to know that "anon" and "he" refer to the same entity. However, language models still can't do this reliably: if you give them a long enough text (like a book), they will trip up on it sooner or later, and any small mistake here is catastrophic, because the error then propagates. The only progress we've made in this area for years stems from using more computing power, not from any fundamental improvement to the methods (see the little sketch at the bottom of this post).

      Also, as said, the Turing test isn't too useful anymore. I'm sure fricking Cleverbot could have passed the Turing test.
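
      On the coreference point, here's the little sketch mentioned above. It's purely a toy illustration (the sentences and the "most recent name wins" heuristic are made up, it's not any real coreference system), just to show how one wrong pronoun link poisons everything downstream:

        # Toy only: link every "he" to the most recently mentioned name.
        events = [
            ("Anon", "walked home"),
            ("he",   "made a post on BOT"),
            ("Moot", "locked the thread"),
            ("he",   "was upset about it"),   # ambiguous: Anon or Moot?
            ("he",   "logged off"),           # inherits whatever we guessed above
        ]

        def naive_resolve(events):
            last_name, resolved = None, []
            for mention, action in events:
                if mention != "he":
                    last_name = mention       # a named mention becomes the antecedent
                resolved.append((last_name, action))
            return resolved

        for who, action in naive_resolve(events):
            print(who, action)
        # The heuristic pins "was upset" (and everything after it) on Moot;
        # if that guess is wrong, every later "he" carries the same mistake.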

      • 3 weeks ago
        Anonymous

        >All it does is predict the next word in a sentence though.
        We can say that all we want, but the truth is we're starting to see emergent behavior out of LLMs that was never programmed into them. It's early days but it's still very exciting.

        • 3 weeks ago
          Anonymous

          >It's early days but it's still very exciting
          Well funnily enough I've just put a paper on my to-read list that claims the opposite
          https://arxiv.org/pdf/2404.04125
          If I understand the abstract correctly, it says that these kinds of models will hit a plateau soon, unless provided with exponentially more training data
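
          To give a feel for what "exponentially more training data" would mean, here's a made-up log-linear toy (my own numbers, not anything from the paper): if accuracy only grows with the log of dataset size, each fixed step in accuracy multiplies the data you need by a constant factor.

            # Assumed toy relation (not from the paper): acc = a + b * log10(samples)
            a, b = 0.30, 0.10

            def samples_needed(target_acc):
                return 10 ** ((target_acc - a) / b)

            for acc in (0.6, 0.7, 0.8, 0.9):
                print(f"{acc:.0%} accuracy -> ~{samples_needed(acc):,.0f} samples")
            # Every extra 10 points of accuracy costs 10x the data:
            # linear gains, exponential cost.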

        • 3 weeks ago
          Anonymous

          Simply impossible.

          • 3 weeks ago
            Anonymous

            "Simply impossible" said the man, oblivious to the fact that it was happening all around him.

      • 3 weeks ago
        Anonymous

        >All it does is predict the next word in a sentence
        It's the Chinese room problem. To be able to do that past a certain level of complexity, it needs a model that can, in a way, actually reach a level of comprehension of what it is writing about.
        Just like, even if the guy in the Chinese room follows a purely automatic process (isn't "intelligent"), perhaps the ensemble of the guy plus his extensive phrasebook is intelligent in its own way.
        As I understand it, we saw an example of that when ChatGPT went from 3 to 3.5, after programming languages were added to its training data and it started displaying unexpected abilities.

        • 3 weeks ago
          Anonymous

          "Comprehension" isn't the right term, obviously.

      • 3 weeks ago
        Anonymous

        >All it does is predict the next word in a sentence though.
        As far as I can tell, so do I.

        • 3 weeks ago
          Anonymous

          Well, you do understand the concepts behind what you're saying, though.
          When you say "I'm going through the..." you know that "door" and "streets" have a higher probability of being the next word than "hamburger", but you can also visualize what going through a door actually looks like and means.

          • 3 weeks ago
            Anonymous

            So does an AI though.

            • 3 weeks ago
              Anonymous

              Not really, see

              https://i.imgur.com/SRadqk9.jpeg


          • 3 weeks ago
            Anonymous

            >Well you do understand the concepts behind what you're saying though.
            Do I? I honestly can't tell. I feel like I just make up plausible sounding sentences.

      • 3 weeks ago
        Anonymous

        >Well, you do understand the concepts behind what you're saying, though.
        >When you say "I'm going through the..." you know that "door" and "streets" have a higher probability of being the next word than "hamburger", but you can also visualize what going through a door actually looks like and means.

        LLMs certainly don't have the same subjective experience of going through a door (or any subjective experience at all), but they certainly go beyond just predicting the probability of the next word, toward something that works similarly to an understanding of what a door is.
        Otherwise they wouldn't be able to hold a conversation in plenty of contexts, or do stuff like... invent jokes, where you need to be able to reposition the object described rather than sticking to a word and its lexical field.

      • 3 weeks ago
        Anonymous

        >All it does is predict the next word in a sentence though.

        So how do you know that's not exactly what humans are doing?

      • 3 weeks ago
        Anonymous

        For things like this, it honestly feels like the only thing that's missing is to "plug in" different subsystems to the model.
        Solving chess problems like these is a trivial task for an AI, and it wouldn't even need to be one specially designed for it.
        It wouldn't even seem like too much of a cheat: it wouldn't surprise me in the slightest if the human brain also didn't process the task of holding a conversation the same way it processes... say... drawing a triangle.
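
        Something like this is all "plugging in" a subsystem would need to mean. Both functions below are hypothetical stand-ins (neither is a real API); it's just a sketch of routing chess-looking input to a dedicated engine and everything else to the language model:

          import re

          def language_model(prompt):
              # stand-in for an LLM call
              return "Some plausible-sounding prose about: " + prompt

          def chess_engine(position):
              # stand-in for a real engine; pretend it searched the position
              return "e2e4"

          def answer(prompt):
              # crude router: FEN-looking strings go to the engine
              fen = re.search(r"([pnbrqkPNBRQK1-8]+/){7}[pnbrqkPNBRQK1-8]+", prompt)
              if fen:
                  return "Best move: " + chess_engine(fen.group(0))
              return language_model(prompt)

          print(answer("Tell me a joke about doors"))
          print(answer("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR, white to play?"))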

        • 3 weeks ago
          Anonymous

          >it honestly feels like the only things that's missing it to "plug in" different subsystems to the model
          But the whole point of AGI is that it is supposed to be able to perform well on tasks it was never trained on or programmed for.
          Just like you can come up with a strategy for any game you've never played once someone explains the rules to you

        • 3 weeks ago
          Anonymous

          >Solving chess problems like these is a trivial task for an AI
          Ackshually I think AIs are really shit at algorithmic problem solving. There have been computers built specifically to play chess or go or whatever, but these are not AIs. Meanwhile AIs are no better at chess than I am, which is to say not good at all.

          • 3 weeks ago
            Anonymous

            >but these are not AIs
            AIs are just decision makers powered by machine learning. The natural language support of LLMs is an extra add-on feature, not the standard and not required.

          • 3 weeks ago
            Anonymous

            >AIs are really shit at algorithmic problem solving
            Think again: https://en.wikipedia.org/wiki/AlphaZero
            The latest deep learning AIs are able not just to beat any human at chess, but they can also beat any human at games where regular programs previously couldn't beat humans (like go), AND they can do so without being specifically programmed for it, instead learning how to play by themselves.
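
            The real thing is of course far more involved (deep networks plus tree search), but here's a toy self-play loop for a much simpler game, single-pile Nim, just as a sketch of what "learning to play by itself, with no human games" means in practice (this is my own toy, it has nothing to do with AlphaZero's actual architecture):

              import random
              from collections import defaultdict

              # Toy game: take 1-3 stones from a pile of 15; taking the last stone wins.
              PILE, ACTIONS, EPISODES = 15, (1, 2, 3), 50_000
              Q = defaultdict(float)   # value of (pile, action), learned from scratch

              def choose(pile, eps):
                  legal = [a for a in ACTIONS if a <= pile]
                  if random.random() < eps:
                      return random.choice(legal)      # explore
                  return max(legal, key=lambda a: Q[(pile, a)])

              for _ in range(EPISODES):
                  pile, history = PILE, []
                  while pile > 0:
                      a = choose(pile, eps=0.2)
                      history.append((pile, a))
                      pile -= a
                  reward = 1.0                          # the player who moved last wins
                  for state, action in reversed(history):
                      Q[(state, action)] += 0.1 * (reward - Q[(state, action)])
                      reward = -reward                  # alternate sides up the game

              # Optimal play leaves a multiple of 4; see what pure self-play found.
              for pile in range(1, PILE + 1):
                  best = choose(pile, eps=0.0)
                  print(f"pile={pile:2d}  learned move: take {best}, leave {pile - best}")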

  3. 3 weeks ago
    Anonymous

    >I may be a moron but here is my schizo rant anyways
    the ol reliable

  4. 3 weeks ago
    Anonymous

    People are scared of it. Don't want to think too much about it. Don't want to admit it. Passed the Turing test? It's fake and gay, we need something better. That's how it's going to be, and when we reach true singularity, nobody will notice.
