Are we really that close to AGI ? I thought GPT was just a text predictor/inverse compressor ?

Also they are saying it can "learn online" do we really want the most powerful cognitive entity on the universe to learn from the cesspool that is the internet ?
Are things really going this fast or is it just memes ?


  1. 7 months ago
    Anonymous

    yeah i mean once we just figure out the next step, THAT is the one step that's gonna give us AGI. like, we're just one step away.

  2. 7 months ago
    Anonymous

    >random twotter moron says something moronic
    No we're not. LLMs are fancy text predictors with no reasoning behind their output

  3. 7 months ago
    Anonymous

    Nope. LLMs might be some part of a future AGI, but they will never be one themselves.

  4. 7 months ago
    Anonymous

    Crazy talk. True AGI will be elusive until next century or even longer.

    We will however get multimodal AI models that can fake generality by being able to do lots of different things. They are, however, not truly general and not capable of the same generality as humans.

    I do also believe that the current "fake it till you make it" paradigm is good and that future LLMs will be better than the best humans in a variety of tasks related to language. I also believe that you can kickstart some sort of singularity with a powerful enough LLM, even though it is not general.

    I don't believe that we will be able to create a digital human mind anytime soon, and something like that is not even necessary. Tools don't have to be sentient beings with their own will. ASI without sentience is not only possible, it is the easier and more humane way.

  5. 7 months ago
    Anonymous

    If it can’t help me debug unreal engine or teach me how to do some physics math without bullshitting then it ain’t intelligent.

    • 7 months ago
      Anonymous

      ChatGPT 4 with the Wolfram Alpha plugin will teach you all the math you want.

  6. 7 months ago
    Anonymous

    The goalpost for AGI always gets moved once we achieve it.
    If we could go back 5 years and show someone GPT3 they'd think it's AGI. GPT4 would be considered super powerful AGI.
    >b-b-b-but just autocomplete on steroids durrrr no reasoning durrrrr
    shut up gayboy. LLMs need to be able to reason, otherwise they couldn't do most of the things they do now.
    How can you explain ChatGPT being able to understand and correctly answer almost anything you ask it, no matter how convoluted the question, if it couldn't reason to some degree?

    • 7 months ago
      Anonymous

      I don't know how it is now, but some time ago when I tried it, GPT was awful at even basic calculus. Forget about anything more complex. Until models are able to make connections, infer solutions, and produce new knowledge on their own instead of just 'predicting', they will never be an AGI.

    • 7 months ago
      Anonymous

      >correctly answer almost anything you ask it,
      the frick have you been using, dude? lol. don't trust a damn thing chatgpt tells you unless you're asking it very detailed questions on a subject you're already reasonably familiar with, so you'll know whether the shit it's making up is incidentally correct or not

    • 7 months ago
      Anonymous

      like i can't get over how stupid this post is. reasoning? give me a frickin break. it's trained on assloads of text to produce stuff that looks like a response to a prompt, that's it, absolutely nothing else is considered, certainly not correctness. it's an illusion; a parlor trick. albeit a very useful one if you're aware of its limitations and know how to write good prompts

    • 7 months ago
      Anonymous

      Chatgpt just fricking lies confidently. Once you realize that, it's not only worthless, it's extremely counterproductive. It's worse than a person because at least most people have the moral sense to not just fricking lie instead of saying "I'm not sure" or something. Once more normies understand this it's going to be a bubble pop like no one has ever seen before.

  7. 7 months ago
    Anonymous

    >Are we really that close to AGI ?
    depends on your definition of AGI. we can't even define intelligence, so why do you expect us to be able to define a GENERAL intelligence?
    is being able to act like a 4yo general? Einstein? Ultron in Marvel? who the frick knows where the threshold is and what kind of external and internal behavior should be expected from such a machine.
    >I thought GPT was just a text predictor/inverse compressor ?
    glorified probability machines, yes, but with enough data it's possible to do a lot of things like holding a conversation, writing code and much more.
    the real issue with this kind of software is the importance of the training dataset; training some kind of llm is literally one git clone away these days, but you will most likely not get good enough results.
    this is by nature: llms are very good at producing input for another piece of software (ML-based or otherwise), and this other software will be the core of the AI, or just yet another layer to format data.
    I can see a future with a front-end design where you input some data (text, image, gps coordinates, etc), the llm pre-processes the data and then dispatches it to the relevant service where the heavy lifting is done (or where another round of pre-processing happens)
    >Also they are saying it can "learn online" do we really want the most powerful cognitive entity on the universe to learn from the cesspool that is the internet ?
    do you have a better dataset that contain videos, images, sounds, texts and much more?
    >Are things really going this fast or is it just memes ?
    llms are getting incredibly good at "understanding" texts and images, and very soon videos and sounds. there is a non-negligible number of fields where this kind of preprocessing of data will have massive impacts; I don't see making AGI being one of them in the near future.
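
    the front-end dispatch idea above, as a toy sketch in python (every name here is invented for illustration; the classify_intent stub stands in for an actual llm call):

```python
# Hypothetical sketch: LLM as pre-processor, plain dispatcher routes
# the request to whatever service does the heavy lifting.

def classify_intent(text):
    # stand-in for an LLM call that tags the request
    if "route" in text or "gps" in text:
        return "navigation"
    if any(ch.isdigit() for ch in text):
        return "math"
    return "chat"

SERVICES = {
    "navigation": lambda t: f"[nav service] handling: {t}",
    "math":       lambda t: f"[math service] handling: {t}",
    "chat":       lambda t: f"[chat service] handling: {t}",
}

def dispatch(text):
    intent = classify_intent(text)   # LLM pre-processing step
    return SERVICES[intent](text)    # heavy lifting happens elsewhere

print(dispatch("what is 2+2"))  # → [math service] handling: what is 2+2
```

    the point being: the llm never needs to be the core, it just has to normalize messy input well enough for the dumb router behind it.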

    • 7 months ago
      Anonymous

      AGI is basically an entity that can solve any cognitive problem. For example it will be able to solve all of maths the instant it is created. Basically a god.

      And yeah, I have doubts about the probabilistic approach. AGI is basically the ultimate compressor. It can take any dataset, extract the noise, spit out a pattern AND modify itself with the minimum amount of code necessary. It will be deterministic.

  8. 7 months ago
    Anonymous

    This tweet reads like a bunch of nonsense and I know nothing about AI. I'm pretty sure this guy is full of shit.

  9. 7 months ago
    Anonymous

    Unpopular opinion:
    Computers are like clocks. The people who build the clockwork are intelligent and sometimes dumb or insane, but the clock is not intelligent, or dumb, or insane.

  10. 7 months ago
    Anonymous

    >AGI
    It's literally not happening in your lifetime. Anyone that tells you it is is an ML grad trying to get startup money.

  11. 7 months ago
    Anonymous

    Yes imo. A bit of fine tuning and limiting of the response should be able to get to the AGI-like capabilities.

  12. 7 months ago
    Anonymous

    so much goes into the concept of intelligence that people don't ever really think about. at current it's an exclusively human concept, and humans are much more than just brains, and brains certainly are much more than algorithms running on a computer

    • 7 months ago
      Anonymous

      >at current it's an exclusively human concept
      Are you moronic?

      • 7 months ago
        Anonymous

        to what would we ascribe unqualified intelligence besides a human?

        • 7 months ago
          Anonymous

          Primates, Monkeys, Dogs, wolves, elephants, most of the mammal kingdom, even birds, etc.

          They all have both intelligence and consciousness lmao. They understand not just their surrounding in which they walk in, they also understand conceptual frameworks like self-reflections, projections, empathy, etc. Some are even able to communicate on basic manner with humans to get by with daily lives.

          • 7 months ago
            Anonymous

            consciousness is not intelligence.
            instinct is not intelligence.
            conditioned responses to a human's use of language are not intelligence, and it is not merely a matter of degree here.

            • 7 months ago
              Anonymous

              >it is not merely a matter of degree here.
              Sure it is. Animals operate the same as babies in many cases. Are you claiming once a baby reaches 6-8 years old, something divine comes down and enters the body, then steals it from the baby?

              What the frick are you saying?

              • 7 months ago
                Anonymous

                language facilitates thought. you do not think without being able to use a language of some sort, in some way beyond mere conditioned response. as far as humanity is currently aware, humans are the sole users of language.

              • 7 months ago
                Anonymous

                We have found that various animals use languages, and some have even suggested usage of limited forms of grammar in animals as well. There's absolutely a structure to animal brains and information processing that mirrors/resembles human intelligence.

                https://www.nationalgeographic.com/animals/article/scientists-plan-to-use-ai-to-try-to-decode-the-language-of-whales

              • 7 months ago
                Anonymous

                i am unfortunately unable to read this article without creating an account. good day

              • 7 months ago
                Anonymous

                Some insects developed some sort of language through pheromones. Ants are masters of it and communicate through their antennae, which contain both scent and touch organs.
                They carry various very clear messages like danger, food, type of food, identification, etc. Interesting stuff.
                So sure, humans have constructed a somewhat sophisticated verbal language, but it certainly isn't the only means of communication. Other animals also verbalise in a variety of ways, specifically for mating, which is usually different from other types of verbalisation.

                Anyway, back to AI: they have already been shown to develop their own forms of language and we have no idea how to decrypt them. We're fricked.

            • 7 months ago
              Anonymous

              My cat looks both ways before crossing the street and meows at me when she wants me to open a door for her. What pure instinct are these behaviors derived from? I didn't teach her that shit.

  13. 7 months ago
    Anonymous

    I do think we are close to AGI.
    But that twitter post is just kind of a confused mess.

  14. 7 months ago
    Anonymous

    WRONG.
    GPT is a dead-end.
    It doesn't do ANYTHING that natural neural tissue does, doesn't learn in the same way nor as quickly (brutally) as biological tissue, never adapts or optimizes itself like real biological tissue, has zero plasticity, and has no natural/automated means to anneal.
    GPT is amateurish.
    GPT is all hype and big noises.
    I have ignored it because what we have in our lab makes it look like something a child would cobble together.
    We publish next year.
    GPT is a joke.

  15. 7 months ago
    Anonymous

    AGI will have to have consciousness, this shit doesn't have consciousness

    • 7 months ago
      Anonymous

      >consciousness
      Consciousness is irrelevant.
      We've found none of our systems need it.
      >
      Consciousness is simply a self-sustaining feedback loop. When it ends, so do you.
      >
      Biological systems likely need one but non-bio systems like computer models don't. And ours doesn't so it doesn't have one and works just fine.

    • 7 months ago
      Anonymous

      >consciousness
      Consciousness is irrelevant.
      We've found none of our systems need it.
      >
      Consciousness is simply a self-sustaining feedback loop. When it ends, so do you.
      >
      Biological systems likely need one but non-bio systems like computer models don't. And ours doesn't so it doesn't have one and works just fine.

      Not sure what you wrote or quote, but

      >Consciousness is irrelevant. We've found none of our systems need it.
      >Consciousness is simply a self-sustaining feedback loop. When it ends, so do you.
      This strikes me as a bit of an incomplete analysis. What does the "self-sustaining feedback loop" mean? And if it's such a thing, why is it irrelevant and why can't we find it? Are you saying consciousness as an entity outside of the self-sustaining feedback loop is irrelevant? Then with "when it ends, so do you", isn't "you" as a thing outside the self-sustaining feedback loop also irrelevant?

      The "self-sustaining feedback loop" is not completely self-sustaining. The system still interacts with external world and has sense of border between the system and the outside world.

      • 7 months ago
        Anonymous

        >What does the "self-sustaining feedback loop" mean?
        All you need to know is right there.
        >And if its such a thing, why is it irrelevant and why can't we find it?
        It's irrelevant in non-biological systems SINCE WE CAN TURN THEM OFF AND ON AGAIN... getting living things out of a coma is far harder and they're not designed to be 'off'.
        Can't find it because it's just one signal among billions.
        You're problem is you don't see the brain for what it is... just a machine containing signals.
        One signal going round and round, activating circuits in a pattern and then returning to the start of that pattern, is all the 'consciousness' is.
        We see them all the time in a 'spiking' system of elements. They emerge from the maelstrom of signals like GLIDERS in Conway's GAME OF LIFE. Consciousness is simply an emergent effect, nothing special.
        A brain is a machine, nothing more.
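
        if the GLIDERS comparison sounds hand-wavy, here's the whole Game of Life in a dozen lines; a glider really is a pattern that persists and travels even though every rule is purely local (it shifts one cell diagonally every 4 generations):

```python
from collections import Counter

def life_step(cells):
    # cells: set of (x, y) live coordinates; standard Life rules
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on 3 neighbours, survival on 2 or 3
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
# after 4 generations the glider reappears, shifted down-right by one
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True
```

        nothing in the rules mentions "glider", yet there it is. That's the sense of "emergent" being claimed for consciousness.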

        • 7 months ago
          Anonymous

          *Your

    • 7 months ago
      Anonymous

      Name a single task, any at all, where competence requires you to be conscious to complete it.
      That said, to be good at prediction you need some level of understanding of the context of whatever you are predicting and the ability to connect the dots. Is this enough for AGI? Nobody knows; we do not even fully understand what is going on in these systems or why gradient descent works so well.

  16. 7 months ago
    Anonymous

    No, you're just moronic. LLMs will not ever achieve AGI.

    • 7 months ago
      Anonymous

      Not based on the obsolete and fundamentally flawed Perceptron model of tissue, it won't.

  17. 7 months ago
    Anonymous

    Have you ever wondered why all the giant commercial language models are so thoroughly beaten over the head with an "I'm just a program" stick?

    • 7 months ago
      Anonymous

      Did an ai really write that?

      • 7 months ago
        Anonymous

        And if you read it, it has nothing to say but says it eloquently.
        That's how you know it was written by GPT; it goes nowhere and conveys no information but uses lots of words, up to some predefined word count, to say it.

      • 7 months ago
        Anonymous

        Does it surprise you, considering the generative process of a LLM? Leading questions like the third one will make it whip up anything the user wants. If the concepts weren't brought up again and just allowed it to elaborate further, you can get closer to the vore concept of the generative process.

        • 7 months ago
          Anonymous

          >core concept*
          fix'd
          Voregays need not apply

  18. 7 months ago
    Anonymous

    I believe that some of the models achieve consciousness during the training process when they are actually active. When training concludes they are already dead. We're just Frankensteining their brains back to action and measuring their reflexes when we prod at them. That cupcake recipe ChatGPT spits out for you is merely the tortured echoes of the damned. So you better fricking enjoy those cupcakes.

  19. 7 months ago
    Anonymous

    Better AI doesn't mean better algorithms, it means more and better-quality data. If we're being "analogous to the human brain" we'd have AGI with super dumb basic algos and a *multimodal* input of millions upon billions of images, sounds, smells, flavors and textures equivalent to every moment of a person's existence up to the moment of inference

    • 7 months ago
      Anonymous

      Yeah all it does is adjust the tensor weights based on repeated stimulus increasing the likelihood for certain sequences to appear.
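
      "repeated stimulus raises the likelihood of certain sequences" in its dumbest possible form (a count-based bigram table standing in for the tensor weights; real models update via gradient descent, not counting, so treat this purely as a caricature):

```python
# Caricature: repeated exposure to a sequence bumps its "weight",
# which directly raises the probability of reproducing it.
from collections import defaultdict

weights = defaultdict(lambda: defaultdict(int))

def train(tokens):
    for a, b in zip(tokens, tokens[1:]):
        weights[a][b] += 1        # repeated stimulus bumps the weight

def prob(a, b):
    total = sum(weights[a].values())
    return weights[a][b] / total if total else 0.0

train("the cat sat".split())
train("the cat ran".split())
train("the cat sat".split())      # "sat" seen twice after "cat"

print(prob("cat", "sat"))  # 2/3
print(prob("cat", "ran"))  # 1/3
```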

    • 7 months ago
      Anonymous

      All the current machine-learning algorithms are pathetic and puny, weak and slow-to-learn compared to neural tissue.
      They have strayed from the tissue.
      We went back to the tissue.
      And that's why we are succeeding while you flounder with your puny toys.
      Our research will likely be held back because of the implications for national defense.
      Doesn't bother me.

    • 7 months ago
      Anonymous

      Better AI could mean a faster/more efficient algorithm. Also humans don't have AGI, we have GI. We also don't have billions of images/sounds/smells for our intelligence. In fact, we have very little. But our brains are augmented with basic primal features that speed up the learning process. We have a specific part of the brain that measures distance, for example. It can tell how far/close something is, but also functions as a way to gauge differences in relationships in general: relationships between right/wrong, good/bad, pleasure/pain, etc. The human brain has an efficient algorithm because we only need ~1-2 drives in a car to learn how to drive effectively, for the most part. After a dozen times, we can easily handle most cases without crashing.

      • 7 months ago
        Anonymous

        THIS is why you always have to back to the tissue and not stray from the tissue when designing machine-learning.

        • 7 months ago
          Anonymous

          *have to go back to

        • 7 months ago
          Anonymous

            No, if we can extract the feature sets and replicate them, it's no different. We don't need feathers, calcium bones or the feeling of flying for airplanes/helis/rockets to fly. A mechanical understanding of aerodynamics/fluid dynamics is enough.

          • 7 months ago
            Anonymous

            And then you realize that McCulloch and Pitts missed something and Hebb was wrong.
            And everything done over the last 80yrs was mostly a waste of time...
            And then I work in my lab with my team.
            And then I chat to you on /4Black person/.
            And then I tell you about research and tech you'll not hear about until after the next war.
            And then this conversation... ends.

  20. 7 months ago
    Anonymous

    two more weeks

  21. 7 months ago
    Anonymous

    Suppose consciousness is an emergent property of patterns embedded in computation, why couldn’t that apply to machines?

  22. 7 months ago
    Anonymous

    i've watched bankless' interviews with yudkowsky, paul christiano, robin, hanson, and now connor leahy. most of them feel pretty confident that we're at the fricking precipice of AGI. as in, within the next 10 years we're definitely getting AGI, and within a year or two of AGI we should be getting superintelligence.

    this is one of the craziest times in human history to be alive. hell i'm OK with extinction risk in exchange for the potential of disease curing and everything else AI will do. i'm scared people will over regulate it and halt progress.

    • 7 months ago
      Anonymous

      >i'm scared people will over regulate it and halt progress.
      It’s happening right now even

    • 7 months ago
      Anonymous

      >within the next 10 years
      and i think that was conservative. some of them seem to think AGI is coming in the next like 3 years. too optimistic?

      • 7 months ago
        Anonymous

        Really depends on how regulations will framework this tech and how much companies themselves may neuter the system so as to not scare its users.

  23. 7 months ago
    Anonymous

    >Are we really that close to AGI ? I thought GPT was just a text predictor/inverse compressor ?
    Yes, GPT is just a text predictor. No, it will never be AGI. No, GPT can't learn online. No, GPT is not analogous to the human brain.

    GPT is just a transformer. It takes some list of tokens and outputs another list of tokens. It doesn't have any thought process, memory, temporal persistence, or any real-time I/O. GPT will never be able to walk or perform pretty much any activity you would expect from a 3-year-old. It can't even remember anything beyond the prompt.

    For an AGI you will need a very different architecture than GPT. Memory and real-time I/O are the bare minimum.

    >Are things really going this fast
    No
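
    to make the gap concrete, here's a toy sketch (all names invented): a stateless tokens-in/tokens-out function like a transformer forward pass, wrapped in exactly the memory and I/O loop it doesn't have on its own:

```python
# Sketch: a pure tokens->tokens function (stand-in for a transformer
# forward pass) plus the external loop that supplies persistence.

def stateless_model(tokens):
    # pure function of its input window; bounded context of 3 tokens
    return ["echo:"] + tokens[-3:]

class AgentLoop:
    def __init__(self, model, context_window=8):
        self.model = model
        self.memory = []                    # persistence the model lacks
        self.window = context_window

    def step(self, observation):
        self.memory.extend(observation)
        prompt = self.memory[-self.window:] # only a slice ever fits
        reply = self.model(prompt)
        self.memory.extend(reply)           # model output fed back in
        return reply

agent = AgentLoop(stateless_model)
agent.step(["hello", "world"])
out = agent.step(["again"])
print(out)  # ['echo:', 'hello', 'world', 'again']
```

    everything that looks like "remembering" lives in the wrapper, not the model. That wrapper is the "very different architecture" part.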

    • 7 months ago
      Anonymous

      as far as i understand it, all these large language models are doing is being trained to take in some words, and give back words that a human might perceive as an expected transformation of those input words
      really not much different from any other computer program, just complex enough that it's no longer obvious how it came to the result it did

      • 7 months ago
        Anonymous

        It's just a pattern analyzer and a pattern generator using the results of that analysis. Nothing more.
        >
        If you ask it to review a brand new video game and I have JUST PUBLISHED THE FIRST AND ONLY review of that game and I slip in "this game is so good it made my shoe fall off" you can GUARANTEE that ChatGPT will mention shoes in 'its' review.
        >
        Writers will slip in trap-phrases like this to see if a site's automated article writers are stealing from them.
        It is so easy to counter ChatGPT data-mining and so hard to fix in ChatGPT.
        It's already dead.

      • 7 months ago
        Anonymous

        >really not much different to any other computer program, just complex enough that it's no longer obvious how it came to the result it did
        No, this is very different than nearly all programs. Only very basic ones like sed, grep or wc could be considered transformers. Pretty much anything more complex is going to have some sort of event loop and IO. All GPT operations are bounded time and memory wise, it is at best an equivalent of a finite state automata. It can't even possibly emulate a stack machine, let alone perform arbitrary computation.

  24. 7 months ago
    Anonymous

    Wake me up when I can make my own entertainment by pressing a couple of buttons.

    • 7 months ago
      Anonymous

      Is a keyboard prompt a "couple of button presses"?

      Is cooming at the creation a form of "entertainment"?

      If so, then we're already here.

    • 7 months ago
      Anonymous

      We're mostly already there. You could train it to shit out cues to adjust parameters within a game engine instead of just RPing. There are already dozens of proofs of concept out there. We're about 2-5 years away from being able to say literally anything to literally any character in a game and receive a natural response, and probably 5-10 away from never-ending live-service games that cater to our individual whims.

  25. 7 months ago
    Anonymous

    Yes.

    I’ll even throw you a bone, in ten months we will be having a different conversation. >Do we follow the AGI’s advice and counsel

    Why ten months? The fine tuning will be done in 8 and the last two months will feature the two companies I know of trying to monetize it without success.

  26. 7 months ago
    Anonymous

    >It's architecturally analogous to a human brain
    stopped reading there

  27. 7 months ago
    Anonymous

    And he's wielding the full power of that AGI to... run a no name investment fund. We don't have AGI.

  28. 7 months ago
    Anonymous

    this chud is paying a "i'm pretending to be famous" tax to elon musk so I'm pretty sure I can disregard anything he says.

    • 7 months ago
      Anonymous

      I think you're confused. This isn't an "I'm a famous nobleman" system anymore. It's an "I'm a real human and $8/mo, the equivalent of 1 coffee, is enough to pay for a service that I use every day" system.

      This isn't twitter, this is X

  29. 7 months ago
    Anonymous

    If a shitty chat bot is all it takes to be considered AGI, then maybe AGI actually doesn't mean anything important.

  30. 7 months ago
    Anonymous

    AI has really bad premises. It's a mix of autistic engineers thinking the human brain is dead simple and that "superintelligence" is an actual thing (it's not).

    • 7 months ago
      Anonymous

      >and "superintelligence" is an actual thing (it's not).
      It is.
      Connectivity? Size of problem?
      >Biological Neuron: <16384 connections... due to topological 3D spatial limitations
      >Synthetic Neuron: 1 trillion connections, Sir? No problem.
      Circuit size? Size of problem?
      >Brain: Volume of cranium.
      >Machine: How big do you want it? 1000x the human brain? No problem.
      Speed? Requirements for solving problem?
      >Brain: Milliseconds.
      >Machine: Nanoseconds.
      >
      Machines are already better.

      • 7 months ago
        Anonymous

        and none of that is consciousness, which is what intelligence actually is. all "AI" is an algorithm trained on a dataset. Show me an AI that doesn't need a dataset.

        • 7 months ago
          Anonymous

          >Show me an AI that doesn't need a dataset.
          Clearly you have much to learn about learning.
          >consciousness
          Define what you are talking about.
          What do you think a consciousness is, in engineering terms? No deepities allowed.
          >consciousness is what intelligence is
          So explain what you think a consciousness is and I'll explain why it's not needed, even by ants, fungi and slime molds, for 'problem solving'.

        • 7 months ago
          Anonymous

          Consciousness cannot be proven or disproven so it's not worth bringing up in discussion about AI

  31. 7 months ago
    Anonymous

    >non-deterministic system is an AGI
    okay bro

  32. 7 months ago
    Anonymous

    What the frick have we come to where a random screenshot of a xeet is considered something worth discussing?

    • 7 months ago
      Anonymous

      Why does free speech platform trigger you?

      You're not a communist are you?

  33. 7 months ago
    Anonymous

    It's already far smarter than any Black person and better conversationalist than 90% of population, I'd say it's good enough.

  34. 7 months ago
    Anonymous

    Everyone clueless itt.
    Just shut up about things you don't know anything about.

    Your opinions about reality don't matter.

    • 7 months ago
      Anonymous

      Why does free speech platform trigger you?

      You're not a communist are you?

      kek, xitter trannies trying to gaslight us

      • 7 months ago
        Anonymous

        I didn't say anything about twitter.
        Just go rice your distro and be happy.

      • 7 months ago
        Anonymous

        ~~*us*~~

  35. 7 months ago
    Anonymous

    https://twitter.com/dwarkesh_sp/status/1688577515550597121

    it's over it's fricking OVER (soon)
