>OpenAI already has AGI

How likely is it?

GPT-4 finished pre-training in July 2022. Everyone else is just now catching up. What the frick do they have behind closed doors that they're not showing us? Because there is absolutely something back there that's much better than GPT-4.

They released Sora like it was a normal technical blog post. I feel like their plan is to just drip-feed news and push the Overton window until, 8-12 months from now, they just say "Yeah, we figured out AGI a while ago".

  1. 2 months ago
    Anonymous

    Given that Altman is begging for absurd amounts of money to manufacture gpus I'd say they got close to the peak with gpt4, at least for now. He doesn't seem the type to actually play it safe very much.

  2. 2 months ago
    Anonymous

    Listen, if you want me to speculate on the market strategies of a very successful company, you need to pay some money for that. Otherwise, go frick yourself.

  3. 2 months ago
    Anonymous

    AGI is literally impossible and "AI" is just predictive text generations. No matter how much money you throw at a model it will never be sentient

    • 2 months ago
      Anonymous

      >implying you need anything close to "sentience" to put millions of people out of work.
      Even if there were zero new developments you could easily take the "predictive text generations" and basically bootstrap every human out of the customer-service industry.

      • 2 months ago
        Anonymous

        wrong wrong WRONG
        moron homosexual

        • 2 months ago
          Anonymous

          GPT 5 hands typed this post.

          We're on to you.

        • 2 months ago
          Anonymous

          >can't argue so resorts to seething rage name calling
          I accept your concession

    • 2 months ago
      Anonymous

      Sentience has nothing to do with AGI.

    • 2 months ago
      Anonymous

      It doesn't need to be sentient to be sapient

    • 2 months ago
      Anonymous

      https://i.imgur.com/vr0KJAO.png

      >implying you need anything close to "sentience" to put millions of people out of work.
      Even if there were zero new developments you could easily take the "predictive text generations" and basically bootstrap every human out of the customer-service industry.

      Things people mostly do at work probably don't require sentience. Whether it is actually feasible or not, the OpenAI people might well believe at this point that with enough computing power and training data, most office jobs could be automated at least to the degree that the result only has to be checked by a human worker.

    • 2 months ago
      Anonymous

      no problem
      simulation of intelligence/sentience is all that's needed
      of course, the goal is total annihilation of all life and it's a israelites' pipedream

    • 2 months ago
      Anonymous

      (You) will never be sentient.

  4. 2 months ago
    Anonymous

    Not very. They are hitting the wall when it comes to scaling things further up.

  5. 2 months ago
    Anonymous

    agi is likely still off by a bit; however, working to combine various models and figuring out feedback loops that don't take entire re-trains must be something they are working on, which would mimic entry-level agi.
    A human brain is really just a lot of different models constantly updating and interacting.

    • 2 months ago
      Anonymous

      >A human brain is really just a lot of different models constantly updating and interacting.
      This reminds me of how people would compare brains to computers.
      >brains are just cpus
      >short-memory is just like the cache!
      >processing memory is just like ram!
      >long-term memories are just like drives!
      yeah.

  6. 2 months ago
    Anonymous

    0. It's all marketing bullshit and they're just trying to keep the current scam going as long as they can. The underlying technology isn't possible to scale to the holy grail of "AGI".

    • 2 months ago
      Anonymous

      >0. It's all marketing bullshit and they're just trying to keep the current scam going as long as they can. The underlying technology isn't possible to scale to the holy grail of "AGI".
      Anon! I would like to hear your theory. No, actually I do. What makes the underlying hardware incapable of running AGI? Is it something to do with 1/0 architecture?

      >agi is likely still off by a bit; however, working to combine various models and figuring out feedback loops that don't take entire re-trains must be something they are working on, which would mimic entry-level agi.
      >A human brain is really just a lot of different models constantly updating and interacting.

      >A human brain is really just a lot of different models constantly updating and interacting.
      I hold a hammer. Where have all the screws gone?

      >>A human brain is really just a lot of different models constantly updating and interacting.
      >This reminds me of how people would compare brains to computers.
      >>brains are just cpus
      >>short-memory is just like the cache!
      >>processing memory is just like ram!
      >>long-term memories are just like drives!
      >yeah.

      >This reminds me of how people would compare brains to computers.
      Nested metaphors are useful, but they're just that - metaphors. It's easy to see them as analogues when they're presented in such a fashion.

      • 2 months ago
        Anonymous

        >Anon! I would like to hear your theory. No, actually I do. What makes the underlying hardware incapable of running AGI? Is it something to do with 1/0 architecture?
        Given training and operating costs compared to the prices they charge, GPT4 is losing them millions of dollars every single day. Their profits come from all the investors they're attracting by having the world's best LLM.
        They're at the point where any meaningful improvement would be too expensive to be worth it. Usage prices would have to be so high that nobody would use it, even if it were a bit smarter than GPT4.

        • 2 months ago
          Anonymous

          >GPT4 is losing them millions of dollars every single day. Their profits come from all the investors they're attracting by having the world's best LLM.
          Is this information publicly available?

          • 2 months ago
            Anonymous

            Not verbatim, no. But if it is indeed an 8x220B model like George Hotz leaked, then judging by the API costs it has to be so.
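As a rough sanity check on that inference, here's a back-of-envelope sketch. Every number in it is an assumption for illustration (the rumored 8x220B experts with 2 routed per token, ~2 FLOPs per active parameter per token, an A100-class GPU at ~150 TFLOP/s effective and ~$2/hour), not anything OpenAI has confirmed:

```python
# Back-of-envelope: raw compute cost per 1K tokens for a rumored 8x220B
# mixture-of-experts model. Every constant below is an assumption for
# illustration, not a confirmed OpenAI figure.

# Rumored architecture: 8 experts of 220B parameters, 2 routed per token.
PARAMS_PER_EXPERT = 220e9
ACTIVE_EXPERTS = 2
active_params = ACTIVE_EXPERTS * PARAMS_PER_EXPERT  # ~440B active per token

# Rule of thumb: ~2 FLOPs per active parameter per generated token
# (forward pass only).
flops_per_token = 2.0 * active_params

# Assumed A100-class GPU: ~150 TFLOP/s effective throughput at ~$2/hour.
GPU_FLOPS_PER_S = 150e12
GPU_COST_PER_HOUR = 2.0

tokens_per_gpu_hour = GPU_FLOPS_PER_S * 3600 / flops_per_token
cost_per_1k_tokens = 1000 * GPU_COST_PER_HOUR / tokens_per_gpu_hour

print(f"raw compute floor: ~${cost_per_1k_tokens:.4f} per 1K tokens")
```

The figure this prints is only a compute floor; real serving adds memory-bandwidth limits, poor batch utilization, and amortized training cost, so the actual cost per token can be far higher than this estimate.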

  7. 2 months ago
    Anonymous

    AI already partially designs AI chips, and has for years.
    Dumb AI tools will design smarter AI tools until they give birth to AGI. We won't know when it happens, because we won't recognise what we're looking at at the time.

  8. 2 months ago
    Anonymous

    I'd say they're close. They have candidate algorithms and ideas for systems, but need lots of money/compute to test everything.

  9. 2 months ago
    Anonymous

    What the npc media presents as "AI" is not an intelligence. So we are not close to it, because we have not advanced at all towards it in the last 100 years.
