What is GPT-5 going to be like?

  1. 9 months ago
    Anonymous

    Gay and shit and ass

  2. 9 months ago
    Anonymous

    huge like yo momma

  3. 9 months ago
    Anonymous

    1337 models
    69 gazillion gorillion params

  4. 9 months ago
    Anonymous

    I finally got GPT-4 API access and it's preddy good. Also preddy expensive!

    • 9 months ago
      Anonymous

      Now that I think about it, that's probably why they said they're not working on GPT-5. It's probably way too expensive, and I remember hearing that OAI is bleeding money like crazy right now.

    • 9 months ago
      Anonymous

      good morning sir

  5. 9 months ago
    Anonymous

    >tfw you realize human sentience is 48 models ~6 quadrillion parameters

    • 9 months ago
      Anonymous

      So AI gf soon?

      • 9 months ago
        Anonymous

        judgement day soon

    • 9 months ago
      Anonymous

      The human brain's topology changes at runtime; an LLM's doesn't.
      Even with the same number of parameters, we still have a huge advantage.

      Also, you need a whole neural network to simulate a single human neuron (rough numbers sketched after the thread).

    • 9 months ago
      Anonymous

      nice try, Yudkowski

    • 9 months ago
      Anonymous

      wake me when any machine can match our context length of 75 years

  6. 9 months ago
    Anonymous

    >8 models
    >176T
    Bait image. By the time the MoE info came out, no one still believed the 100T+ meme. GPT-4 is "sort of but not really" ~2T: as an MoE, only a fraction of those parameters is active per token (see the routing sketch after the thread). I don't think the next step is a more capable AI, but a cheaper and faster GPT-4-class AI.

  7. 9 months ago
    Anonymous

    >What is GPT-5 going to be like?

  8. 9 months ago
    Anonymous

    Why don't BOT and /mlp/ unite to make a GPT competitor? We clearly need a neutral (or counterbiased) model if you don't want the Second Gilded Age to be ruled by pozzed AI.

  9. 9 months ago
    Anonymous

    Worse, more censored and more politically correct. I advise you to train your own LLM.

  10. 9 months ago
    Anonymous

    The amount of usable information is the same as in GPT-4; the extra size comes from bigger internal filters.

  11. 9 months ago
    Anonymous

    >one AI gf runs on arch
    >other AI gf runs on debian
    Perfect setup

  12. 9 months ago
    Anonymous

    As an AI model, I can't do shit, captain

  13. 9 months ago
    Anonymous

    They've hit hardware limitations with GPT-4. They're also run by soulless hand-wringers, so they'll never do what's necessary to take it to the next level. Local LLMs are where the innovation is.

  14. 9 months ago
    Anonymous

    It doesn't fricking matter. It's going to be censored to hell and utterly fricking unusable if it ever comes out.
    It's sad that GPT-4 got so lobotomized by ethicists fearing it would say TND or something that a 30B model can blow it completely out of the water.

  15. 9 months ago
    Anonymous

    Say hi to Skynet

    • 9 months ago
      Anonymous

      dis

  16. 9 months ago
    Anonymous

    Probably worse than early ChatGPT

  17. 9 months ago
    Anonymous

    Stronger proto-AGI, like GPT-4
    GPT-6 will be AGI

  18. 9 months ago
    Anonymous

    5 will be better at coding but expensive to use. It probably could unironically make a game by itself with few bugs or errors, but it would burn through tokens and end up not really being as useful as 4 + a human (rough token arithmetic after the thread).

    • 9 months ago
      Anonymous

      >It probably could unironically make a game by itself
      That's not impressive; most modern games are made from simple node-tree templates

  19. 9 months ago
    Anonymous

    It still won't be AI, at least not in the sense the term has accrued over decades of science fiction. The human brain is far more complex than an algorithm, and, moreover, intelligence and thought aren't properties of a brain in isolation.
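
A note on the numbers in post 5: the "~6 quadrillion parameters" greentext and the "whole network per neuron" reply can be sanity-checked with a back-of-envelope estimate. The biological figures below are rough, commonly cited values (~86 billion neurons, a few thousand synapses each), and the per-neuron surrogate-network size is an assumption loosely inspired by work showing a multi-layer net is needed to fit a single cortical neuron's input/output behavior; nothing here is a measurement.

    # Back-of-envelope "brain parameters" vs. a large LLM.
    # All numbers are rough estimates or outright assumptions, flagged inline.
    NEURONS = 86e9                   # ~86 billion neurons (commonly cited estimate)
    SYNAPSES_PER_NEURON = 7_000      # order-of-magnitude assumption
    PARAMS_PER_NEURON_MODEL = 2_000  # assumed size of a surrogate net per neuron
    LLM_PARAMS = 2e12                # the ~2T rumor from the thread (unverified)

    synapse_params = NEURONS * SYNAPSES_PER_NEURON        # treat each synapse as one weight
    surrogate_params = NEURONS * PARAMS_PER_NEURON_MODEL  # extra params for each neuron's nonlinearity
    brain_estimate = synapse_params + surrogate_params

    print(f"synapse 'weights'        : {synapse_params:.1e}")
    print(f"per-neuron surrogate nets: {surrogate_params:.1e}")
    print(f"rough brain total        : {brain_estimate:.1e}")
    print(f"ratio to a ~2T-param LLM : {brain_estimate / LLM_PARAMS:.0f}x")

Run as written this lands around 8x10^14, an order of magnitude below the post's 6 quadrillion, yet still a few hundred times the rumored GPT-4 scale, so the reply's "huge advantage" point survives even with conservative numbers.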
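
On post 6: the ">8 models" line refers to the rumor that GPT-4 is a mixture-of-experts, where the headline parameter count overstates what actually runs per token. The real architecture is not public, so the following is a generic top-k MoE routing sketch with toy sizes, meant only to show why "sort of but not really ~2T" is a fair description.

    import numpy as np

    # Toy top-k mixture-of-experts layer. Sizes are arbitrary and have
    # nothing to do with any real GPT-4 internals, which are not public.
    rng = np.random.default_rng(0)
    d_model, d_ff, n_experts, top_k = 64, 256, 8, 2

    # Each expert is a small 2-layer ReLU MLP; the router is a single linear map.
    experts = [
        (rng.standard_normal((d_model, d_ff)) * 0.02,
         rng.standard_normal((d_ff, d_model)) * 0.02)
        for _ in range(n_experts)
    ]
    router = rng.standard_normal((d_model, n_experts)) * 0.02

    def moe_forward(x):
        """Route one token vector to its top-k experts and mix their outputs."""
        logits = x @ router
        chosen = np.argsort(logits)[-top_k:]      # indices of the k highest-scoring experts
        weights = np.exp(logits[chosen] - logits[chosen].max())
        weights /= weights.sum()                  # softmax over the chosen experts only
        out = np.zeros_like(x)
        for w, idx in zip(weights, chosen):
            w_in, w_out = experts[idx]
            out += w * (np.maximum(x @ w_in, 0.0) @ w_out)
        return out

    y = moe_forward(rng.standard_normal(d_model))

    expert_params = n_experts * 2 * d_model * d_ff
    active_params = top_k * 2 * d_model * d_ff
    print(f"total expert params : {expert_params}")
    print(f"active per token    : {active_params}")   # top_k / n_experts of the total

With 8 experts and 2 active per token, only a quarter of the expert weights touch any given token, which is why a "~2T total" MoE is closer to a few-hundred-billion-parameter dense model in per-token compute.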
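
On post 18: "burn through tokens" is just per-token pricing arithmetic. Every number below is a placeholder (there is obviously no GPT-5 price list), so read it as a sketch of the calculation, not a prediction.

    # Rough cost of having a model write a small game end to end.
    # All figures are assumptions for illustration; none are real prices.
    TOKENS_PER_LINE = 12         # assumed average tokens per generated line of code
    GAME_LINES = 50_000          # assumed size of a small game's codebase
    RETRY_FACTOR = 3             # assume each line gets regenerated ~3x (bugs, refactors)
    CONTEXT_REREAD_RATIO = 10    # assume ~10 context tokens re-sent per output token
    PRICE_PER_1K_OUTPUT = 0.12   # hypothetical $ per 1K output tokens
    PRICE_PER_1K_INPUT = 0.06    # hypothetical $ per 1K input tokens

    output_tokens = GAME_LINES * TOKENS_PER_LINE * RETRY_FACTOR
    input_tokens = output_tokens * CONTEXT_REREAD_RATIO
    cost = (output_tokens / 1000) * PRICE_PER_1K_OUTPUT \
         + (input_tokens / 1000) * PRICE_PER_1K_INPUT

    print(f"output tokens: {output_tokens:,}")
    print(f"input tokens : {input_tokens:,}")
    print(f"rough cost   : ${cost:,.0f}")

Under these assumptions the input side (re-reading the growing codebase as context on every call) dominates the bill, which fits the post's point that a human plus GPT-4 re-sends far less context than a model grinding through a whole project unattended.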
