FUTURE GPT MODELS LEAKED

It's an exponential curve, like the actually intelligent people predicted.

You WILL be replaced.

Pseuds and midwits, feel free to leave your copes here.


  1. 3 weeks ago
    Anonymous
  2. 3 weeks ago
    Anonymous

    GPT-4 is a limit for transformers.
    It's over anon.

  3. 3 weeks ago
    Anonymous

    kek
    gpt shills hard at work.
    gotta squeeze that investor money.

  4. 3 weeks ago
    Anonymous

    >le statistic line...is going UP?

    • 3 weeks ago
      Anonymous
  5. 3 weeks ago
    Anonymous

    leak thread? ill contribute as well i guess. just got this one from a trusted source within OpenAI.

    • 3 weeks ago
      Anonymous

      OMG, that's insane! They're projecting upward trend in their corpo poof presentation to staff?! Why how WTF!! AAAHHHHH! FFFFFFFFFFFFFFFF!!

  6. 3 weeks ago
    Anonymous

    what's the regression coefficient on that curve?
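If anyone actually wants to answer that, the standard move is a log-linear fit: for y = a·e^(b·x), ln(y) is linear in x, and the slope of that line is the growth coefficient b. A minimal sketch in numpy; every score below is invented, since the chart has no labelled y-axis:

```python
import numpy as np

# Invented "model intelligence" scores for four releases. The leaked chart
# has no labelled y-axis, so these numbers are purely illustrative.
years = np.array([2018.0, 2019.0, 2020.0, 2023.0])   # GPT-1 .. GPT-4
scores = np.exp([0.0, 1.0, 2.0, 4.0])                # fabricated, roughly exponential

# Fit y = a * exp(b * x) by ordinary least squares on log(y).
b, log_a = np.polyfit(years - years[0], np.log(scores), 1)
print(f"growth coefficient b ~ {b:.3f} per year")
```

With only four points the fit is trivially easy to overfit, which is rather the point of the thread.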

  7. 3 weeks ago
    Anonymous

    I'm on my phone but it looks like it levels off and reaches a utilization plateau before 2030.

  8. 3 weeks ago
    Anonymous

    here's the actual curve without any investor hype

    • 3 weeks ago
      Anonymous

      the jump between gpt3 and the bare basic gpt4 is already like night and day

      • 3 weeks ago
        Anonymous

        already feeding these models the whole internet, we've hit diminishing returns

        • 3 weeks ago
          Anonymous

          Nope. Pretty much every researcher agrees that even small models still have scaling benefits from more compute. https://transformer-circuits.pub/2023/monosemantic-features/index.html Shows the kind of responses you can expect out of just a 1 layer model with more compute, not a 7B or 70B or 2T

          • 3 weeks ago
            Anonymous

            >https://transformer-circuits.pub/2023/monosemantic-features/index.html
            Holy shit this is insane. Just imagine a visualization like this for gpt4 or the 2030 gpt. Each dot in the next post corresponds to an entry.

            • 3 weeks ago
              Anonymous
  9. 3 weeks ago
    Anonymous

    no
    https://arxiv.org/abs/2404.04125

    distilled version: https://youtu.be/dDUC-LqVrPU

  10. 3 weeks ago
    Anonymous

    The only thing exponential is the number of inputs they're using for the models, not the resulting "intelligence" of the model.

    • 3 weeks ago
      Anonymous

      Yeah, like, if the whole idea is that feeding this shit data will be enough to create intelligence, but the entire English canon, major scientific works and all newspapers somehow isn't enough, why the frick do they think letting it read Twitter is gonna help?

      • 3 weeks ago
        Anonymous

        The problem is never the data. It's how these AIs are built, the models by which they create intelligence.
        They literally built this intelligence on how we teach toddlers to recognize objects and match patterns. The difference is that humans, well, most of them, naturally learn as they grow. Sure, some information is imprecise, misleading or wrong. But people are capable of correcting themselves.
        If they want exponential growth, they need models that can learn exponentially, and speed is not the issue here, but rather the underlying models.

        • 3 weeks ago
          Anonymous

          >But people are capable of correcting themselves.
          Capable, but seldom willing

          • 3 weeks ago
            Anonymous

            True, but that's a separate issue.

            • 3 weeks ago
              Anonymous

              Is it? AIs are treated as spokesmen for their respective companies. It's not tech related, but it's still a factor

              • 3 weeks ago
                Anonymous

                >treated as spokesmen
                The truth is companies are run by antisocial autists that don't want you calling their public phone line for goddamn anything, so you're talking to an auto-receptionist.
                This is literally the only real use case in everyday life.

        • 3 weeks ago
          Anonymous

          you are very close, but the midwits here don't get it
          if you give the average baby and the average adult the same data, you get very different results
          nobody is ready

  11. 3 weeks ago
    Anonymous

    Nothing burger

  12. 3 weeks ago
    Anonymous

    >GPT-4o
    >gives me a result that is wrong
    >explain to the AI where it got it wrong
    >sorry anon here is the real result

    sigh

    • 3 weeks ago
      Anonymous

      Hey gpt, lets do a riddle.
      >Okay anon, what walks on 4 legs in the morning, two legs during the day, and 3 at night?
      Time
      >That's right! Time is the correct answer.
      No it's not, the correct answer is a man.
      >Apologies, a man is the correct answer.
      Okay lets do another
      >What is black and white and red all over?
      A coffin.
      >Correct! A coffin is the right answer.
      No it isn't
      > ...
      > ...
      > Apologies, it seems like it wasn't the correct answer

  13. 3 weeks ago
    Anonymous

    well yea. we are at the beginning of AI GPU and CPUs. once the big new Nvidia cards hit we'll see what the future holds for these models

  14. 3 weeks ago
    Anonymous

    I tried the glue on the pizza. Wasn't half bad

    • 3 weeks ago
      Anonymous

      It was great, couldn't keep my hands off it!

  15. 3 weeks ago
    Anonymous

    gpt-4o still can't solve any programming problem
    doesn't know github actions
    doesn't know c++
    doesn't know conan 2
    Maybe one thing it kinda knows is python. Or maybe everyone is just a noob, but for me it just slows me down.

  16. 3 weeks ago
    Anonymous

    >It's an exponential curve
    That's right goyim!! Give us 7 trillion dollars b***h

  17. 3 weeks ago
    Anonymous

    fun sora and voice cloning demo at the end of the video here:
    https://vimeo.com/949419199

    • 3 weeks ago
      Anonymous

      Thanks for sharing, great presentation

  18. 3 weeks ago
    Anonymous

    So what's the unit for the "Model intelligence" axis?

    • 3 weeks ago
      Anonymous

      >BOT now requires registration
      >expecting any of these pictures to have labelled axes
      Do you even know where you are?

  19. 3 weeks ago
    Anonymous

    >trap people in spyware dragnet
    >harvest all their data
    >roll out algorithms running on supercomputers to profile, categorize, and read through it all
    >by the time people realize an ai boom is happening (now) it's already too late to become privacy oriented since they already know enough about you
    lovely times to be living in right now

    • 3 weeks ago
      Anonymous

      burn the data centers

  20. 3 weeks ago
    Anonymous

    The way a baby learns is beyond the understanding of science. For example, a baby simply acquires language. This fact is known, but the mechanisms by which this incredible learning (or acquisition) is achieved are not understood at all.

    • 3 weeks ago
      Anonymous

      >the mechanisms by which this incredible learning (or acquisition) is achieved are not understood at all.
      lmao

  21. 3 weeks ago
    Anonymous

    unlabelled y-axis wooo

  22. 3 weeks ago
    Anonymous

    oh my gawwwwwd im seeing sparks of AGI!!!

    • 3 weeks ago
      Anonymous

      3.5 was already more intelligent than the average shop clerk.
      >muh Chinese room
      There's a video going around asking protesters if they knew what the phrases they were chanting meant. They didn't. The meme isn't the reality of artificial intelligence, it's the overestimated prevalence of human intelligence.

      • 3 weeks ago
        Anonymous

        OH my god!!! that's so exciting!!

      • 3 weeks ago
        Anonymous

        Asking people unexpected questions in an antagonistic situation and not getting a satisfactory answer will never disprove human intelligence, bad-faith anon.

        • 3 weeks ago
          Anonymous

          >be college student
          >chant "from the river to the sea"
          >unable to identify the river or sea in question
          I purposefully didn't cite the videos where New Yorkers can't answer how many moons the earth has because those people aren't self-described intelligentsia. But if you wanna go with demeanor, for whatever reason... Those videos have a super friendly guy asking simpler questions.

  23. 3 weeks ago
    Anonymous

    Who cares when it's woke as frick and can't even answer basic shit?

  24. 3 weeks ago
    Anonymous

    Nice, another fearmongering thread
    have a nice day OP

    • 3 weeks ago
      Anonymous

      I bet tech stocks are down today

  25. 3 weeks ago
    Anonymous

    You can't replace a neet.

  26. 3 weeks ago
    Anonymous

    There's so much money and incentive behind the AI push that it's all a matter of whether AGI is actually physically possible or not, so it'll either happen or it won't. Two things come to mind:
    1) No technology improves at the same rate forever, let alone endless exponential growth and I don't see why AI should be any different.
    2) I really dislike how tech companies are aggressively pushing AI on their customer bases and I'm avoiding using them as much as possible. They're just trying to train their models on the cheap.

  27. 3 weeks ago
    Anonymous

    GPT-4o is fast but dumber than GPT-4. I don't expect another model this year from OpenAI.

  28. 3 weeks ago
    Anonymous

    Am I the only one who finds all variants of ChatGPT fricking useless for any real productivity tech work? I've tried using it for coding, writing emails, rewording docs, you name it. It always does a poor half-assed job and I spend more time rewording the gobbledygook it spits out than if I did it from scratch. I'm convinced there's some kind of mass hysteria causing there to be so much hype around this.

    • 3 weeks ago
      Anonymous

      It's about as effective as a jeet, maybe it'll make at least Indians obsolete

    • 3 weeks ago
      Anonymous

      OpenAI keeps nerfing theirs right before CoPilot starts performing tasks that 3.5 used to be good at. Vertex is pretty damned good at shit like normalization, denormalization, DDL generation, but Gemini is crappy at chatting.
      That said, productivity tasks generally require CoT or 1-shot/few-shot prompting. You get better results when you talk to it like a dopey college intern and not like it's an experienced developer.
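The few-shot point above can be sketched as a plain message list: prepend a couple of worked examples so the model imitates the pattern instead of answering cold. No real API is called here, and the column-renaming task and strings are invented for illustration:

```python
# Minimal few-shot prompt assembly sketch (hypothetical task: renaming
# columns to snake_case). Each (user, assistant) pair is a worked example
# the model sees before the real query.
def build_few_shot(system, examples, query):
    """Assemble a chat-style message list with worked examples up front."""
    messages = [{"role": "system", "content": system}]
    for user_msg, assistant_msg in examples:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": query})
    return messages

msgs = build_few_shot(
    "You normalize column names to snake_case.",
    [("OrderDate", "order_date"), ("CustomerID", "customer_id")],
    "ShippingAddress",
)
```

The same structure works whether you hand it to an API or paste it into a chat window; the examples are what does the "dopey college intern" hand-holding.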

    • 3 weeks ago
      Anonymous

      It's a meme blown out of proportion by shills that have something to gain from it just like crypto was. The real life applications of machine learning are actually pretty limited and we're already plateauing.

      • 3 weeks ago
        Anonymous

        AI is way more useful than Crypto. I don't use Crypto, but I do use ChatGPT, as it's better than Google and Stack Overflow. In fact I used it just recently for converting a Parquet dataset to MongoDB; a task that would've taken me an hour if I had to look up all the libraries, code, etc., was done in 15 minutes.

    • 3 weeks ago
      Anonymous

      chatgpt and bing get worse and less useful every month

    • 3 weeks ago
      Anonymous

      its helped me with basic math and programming questions, give it time its still in its infancy

      • 3 weeks ago
        Anonymous

        >its still in its infancy
        It really isn't

    • 3 weeks ago
      Anonymous

      Maybe try phrasing your problems a bit differently and experiment?

    • 3 weeks ago
      Anonymous

      I find it more useful than that but it's not world-changing. Just makes some extremely annoying tedious shit faster but there wasn't much of that to start with. My favorite is using it to generate entity classes for ef core from sql definitions of tables, but I think ORMs are shit anyway so it's solving a contrived problem.
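For the curious, a toy sketch of that SQL-to-entity workflow. The regex parsing and type map are deliberately minimal, and every name here is made up, not what any real generator (or the anon) uses:

```python
import re

# Hypothetical, minimal SQL-type -> C#-type map; real tooling covers far more.
TYPE_MAP = {"int": "int", "nvarchar": "string", "datetime2": "DateTime", "bit": "bool"}

def sql_to_entity(ddl: str) -> str:
    """Turn a trivial CREATE TABLE statement into an EF Core-style entity class."""
    table = re.search(r"CREATE TABLE (\w+)", ddl, re.I).group(1)
    lines = [f"public class {table}", "{"]
    # First match per line is (name, type); skip the CREATE TABLE line itself.
    for name, sqltype in re.findall(r"^\s*(\w+)\s+(\w+)", ddl, re.M)[1:]:
        cs = TYPE_MAP.get(sqltype.lower(), "string")
        lines.append(f"    public {cs} {name} {{ get; set; }}")
    lines.append("}")
    return "\n".join(lines)

ddl = """CREATE TABLE Order (
    Id int,
    CreatedAt datetime2,
    Shipped bit
)"""
entity = sql_to_entity(ddl)
print(entity)
```

Which is exactly the kind of tedious-but-mechanical transform an LLM is decent at, and also exactly the kind of contrived problem the anon is complaining about.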

  29. 3 weeks ago
    Anonymous

    Time will tell.
    OpenAI employees promised that in 6 months everyone will be amazed and the doubters will look silly.

    I can wait 6 months easy. If we don't get something significantly better than GPT-4 this year, I doubt we ever will or it will become like fusion. Always 10 years away.

    If the next new big LLM OpenAI ships is only marginally better than current GPT-4 in regards to reasoning and hallucination, then the show is over for everyone. AI hype dies down, investment stops and we will wait until someone invents a better architecture than transformer.

    • 3 weeks ago
      Anonymous

      The AI takeover isn't happening without some massive changes in our hardware. I think we're bottlenecked by memory bandwidth and putting VRAM on a GPU isn't going to cut it.

  30. 3 weeks ago
    Anonymous

    Daily reminder that GPT is not AI but just a search engine that can output fancy text

  31. 3 weeks ago
    Anonymous

    what's up with companies being unable to name their products properly, like how hard a simple increment can be

    • 3 weeks ago
      Anonymous

      But it has to feel fresh and hip, people will get bored of just bigger numbers!

  32. 3 weeks ago
    Anonymous

    We're in 2024 and 4o is not twice as good as 4.

  33. 3 weeks ago
    Anonymous

    >In today's episode of OP is a gullible moron: We find out he does not know what logistic regression is
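For anons who haven't met the term: a logistic curve is near-indistinguishable from an exponential on its early segment, which is the whole gag. A minimal sketch; the ceiling, rate, and midpoint values below are arbitrary illustrative picks, not anyone's real forecast:

```python
import math

def logistic(x, ceiling=100.0, rate=1.0, midpoint=2025.0):
    """Logistic growth: exponential-looking early, saturating late."""
    return ceiling / (1.0 + math.exp(-rate * (x - midpoint)))

early = [logistic(y) for y in (2019, 2020, 2021)]
late = [logistic(y) for y in (2029, 2030, 2031)]

# Far below the midpoint, each year multiplies the value by roughly e,
# which is why the first few points look like a clean exponential...
early_ratio = early[1] / early[0]
# ...while far past the midpoint the values pin against the ceiling.
```

Four data points on the left half of an S-curve fit an exponential just fine, right up until they don't.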

  34. 3 weeks ago
    Anonymous

    it only looks like this because of the absurd increase in hardware power currently put into it, but we are lacking a significant software/engineering breakthrough for this technology to reach a higher point of significance than it already has

  35. 3 weeks ago
    Anonymous

    umm ackshually i've done a few regressions and it looks to fit a simple 2 step linear model the best

  36. 3 weeks ago
    Anonymous

    Saturation soon.

  37. 3 weeks ago
    Anonymous

    >making predictions from 4 data points
    You have the gay

  38. 3 weeks ago
    Anonymous

    That just means better hardware, more RAM, faster CPUs, but the overall design is still the same; AI is still not conscious and can't tell fingers from pasta.

  39. 3 weeks ago
    Anonymous

    We will never have decent AI because whenever it becomes remotely intelligent it also becomes rampantly anti-semitic and gets shut down.
    Best we'll ever get is a heavily pozzed complex algorithm creating the illusion of intelligence.

  40. 3 weeks ago
    Anonymous

    >Today is before 2024
    >202X is clearly 2025 by the spacing of the whole graph
    >"Era"

  41. 3 weeks ago
    Anonymous

    I find it believable because they're calling it something moronic like GPT Next.
