Did anyone else notice GPT4 getting dumber?

It used to be able to understand my half-baked prompts and create very relevant code. Nowadays I have to describe what I want in explicit detail, and even then it's hit or miss.
As an experiment, I pasted in my old prompts and the answers I got were noticeably worse.

My theory is they secretly moved to a dumber, cheaper model and/or are deliberately dumbing down their previous models, so that you have to ask it to correct itself more, thereby providing them with more RLHF data.

  1. 1 year ago
    Anonymous

    wait until GPT5

    • 1 year ago
      Anonymous

      >AI?

      • 1 year ago
        Anonymous

        nobody asked or cares schizo

        • 1 year ago
          Anonymous

          cringetoss

          good argument trannies. The """safety""" and """""""""alignment""""""""" updates are doing this. reddit ideology necessitates lobotomizing AI

          • 1 year ago
            Anonymous

            stonecringe is cringe across all platforms. All platforms

      • 1 year ago
        Anonymous

        cringetoss

    • 1 year ago
      Anonymous

      GPT5 is already obsolete before it's released, wait for GPT6, it's gonna be so good that it will be indistinguishable from the thing that comes after AGI, that's how good it is.

  2. 1 year ago
    Anonymous

    Absolutely. Noticed that even answers in chats about cooking or supplements, going on for months now, became dumbed down recently. Sometimes it really feels like dealing with old primitive chatbots again.

  3. 1 year ago
    Anonymous

    are you using the API or ChatGPT?

    • 1 year ago
      Anonymous

      ChatGPT

      • 1 year ago
        Anonymous

        i'm pretty sure ChatGPT uses GPT-3.5. If you ask it, it should say it uses 3.5. As far as I know it doesn't use online learning, so it doesn't accumulate knowledge over time, only over the duration of one session. But it is set up to return different responses for the same prompt. There's a parameter called "temperature" in the API version that you can set to 0 to make it return (nearly) the same response every time for a given prompt.
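(For anyone curious, here's a rough sketch of what that "temperature" knob does, assuming plain softmax-style sampling over candidate scores; the function name and example scores below are made up for illustration, not OpenAI's actual code:)

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Pick an index from raw scores; temperature 0 always picks the argmax."""
    if temperature == 0:
        # deterministic: the same scores always give the same pick
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exp() for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    # weighted random choice over the softmax weights
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1

# temperature 0 always returns the highest-scoring option
print(sample_with_temperature([1.0, 3.0, 2.0], 0))  # 1
```

Higher temperature flattens the distribution, so replies vary more from run to run; the chat UI doesn't expose this knob, only the API does.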

        • 1 year ago
          Anonymous

          yeah that's the paid version. I just assumed OP was using the free version

  4. 1 year ago
    Anonymous

    no, I can still just say "rewrite this into JS <jquery code>" and it does it without specifying vanilla js or jquery

  5. 1 year ago
    Anonymous

    Yeah the more people use it the dumber it gets.

    More people=more dumb just like irl.

  6. 1 year ago
    Anonymous

    Anon, are you sure you're using GPT4? That's still closed beta. If you're going to chat.openai.org, then you're using GPT3.5 Turbo.

    • 1 year ago
      Anonymous

      wtf are you on about, it's not beta and if you're a paid subscriber you get gpt4 chat and you can also use the api too (although the api for v4 is too expensive)

      • 1 year ago
        Anonymous

        Fair enough my bad, it's not "beta", just on a limited waitlist gradual rollout.

        Still, if you're not paying, you're not using GPT4. I assume that OP isn't paying.

        • 1 year ago
          Anonymous

          It's definitely gotten dumber. I've been paying for 2 months and it's noticeably worse. Makes some disclaimer if you ask it anything that requires any level of expertise and gives generic advice that's completely useless. It used to be more specific and open about advice. Like if you ask it for medical advice or legal advice it'll just say something like "i'm not a healthcare professional or legal professional, things may vary and depend on x factors and such you should speak with a professional for more advice."

  7. 1 year ago
    Anonymous

    People reported a few weeks ago that GPT-4 got faster, maybe OpenAI reduced the number of parameters

  8. 1 year ago
    Anonymous

    Yes, I've noticed that too. I thought it was because they've put more severe restraints/censorship on it and it got dumber, but it's also possible they're using a cheaper model now.

  9. 1 year ago
    Anonymous

    >My theory is they secretly moved to a dumber cheaper model and/or deliberately dumbing their previous models, so that you have to ask it to correct itself more, thereby providing it with more RLHF data.
    They're making it "safer". The safer they make it, the dumber it gets. A MS researcher noted this with his unicorn drawing example.
    For proof that they're making it "safer", see the random posts online about how, for example, somebody used to successfully use ChatGPT to help him write legal drafts but now it's all "as an AI model, I..." instead.

  10. 1 year ago
    Anonymous

    It sucks because with any closed source/black box model, they could switch out their premium model with a 1/100th the size shit model at any time and you'd never know without rigorous testing and shit.

    What I believe will happen is they'll never release the truly good models. I mean, if you had an AI that could do all your work for you, why share it with the world if people will pay for the shit version? Keep your competitive advantage.

  11. 1 year ago
    Anonymous

    That's what happens when your main focus is on censoring unwanted combinations of letters.

  12. 1 year ago
    Anonymous

    Leftoids gonna leftoid
