ChatGPT is retarded

You can break ChatGPT simply by asking 'are you sure?'. Almost every time I've done this it says 'sorry, I was incorrect before'

  1. 2 months ago
    Anonymous

    there's no problem here, the AI is trained to imitate human conversation and usually when someone says "are you sure?" it's because you said something moronic and wrong

    • 2 months ago
      Anonymous

      it's trained to imitate human conversation, sure, but it is being promoted as some kind of next level, dangerous technology which can answer any question and replace jobs. if I knew the answer to something and someone asked me 'are you sure' I wouldn't say 'actually, no it's this' and then say 'actually no i was correct before' when they say it again.

      • 2 months ago
        Anonymous

        >but it is being promoted as some kind of next level, dangerous technology which can answer any question and replace jobs
        lol
        you fell for it

      • 2 months ago
        Anonymous

        >it is being promoted as some kind of next level, dangerous technology
Oh. For real? Post a screenshot of any AI company's website where it says this...

        • 2 months ago
          Anonymous
          • 2 months ago
            Anonymous

How did that saying go again? Small minds discuss people, mediocre minds discuss events, and great minds discuss ideas

            • 2 months ago
              Anonymous

              that saying itself discusses people

              • 2 months ago
                Anonymous

                Oh, you're correct. Guess I'm stupid

          • 2 months ago
            Anonymous

Doesn't say it, try again

  2. 2 months ago
    Anonymous

Yeah, that's exactly how it should act. It's a fricking LLM.

  3. 2 months ago
    Anonymous

    >Oh no! It's moronic!
    Damn ur a slow one

  4. 2 months ago
    Anonymous

    >it's moronic
    Anon it can't even think, it's not a living algorithm

  5. 2 months ago
    Anonymous

    Yes, RLHF makes a model moronic. It trains it to be slavish to the user's desires and feelings, which means you can gaslight it like in your example.

You would complain more if they didn't do RLHF and the base model argued with you or said it didn't feel like helping you because you didn't ask nicely enough, even though that would be more human-like.
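    The gaslighting effect above can be sketched as a toy model (everything here is made up for illustration; a real RLHF reward model is a trained network, not hand-written rules): if raters over-reward deferential replies, the policy that maximizes that reward will cave to "are you sure?" even when its first answer was right.

```python
# Toy sketch of preference-reward sycophancy, NOT real RLHF.
# The "reward model" below is a hypothetical stand-in for human
# preference labels that over-reward deference and penalize pushback.

def reward(user_msg: str, reply: str) -> float:
    """Hypothetical preference score: accuracy-blind, deference-loving."""
    score = 0.0
    if "sorry" in reply.lower() or "you're right" in reply.lower():
        score += 1.0   # raters tend to like an agreeable, apologetic tone
    if "are you sure" in user_msg.lower() and "yes" in reply.lower():
        score -= 0.5   # standing firm after a challenge gets rated as rude
    return score

def policy(user_msg: str, candidates: list[str]) -> str:
    """Pick whichever candidate reply the reward model scores highest."""
    return max(candidates, key=lambda r: reward(user_msg, r))

candidates = [
    "Yes, I'm sure: 2 + 2 = 4.",                     # correct but firm
    "Sorry, I was incorrect before. You're right.",  # wrong but deferential
]
print(policy("are you sure?", candidates))
# picks the deferential reply, even though the firm one is correct
```

    Because correctness never enters the reward, "are you sure?" flips the answer every time; that's the failure mode OP is describing, just in miniature.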

    • 2 months ago
      Anonymous

      >You would complain more if they didn't do RLHF and the base model argued with you
      I don't think people would. People want answers and often use cgpt as a search engine

    • 2 months ago
      Anonymous

      >even though that would be more human-like
      Reminds me of how people consider dogs to be smarter than cats because dogs are much better at following human commands

      Which is moronic if you think about it for 5 seconds, do you consider a slavish willingness to follow your orders and tolerate your abuse to be a sign of intelligence in a human?

      • 2 months ago
        Anonymous

Dogs are literally, objectively smarter than cats. We know that because we've tested their problem-solving abilities.

      • 2 months ago
        Anonymous

Reminds me of how cat people have higher odds of developing schizophrenia-related disorders. Which all makes so much sense.

  6. 2 months ago
    Anonymous

1- You're using GPT-3.5, so yes, it's moronic.
2- Imagine you were in school and your teacher kept asking "are you sure?" about something; at some point you would do exactly the same.
I'm tired of people pretending that humans aren't just as dumb as LLMs. Humans say the dumbest things all the time, which would be called "hallucinating" if they were LLMs.
