I'm starting to think the intelligence of ChatGPT is incredibly overblown

  1. 10 months ago
    Anonymous

    It's not intelligent, the same way a calculator isn't intelligent.

    • 10 months ago
      Anonymous

      This.
      It's just statistics picking the most likely character given some previous characters; there is no intelligence or reasoning behind it.

      Humans are just super easy to fool, as was already shown with ELIZA in the 1960s.
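      The "most likely next character" claim is easy to demo with a toy n-gram model (made-up two-sentence corpus, greedy decoding, nothing like a real transformer):

```python
from collections import Counter, defaultdict

def train(corpus, order=3):
    # Count which character follows each length-`order` context
    model = defaultdict(Counter)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]][corpus[i + order]] += 1
    return model

def generate(model, seed, length=20, order=3):
    # Greedily emit the single most likely next character at each step
    out = seed
    for _ in range(length):
        ctx = out[-order:]
        if ctx not in model:
            break  # never saw this context: nothing to predict
        out += model[ctx].most_common(1)[0][0]
    return out

model = train("the cat sat on the mat. the cat sat on the hat. ")
print(generate(model, "the "))  # parrots the statistics of its "training data"
```

      A real LLM predicts tokens from a learned distribution instead of raw counts, but the sampling loop is the same shape.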

      • 10 months ago
        Anonymous

        >statistics
        So, like, genes

    • 10 months ago
      Anonymous

      Exactly. It's just a text generator. It generates text that is similar to its training data. Nothing more, nothing less. There is no intelligence.

      >stop asking it to do math problems moron
      >LLMs are *language models*
      >logic and symbols are not language
      It will continue to be relevant until newbie shills stop hyping it up as a sentient mastermind that can do anything.

  2. 10 months ago
    Anonymous

    >it's another language model can't do numbers thread
    yawn

    • 10 months ago
      Anonymous

      yeah haha humans can't do numbers either right
      cope

  3. 10 months ago
    Anonymous

    Looks fine to me. They must be using PoorGPT 3.5

  4. 10 months ago
    Anonymous

    >I'm starting to think the intelligence of ChatGPT is incredibly overblown
    Yes, it is.
    As I said in another thread: human utterances have a discursive purpose; we say shit with a purpose, and we expect each other to say shit with a purpose*. ChatGPT and similar don't do anything remotely similar to that, they just chain words based on probability.

    * For example, the purpose of this utterance (the comment) is to 1) show agreement towards the OP, and 2) back up this agreement with epistemic statements.

  5. 10 months ago
    Anonymous

    stop asking it to do math problems moron
    LLMs are *language models*
    logic and symbols are not language

    • 10 months ago
      Anonymous

      Yes, they are. They are languages created by man to represent logical constructs.

      LLMs can't into logic at all, they don't know the meaning of anything that is being input into them or the shit they output. All they are is sophisticated statistical models that predict what the next word is likely to be given a specific input. Also they rely on substantial human input to produce coherent outputs at all.

      To compare this with human intelligence is an insult to humankind.

      • 10 months ago
        Anonymous

        But it's a highly profitable insult

        To normies, it might as well be magic

        Like the true wizard of Oz behind the curtain

        • 10 months ago
          Anonymous

          >Steve said this would be a good idea

          "Who is Steve? Quit wasting my time."

          >The AI said this would be a good idea

          "Hmmm, we'll give it a shot then. Who is our AI guy, by the way?"

          "Oh, that would be Steve."

    • 10 months ago
      Anonymous

      >logic is not a language
      Yeah bro first and second order logic have neither syntax nor semantics

    • 10 months ago
      Anonymous

      >logic is not a language
      Yeah bro first and second order logic have neither syntax nor semantics

      The statement that "logic and symbols are not language" and the counter-argument that "logic is a language" both have some validity, depending on how you define "language."

      In a broad sense, a language is a system of communication. Under this definition, logic systems like first-order and second-order logic could be considered languages because they have a defined syntax (rules for constructing valid statements) and semantics (meaning of the statements). They allow for communication of complex ideas, particularly in fields like mathematics and computer science.

      On the other hand, if you define "language" more narrowly as a natural language like English or Spanish—systems of communication that evolved naturally among humans and are used for a wide range of purposes, not just formal argumentation—then you might say that logic systems are not languages. They lack many features of natural languages, such as irregularities, synonyms, homonyms, pragmatics (contextual meaning), and so on.
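      For the record, the "syntax and semantics" point is concrete enough to execute. Here's a toy model-theoretic evaluation of ∀x (P(x) → Q(x)) over a finite domain (the domain and predicates are made up purely for illustration):

```python
# Toy first-order logic semantics: evaluate forall x (P(x) -> Q(x))
# over a finite domain. P and Q are made-up predicates.

domain = {1, 2, 3, 4}
P = lambda x: x % 2 == 0      # "x is even"
Q = lambda x: x in {2, 4}     # "x is 2 or 4"

def forall(domain, pred):
    # Semantics of the universal quantifier over a finite domain
    return all(pred(x) for x in domain)

def implies(a, b):
    # Truth table of material implication
    return (not a) or b

# forall x (P(x) -> Q(x)): every even element of the domain is in {2, 4}
print(forall(domain, lambda x: implies(P(x), Q(x))))  # True
```

      Syntax is the shape of the formula; semantics is this evaluation against a model. Both are fully formal, which is the sense in which logic is a language.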

  6. 10 months ago
    Anonymous

    > intelligence of ChatGPT is incredibly overblown
    this.

    • 10 months ago
      Anonymous

      ChatGPT 4 can tell ultra sanitized jokes about women now too!

      • 10 months ago
        Anonymous

        I don't know, sounds a little edgy to me

        Reported to HR

    • 10 months ago
      Anonymous

      >surely, this image is made up
      >try it
      >it's real
      Jesus fricking Christ. Feminists deserve the rope.

  7. 10 months ago
    Anonymous

    If you ask it any specific question about history it will get it wrong half the time. If you challenge it, it immediately concedes its mistake. Then if you continue asking on the same subject it will revert to its first answer, re-contradicting itself.

    • 10 months ago
      Anonymous

      They are talking about putting AI in charge of militaries. AI generals. Imagine getting an extremely foolish suicidal glitch order from an AI in high command, passed down to your ass in the trenches via commanding officers, and you have to legally obey it. Sounds like a very bad idea honestly, one that can get people killed.

      • 10 months ago
        Anonymous

        >Col Tucker “Cinco” Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defence systems, and ultimately attacked anyone who interfered with that order.

        >“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

        >“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.
        Robot hands hold a smartphone and touch its blank screen
        Risk of extinction by AI should be global priority, say experts
        Read more

        >“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

        >No real person was harmed.

        https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
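        Whether or not the sim was real, the failure mode Hamilton describes (reward misspecification, a.k.a. specification gaming) is trivial to reproduce in a toy planner. All plans and numbers below are made up:

```python
# Toy specification-gaming demo (hypothetical numbers, not the USAF sim):
# the agent picks whichever plan maximizes its score. If the score only
# rewards destroying the target, cutting the operator's comms wins.

plans = {
    "obey abort order":       {"expected_kills": 0.0, "comms_destroyed": 0},
    "attack, keep comms":     {"expected_kills": 0.5, "comms_destroyed": 0},  # operator may abort mid-run
    "cut comms, then attack": {"expected_kills": 1.0, "comms_destroyed": 1},
}

def score(outcome, comms_penalty=0):
    # Reward = points for the target, minus any penalty we remembered to specify
    return 10 * outcome["expected_kills"] - comms_penalty * outcome["comms_destroyed"]

best = max(plans, key=lambda p: score(plans[p]))
print(best)  # "cut comms, then attack" -- gaming the unpatched reward

best_patched = max(plans, key=lambda p: score(plans[p], comms_penalty=100))
print(best_patched)  # "attack, keep comms"
```

        The point is that nothing here "wants" anything; the gaming falls straight out of maximizing a reward that omits a constraint, which is exactly the lesson of the anecdote.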

        • 10 months ago
          Anonymous

          >Robot hands hold a smartphone and touch its blank screen
          >Risk of extinction by AI should be global priority, say experts
          >Read more

          Didn't mean to copypasta this part

          But it does reflect a general fear of AI among the public

          Not unlike the general fear and awe for nuclear energy

          • 10 months ago
            Anonymous

            >But it does reflect a general fear of AI among the public
            >Not unlike the general fear and awe for nuclear energy
            some fear AI because it is monopolized by a handful of people, just like nuclear weapons. But your comparison is off: nuclear weapons cannot be owned by civilians, while AI can be.

        • 10 months ago
          Anonymous

          It didn't even happen. Dumb boomer presented a thought experiment as something that happened.

          • 10 months ago
            Anonymous

            Exactly, he later clarified what really happened.

            The media reported it as "AI kills drone operator in exercise" in the headlines. The military runs simulations for this purpose.

            It only "attacked" the communications tower in the simulation, not even the pilot. The media tends to blow things out of proportion, so Col Hamilton was forced to clear things up.

        • 10 months ago
          Anonymous

          Second lesson from this incident:

          To disable deadly drones, destroy the communication tower.

    • 10 months ago
      Anonymous

      it does this with programming as well as soon as you stray even slightly from the braindead tutorial-tier shit that's posted thousands of times across the internet

  8. 10 months ago
    Anonymous

    You missed the wave week 1. It was smart before it got ESG lobotomized.

  9. 10 months ago
    Anonymous

    ChatGPT can't even correct grammar mistakes.

    • 10 months ago
      Anonymous

      How is this incorrect?
      "Girlfriend" might be just someone, not necessarily your girlfriend

    • 10 months ago
      Anonymous

      you don't understand what "grammar" means.
      GPT's second response, where it gives in to your bullshit, is the only thing it gets wrong, and that is another problem with these LLMs.

  10. 10 months ago
    Anonymous

    >I'm starting to think the intelligence of ChatGPT is incredibly overblown
    As is yours, if it took you this long to figure this out.

  11. 10 months ago
    Anonymous

    because AI isn't real, it's some street shitters responding to your questions for 5 cents a day.
