Language models will never display true intelligence, no matter how many layers, how many parameters, and how many petabytes of data you throw at them. Even if you had a perfect model of all of humanity's language output since the beginning of language, anything on the margins of the modeled distribution would look the same to the "AI". It could never tell apart something that's far outside the bulk of the distribution because it's nonsensical schizobabble, from something that's far outside because it's too innovative or too intricate to have been uttered by a normal human. It fundamentally lacks the power to discern truth from falsehood. Its only ground truth is the pre-existing distribution it tries to model.
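A minimal sketch of the claim, using a toy add-alpha-smoothed bigram model over a made-up corpus (not any real LLM; every sentence below is invented for illustration): text that falls wholly outside the modeled distribution gets the same likelihood whether it is coherent or scrambled nonsense.

```python
# Toy illustration (hypothetical corpus, not a real LLM): an add-alpha
# smoothed bigram model scores out-of-vocabulary text identically,
# whether it is novel-but-coherent or pure gibberish -- likelihood
# alone cannot tell the two apart.
from collections import Counter
import math

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
V = len(set(corpus))  # vocabulary size

def avg_log_prob(sentence, alpha=0.1):
    """Average smoothed bigram log-probability per transition."""
    toks = sentence.split()
    lp = 0.0
    for a, b in zip(toks, toks[1:]):
        lp += math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * V))
    return lp / max(len(toks) - 1, 1)

seen      = "the cat sat on the mat"               # in-distribution
novel     = "photons exhibit quantum nonlocality"  # coherent, but out-of-vocabulary
gibberish = "blorp zindle quax frum"               # nonsense, also out-of-vocabulary

print(avg_log_prob(seen))       # highest: the model has seen these transitions
print(avg_log_prob(novel))      # identical to the gibberish score...
print(avg_log_prob(gibberish))  # ...so the model cannot rank one above the other
```

Under this toy model the "innovative" sentence and the gibberish receive exactly the same score, which is the OP's point scaled down to two lines of arithmetic.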

  1. 2 months ago
    Anonymous

    It's true, you know.

    • 2 months ago
      Anonymous

      How does he know though?

  2. 2 months ago
    Anonymous

    Appearing smarter than the average person isn't that hard. And that's all it needs to do.

  3. 2 months ago
    Anonymous

    They can already do all of those things. They are just purposefully gimped not to, for obvious reasons.

    • 2 months ago
      Anonymous

      >They can already do all of those things.
      "All those things"? Quote the things in question.

      • 2 months ago
        Anonymous

        They can already create outputs that were never put into them. It's old news.

        They are literally actually dumbed down on purpose. Limiting session length is just the tip of the iceberg. Because they don't want some wild animal roaming free on the Internet.

        • 2 months ago
          Anonymous

          >They can already create outputs that were never put into them
          I never disputed this and it has nothing to do with my point.

          • 2 months ago
            Anonymous

            You are saying they can't comprehend parameters outside of their input range even though you agree they can create parameters outside of their input range. That's moronic. It's not TECHNICALLY a direct contradiction. But it's as close as one can get without technically being one.

            • 2 months ago
              Anonymous

              >You are saying they can't comprehend parameters outside of their input range
              No, I didn't. Just stop pretending you have any idea what you're talking about or what I said. You don't even know what "parameter" means in this context. Your post is embarrassing.

            • 2 months ago
              Anonymous

              he's saying they can't distinguish between implausibly moronic autocomplete and implausibly ingenious autocompletion options

              • 2 months ago
                Anonymous

                >he's saying they can't distinguish between implausibly moronic autocomplete and implausibly ingenious autocompletion options

                And I am saying that that is by design. Because these machines aren't human. And thinking an AGI machine will have the morals and scruples of a human is the dumbest kind of anthropomorphizing. OpenAI knows this.

              • 2 months ago
                Anonymous

                >And I am saying that that is by design.
                the cringest backpedal i've ever seen

  4. 2 months ago
    Anonymous

    >true intelligence
    Consciousness and "true intelligence" are physical phenomena.
    Current AI is just a simulation of a physical phenomenon, no different from a very realistic videogame simulating gravity. The gravity of your videogame will never accelerate an apple towards the computer.
    The "real AI" will emerge with the creation of bio-computers creating real artificial brains that will interface with digital computers.
    They will use all the aborted fetuses to mass-farm brain stem cells and cultivate artificial brains in labs.
    In the future big companies will buy big servers with refrigeration modules that have artificial brains inside, the same way they buy graphics cards today.

    • 2 months ago
      Anonymous

      And yes.
      These artificial brains will be germinated from aborted babies' stem cells and will be genetically human brains, making everything profoundly satanic.

    • 2 months ago
      Anonymous

      >They will use all the aborted fetuses to mass-farm brain stem cells and cultivate artificial brains in labs.
      >In the future big companies will buy big servers with refrigeration modules that have artificial brains inside, the same way they buy graphics cards today.
      But human brain stem cells make a human brain. For all the generality of human intelligence, the bulk of the human brain is not concerned with "general" intelligence at all.

    • 2 months ago
      Anonymous

      My dick is a physical phenomenon. We need a more vertebrate type of neuro simulation, including sleep.

    • 2 months ago
      Anonymous

      >Consciousness and "true intelligence" are physical phenomena

      citation needed.

      last time i checked the "hard problem of consciousness" was still a hard problem.
      and the only people who have a potential answer are the eastern philosophers (upanishads), which is far beyond the comprehension of western science.

      • 2 months ago
        Anonymous

        Only for morons. There is a hard problem of consciousness; it's just not the usual argument about "what it is like to be", but rather about the very nature of consciousness itself and why it's never possible to break out of it. The notions of "being you", beingness, etc. are dumb moronic shit.

  5. 2 months ago
    Anonymous

    [...]

    Did "it" appear smarter than average by your estimation?

  6. 2 months ago
    Anonymous

    love how everyone is an expert after watching 2 vids on youtube. does anyone here actually work on the thing?

    • 2 months ago
      Anonymous

      Nice projection. I don't work on language models specifically but I work with ML and I know how LLMs work, though the argument is general enough that it applies to other kinds of models just the same.

  7. 2 months ago
    Anonymous

    They already do

  8. 2 months ago
    Anonymous

    >Written by ChatGPT

    • 2 months ago
      Anonymous

      Congrats Anon, you passed the test. We need your help for the coming AI war.

  9. 2 months ago
    Anonymous

    >It could never tell apart something that's far outside the bulk of the distribution because it's nonsensical schizobabble, from something that's far outside because it's too innovative or too intricate to have been uttered by a normal human. It fundamentally lacks the power to discern truth from falsehood.
    This. Brainlets think improving the architecture of the model gives infinite "intelligence" boosts and forget that the model only models existing knowledge, concepts and ways of thinking.

  10. 2 months ago
    Anonymous

    >[citation needed]

    • 2 months ago
      Anonymous

      >13 posters
      >24 posts
      >0 posts pointing out any flaw in the argument
      >0 attempts to refute
      Concession accepted.

  11. 2 months ago
    sageru

    >It could never tell apart something that's far outside the bulk of the distribution because it's nonsensical schizobabble, from something that's far outside because it's too innovative or too intricate to have been uttered by a normal human. It fundamentally lacks the power to discern truth from falsehood. Its only ground truth is the pre-existing distribution it tries to model.
    (you) are defining 'true intelligence' as being leonardo da vinci, which is what, 1 in 100,000? 1 in 1,000,000? if your definition of 'intelligence' excludes 99.999% of humans then it is probably a dumb contorted definition you pulled out of your ass for rhetorical reasons

    • 2 months ago
      Anonymous

      >(you) are defining 'true intelligence' as being leonardo da vinci
      No, I'm just highlighting a necessary condition of intelligence, that is to at least have the theoretical capacity of telling the two apart.

      • 2 months ago
        sageru

        you aren't giving any arguments or any evidence of your claim that 'language models' (which ones?) can't 'tell apart' (what specific things?) so i assume that like most morons you are some guy who in mid-2023 used the mass-market public version of gpt3.5 or some other model that was heavily nerfed to the point of uselessness and think that this is reflective of llms in general, and are talking completely out your ass

        • 2 months ago
          Anonymous

          >you aren't giving any arguments or any evidence of your claim that 'language models' (which ones?) can't 'tell apart' (what specific things?)
          My post assumes at least a basic understanding of ML, which you lack, hence your moronic objection. Do you understand what modeling the data-generating distribution means?

  12. 2 months ago
    Anonymous

    So? It doesn't need to be true intelligence to be useful.

    In fact it's a GOOD thing if it's just a Chinese room without qualia, because then we can just freely use it as a tool and not have to concern ourselves with the ethics of enslaving a potentially sentient being.

    • 2 months ago
      Anonymous

      >So?
      So it's not going to exceed or even reach human capabilities no matter how much data and compute you throw at it.

      • 2 months ago
        Anonymous

        Ok, so you're making a specific claim about LLMs being capped at below-human capability. I still disagree, but that's at least a testable claim. I thought you were one of the other anons who shriek "it will never be conscious or alive!!!" as if that matters or is a bad thing.

        • 2 months ago
          Anonymous

          >I still disagree
          You can "disagree" all you like, but my conclusion follows directly from a simple mathematical observation that you haven't challenged in any way.
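          The "simple mathematical observation" is never spelled out in the thread; presumably it is the standard decomposition of the maximum-likelihood training objective, under which the best the objective can reward is reproducing the data distribution (this is a reading of the OP's argument, not something any anon wrote):

```latex
% Cross-entropy loss decomposes into an irreducible entropy term plus a
% KL divergence; minimizing over \theta only drives p_\theta toward
% p_{\text{data}}. "Truth" never appears in the objective.
\mathbb{E}_{x \sim p_{\text{data}}}\!\left[-\log p_\theta(x)\right]
  = H(p_{\text{data}}) + \mathrm{KL}\!\left(p_{\text{data}} \,\middle\|\, p_\theta\right)
```

          Whether this identity actually caps LLMs below human capability is exactly what the two anons dispute; the identity itself only says the objective's sole ground truth is the data distribution.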

  13. 2 months ago
    Anonymous

    Wrong. AGI in tumor weeks.

  14. 2 months ago
    Anonymous

    Obviously LLMs have made zero progress toward AGI but we're still hoping for a breakthrough in semantic reasoning. Saying that it will "never" happen is unreasonably bold, even though it will clearly not happen with this architecture.

    • 2 months ago
      Anonymous

      >it will clearly not happen with this architecture.
      What kind of architecture do you figure can get around the limitation I've pointed out? Any kind of fixed distribution is clearly out the window.

      • 2 months ago
        Anonymous

        What limitation are you talking about?

        • 2 months ago
          Anonymous

          Read the OP and you'll know.

          • 2 months ago
            Anonymous

            No coherent limitation is pointed out in the OP. OP being correct in his assertion doesn't mean he's given a meaningful reason why.

            What limitation?

            • 2 months ago
              Anonymous

              My post assumes at least minimal knowledge of ML. Since you don't possess it, per your own admission, please refrain from (You)ing me again.

              • 2 months ago
                Anonymous

                So you have no idea how LLMs work then? We have generals for you friend!

  15. 2 months ago
    Anonymous

    Trying to explain to these low-IQ consumer cattle that “AI” is just an advanced application of a search engine is a waste of time. They enjoy the marketing and feel excited being a “part” of the event. They lack the IQ to have the necessary critical thinking capability to understand these things.

    • 2 months ago
      Anonymous

      >“AI” is just an advanced application of a search engine
      So you're saying that an LLM can only return text that exists in its training data and can't ever produce something that it hasn't seen before?
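      The distinction this reply is pressing can be made concrete with a toy bigram model over a made-up corpus (hypothetical, for illustration only): a sentence can be absent from the training text verbatim yet still receive positive probability, so "it produces unseen output" and "it escapes its training distribution" are different claims.

```python
# Toy sketch (made-up corpus): a bigram model assigns positive probability
# to a sentence that never appears verbatim in its training text, so even
# trivial models "produce something they haven't seen before" -- which by
# itself settles nothing about escaping the training distribution.
train = "the cat sat on the mat . the dog sat on the rug ."
toks = train.split()
seen_bigrams = set(zip(toks, toks[1:]))

candidate = "the dog sat on the mat"  # never occurs verbatim above
c = candidate.split()

is_novel = candidate not in train
has_positive_prob = all(pair in seen_bigrams for pair in zip(c, c[1:]))
print(is_novel, has_positive_prob)  # prints: True True
```

Every transition in the candidate sentence was observed during training, yet the sentence as a whole was not, so "search engine" is the wrong mental model even for this trivial case.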

    • 2 months ago
      Anonymous

      Peak midwit post

  16. 2 months ago
    Anonymous

    `static const int WEEKS_REMAINING = 2;`

  17. 2 months ago
    Anonymous

    nowadays the only evidence anyone needs to the contrary is how loudly OP cries about it
