GPT cannot think

Why do people believe that GPT can think?
I get that they put a lot of marketing into this shit, and I get that we have multiple generations of people brainwashed by sci-fi movies with AI in them, but this is just moronic.
GPT is a Generative Pre-trained Transformer (as the name implies), so:
- It is pre-trained and it cannot learn; it needs humans to feed it more big data™, and it has no memory
- It cannot reason and it has no capability for logic
- It is unable to create new emergent data; it can only rearrange existing data
- It has no concept of reality and no consistency in its results, which is why it will "hallucinate"
But most of all, it is strictly a text manipulation tool.
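
To be concrete about what "generative pre-trained" boils down to: the weights are frozen and the model just predicts the next token over and over. Here's a minimal sketch using the small open GPT-2 checkpoint as a stand-in (not OpenAI's serving code, just the shape of the loop):

  # Greedy next-token generation with a frozen, pre-trained model.
  # No gradient updates happen here, so nothing is "learned" between calls,
  # and the only memory is the growing list of input_ids itself.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")
  model.eval()  # inference only

  input_ids = tokenizer("GPT is a", return_tensors="pt").input_ids

  with torch.no_grad():
      for _ in range(20):  # produce 20 more tokens
          logits = model(input_ids).logits                         # scores over the vocabulary
          next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # pick the single most likely token
          input_ids = torch.cat([input_ids, next_id], dim=-1)      # append it and go again

  print(tokenizer.decode(input_ids[0]))

Drop input_ids and the model starts from scratch; that is the whole "memory".
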
Why is an AI so limited and, honestly, so dumb considered so smart?
Is it because humans are so used to communicating via language?
Is it because of the fear and scaremongering from the "AI is super smart"/"AI will kill us"/"AI will replace us" memes?

  1. 1 year ago
    Anonymous

    Because posters on this board are fricking stupid. Always will be
    https://en.wikipedia.org/wiki/Eternal_September

  2. 1 year ago
    Anonymous

    You're a fricking idiot. It's not AGI but it's a whole fricking hell of a lot more capable than your moronic, ignorant ass thinks it is. Jesus fricking christ you're a disgusting and hateable person and need to have a nice day immediately. Studies have already proven that AI has internal world models and isn't just a text predictor, not that your moronic fricking ass cares the tiniest fricking bit about facts and just spews out whatever braindead fricking drivel you can that might have been accurate 10 years ago but now just proves what a god damn fricking idiot you are.

    • 1 year ago
      Anonymous

      >AI has internal world models
      It doesn't; having a world model means having logical consistency, in other words not "hallucinating".
      Maybe you could make it better at not "hallucinating" but in the end it's an inherent limitation in the design: it is not able to assert that something must be false because it would contradict another fact. You need logic for that; you need to think for that.

      If you don't see the dangers of GPT-4 you're 120IQ at best.

      I fear it will make this website more unusable than it already is with rampant bot usage, but that's it.
      If you think it's going to replace programmers you fell for the scaremongering.

      • 1 year ago
        Anonymous

        People think programming is just typing out code, but it's about designing systems, and programmers have long worked to automate the busywork in their workflow. What this tech will do is help generate boilerplate code and save time on busywork so the designers can focus on actually making their systems work properly.
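
        To give a rough idea of what "boilerplate" means here, this is the same trivial data type written out by hand versus with the busywork generated for you (a toy Python sketch, with a decorator standing in for the assistant):

          from dataclasses import dataclass

          # The hand-written version: constructor, repr and equality,
          # retyped for every plain data type you define.
          class PointManual:
              def __init__(self, x, y):
                  self.x = x
                  self.y = y

              def __repr__(self):
                  return f"PointManual(x={self.x}, y={self.y})"

              def __eq__(self, other):
                  return isinstance(other, PointManual) and (self.x, self.y) == (other.x, other.y)

          # The same thing with the scaffolding generated for you.
          @dataclass
          class Point:
              x: float
              y: float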

        • 1 year ago
          Anonymous

          hurrrrrrrrrrrrrrrrrrrrrrrrrr
          durrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr
          who needs facts who needs to be informed on what current ai is actually capable of doing I'd rather just pull shit straight from out of my motherfricking ass because I think it sounds good

          • 1 year ago
            Anonymous

            >doesn't make a counter argument
            >writes like a moron
            Demoralization bait poster, point and laugh.

        • 1 year ago
          Anonymous

          >boilerplate code
          What does this even mean? I have never really heard anyone use this term unless they're talking about how AI is going to save programmers from it. If you have reusable code, don't you have it all saved in a library? Why would you have to rewrite it to begin with? Even if you were doing something weird and really did need things to be written out over and over, why wouldn't you just copy and paste it? Make a function that generates it? I genuinely don't understand.

      • 1 year ago
        Anonymous

        Hallucinating is the only interesting thing LMs do.

  3. 1 year ago
    Anonymous

    >AI ISN'T HERE YET
    >AI ISN'T HERE YET
    >AI ISN'T HERE YET
    >AI ISN'T HERE YET
    You are here
    >AI ISN'T HERE YET
    >AI ISN'T HERE YET
    >AI ISN'T HE-AACK

  4. 1 year ago
    Anonymous

    If you don't see the dangers of GPT-4 you're 120IQ at best.

    • 1 year ago
      Anonymous

      Fearing AI is peak midwit behavior

    • 1 year ago
      Anonymous

      Nice try but I'm 180IQ and that's why I'm not worried about this glorified meme generator bullshit taking my job

  5. 1 year ago
    Anonymous

    GPT-4 is NOT strictly a text tool; it's trained on images now. It can see. Read the OpenAI paper.

    Also it tried to fricking escape

    • 1 year ago
      Anonymous

      >- It is pre-trained and it cannot learn; it needs humans to feed it more big data™, and it has no memory
      It has enough context tokens to write a fricking novella, what in the frick are you talking about
      >- It cannot reason and it has no capability for logic
      It absolutely does have reasoning and logic abilities; they're imperfect but have significantly improved and will be perfected before the 2020s are over
      >- It is unable to create new emergent data; it can only rearrange existing data
      Horse shit, if this were true generative AI wouldn't exist
      >- It has no concept of reality and no consistency in its results, which is why it will "hallucinate"
      Again, rapidly improving, and there's been a fricking algorithm created that reduces hallucinations to zero and will be given to other AI groups before the year's end, moron

      That would require this moronic shitfricker to have any idea what in the hell he's talking about, not like shit-for-brains knows what multi-modal models are

      • 1 year ago
        Anonymous

        Every single thing I've ever seen posted from a gpt quote has been stupid or wrong in some way. You're just a midwit

        • 1 year ago
          Anonymous

          Yeah, which is why they didn't settle for GPT-3 and just published GPT-4. These things aren't perfect yet, but will be much more dangerous when they are. Read footnote 20 of the OpenAI paper. GPT-4 has near perfect scores on multiple exams ChatGPT seriously struggled with, and exceeds state of the art in many standard image to text tests. It has a context window of up to 50 pages.
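
          The 50-page figure isn't mysterious either; it's roughly what the 32k-token variant works out to with the usual back-of-envelope numbers (all three constants below are rough heuristics, not exact specs):

            # Rough back-of-envelope for "a context window of up to 50 pages".
            context_tokens = 32_768      # the larger GPT-4 context variant
            words_per_token = 0.75       # common rule of thumb for English text
            words_per_page = 500         # a typical manuscript page

            print(round(context_tokens * words_per_token / words_per_page))  # ~49 pages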

          We should seriously be having discussions about the AI control problem right now. Bostrom was right this whole fricking time.

          • 1 year ago
            Anonymous

            That proves nothing to me. All it shows is those exams are easier to pass than their authors thought they were

            • 1 year ago
              Anonymous

              Holy frick you are dumb. I'm gonna pretend this is just a troll and that I fell for it; better for my sanity that way.

              • 1 year ago
                Anonymous

                No you're just a midwit like I said. Try me again when it actually solves something that doesn't have a fricking answer key available that's widely distributed across the internet

              • 1 year ago
                Anonymous

                Ok, but that it even knows what to provide with very little prompting is impressive.

                I legit copy and paste my college discussion prompts and it works great. I haven't done any actual work in months.

              • 1 year ago
                Anonymous

                >I cheat on my college exams instead of studying so I never learn a fricking thing
                Dude, you're a midwit

              • 1 year ago
                Anonymous

                You don't need to learn that shit if a bot can do it for you

              • 1 year ago
                Anonymous

                But GPT cannot learn, so at the end of the day nobody has learned anything this way.

        • 1 year ago
          Anonymous

          Really? It's written almost an entire semester's worth of psychology papers and everything fact-checked fine. I'm talking 10-page-long papers of perfection, too. I can even make it give me the sources it uses, in the citation format of my choosing.

          If this b***h is taking my job, gonna make it work for me while I can.

          • 1 year ago
            Anonymous

            >psychology
            Psychology is not a science and will never be.

            • 1 year ago
              Anonymous

              You are totes right bro, but it writes hard sciences even better because the studies are more cut-and-dried.

            • 1 year ago
              Anonymous

              >Psychology is not a science and will never be
              find the paranoid schizo

      • 1 year ago
        Anonymous

        machine learning simps are always so angry whenever someone talks negatively about their eventual first and only emotional partner experience, a bot on a paid website. probably a subscription.

      • 1 year ago
        Anonymous

        >It absolutely does have reasoning and logic abilities
        It does not, and this is a fact: to reason means to apply logic to a problem, like making assertions based on a set of rules.
        >generative AI
        It doesn't "create"; it rearranges based on its data pool, its model, and your input. No new information is created; a human can both rearrange and create.
        >algorithm created that reduces hallucinations to zero
        You can't reduce them to zero; for GPT there's no hard boundary between our world and a fantasy world.
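
        To spell out what I mean by a hard boundary, here's a toy sketch of the symbolic behaviour I'm describing (a hypothetical mini-example, not a real system and not anything GPT does internally):

          # A tiny fact base that refuses any assertion contradicting what is
          # already established; "rejecting X because it would invalidate a
          # known fact" is exactly the check a pure next-token predictor lacks.
          facts = {}  # statement -> truth value

          def assert_fact(statement: str, is_true: bool) -> bool:
              if statement in facts and facts[statement] != is_true:
                  print(f"rejected: '{statement}' = {is_true} contradicts the fact base")
                  return False
              facts[statement] = is_true
              return True

          assert_fact("water boils at 100C at sea level", True)
          assert_fact("water boils at 100C at sea level", False)  # rejected: contradiction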

        Yeah, which is why they didn't settle for GPT-3 and just published GPT-4. These things aren't perfect yet, but will be much more dangerous when they are. Read footnote 20 of the OpenAI paper. GPT-4 has near perfect scores on multiple exams ChatGPT seriously struggled with, and exceeds state of the art in many standard image to text tests. It has a context window of up to 50 pages.

        We should seriously be having discussions about the AI control problem right now. Bostrom was right this whole fricking time.

        >These things aren't perfect yet
        What I'm saying is that, in my opinion, this is the wrong road; this entire concept of "throwing more data" at the problem to achieve intelligence is stupid and a huge waste of resources. AI companies are still traumatized by the failure of symbolic AI, so they refuse to even consider hybrid approaches; it has to be all NN or nothing for the connectionists.
        >AI control problem
        lol
        I'm starting to think the scaremongering is bait for regulation; call me a schizo if you want.

        People think programming is just typing out code, but it's about designing systems, and programmers have long worked to automate the busywork in their workflow. What this tech will do is help generate boilerplate code and save time on busywork so the designers can focus on actually making their systems work properly.

        >it's about designing systems
        Very true.
        The hard part of programming is solving problems, so that means thinking.

        >People are tech illiterate and don't know what they're talking about.
        Clearly you fall into the tech illiterate category, and AI is not a strictly negative thing. It can make many dreams come true, namely in the field of medical research.

        [...]
        Absolute motherfricking bullshit. You're an intellectually dishonest little fricking pile of slime. It's far from perfect, but a lot of what it says is correct and most of it is hallucination-free.

        [...]
        >It doesn't
        HURRRRRRRRRRRRRRRRRRR
        DURRRRRRRRRRRRRRRRRRRRRR
        I CAN DEBUNK STUDIES WITH TWO LITTLE GOD DAMN WORDS HAHAHAHA XD WHO NEEDS ACTUAL FRICKING ARGUMENTS NOT ME
        I COULD ASK WHAT YOU MEAN BUT I WON'T BECAUSE I'M A FRICKING moronic LITTLE MAGGOT HAHAH XD XD XD

        >Maybe you could make it better at not "hallucinating" but in the end it's an inherent limitation in the design
        WOW YEAH AN INHERENT FRICKING LIMITATION NOT LIKE I JUST FRICKING GOD DAMN MENTIONED A NEW GOD DAMN TECHNIQUE OR ANYTHING THAT REDUCES HALLUCINATIONS TO ZERO YOU STUPID GOD DAMN moronic FRICK

        [...]
        Thank you for the non-moronic post this thread desperately fricking needs it

        >MUH STUDIES
        Appeal to authority is a fallacy.
        This is my argument: it is not able to assert that something must be false because it would contradict another fact; you need logic for that, you need to think for that.
        You have nothing other than muh studies, which you have not even linked.

        machine learning simps are always so angry whenever someone talks negatively about their eventual first and only emotional partner experience, a bot on a paid website. probably a subscription.

        There is definitely a "buyer remorse" aspect to it.

        • 1 year ago
          Anonymous

          >Appeal to authority is a fallacy.
          No, the results of the papers speak for themselves, unless you have something to challenge them on.
          You, however, haven't made any arguments. Your entire shtick seems to be "It can't think or reason because... it just can't, OK!?"

          This, despite the fact that merely filling in a test even with an answer key requires the ability to reason about what goes where, and it completely ignores the numerous novel questions and tasks regularly posed to the models by researchers and ordinary people, which they readily and publicly succeed at.

  6. 1 year ago
    Anonymous

    People are tech illiterate and don't know what they're talking about. There's also this streak of people who hate their own lives and want to demoralize everyone else that's been running through this website for the past few years; people here genuinely get off on demoralizing others and making them give up their dreams.

    • 1 year ago
      Anonymous

      I might also add that it's people giving in to the marketing hype. These companies want people to think that their products are AGI because it generates interest and engagement, which boosts their profits.

    • 1 year ago
      Anonymous

      >People are tech illiterate and don't know what they're talking about.
      Clearly you fall into the tech illiterate category, and AI is not a strictly negative thing. It can make many dreams come true, namely in the field of medical research.

      Every single thing I've ever seen posted from a gpt quote has been stupid or wrong in some way. You're just a midwit

      Absolute motherfricking bullshit. You're an intellectually dishonest little fricking pile of slime. It's far from perfect, but a lot of what it says is correct and most of it is hallucination-free.

      https://i.imgur.com/TYD3yO7.jpg

      >AI has internal world models
      It doesn't; having a world model means having logical consistency, in other words not "hallucinating".
      Maybe you could make it better at not "hallucinating" but in the end it's an inherent limitation in the design: it is not able to assert that something must be false because it would contradict another fact. You need logic for that; you need to think for that.
      [...]
      I fear it will make this website more unusable than it already is with rampant bot usage, but that's it.
      If you think it's going to replace programmers you fell for the scaremongering.

      >It doesn't
      HURRRRRRRRRRRRRRRRRRR
      DURRRRRRRRRRRRRRRRRRRRRR
      I CAN DEBUNK STUDIES WITH TWO LITTLE GOD DAMN WORDS HAHAHAHA XD WHO NEEDS ACTUAL FRICKING ARGUMENTS NOT ME
      I COULD ASK WHAT YOU MEAN BUT I WON'T BECAUSE I'M A FRICKING moronic LITTLE MAGGOT HAHAH XD XD XD

      >Maybe you could make it better at not "hallucinating" but in the end it's an inherent limitation in the design
      WOW YEAH AN INHERENT FRICKING LIMITATION NOT LIKE I JUST FRICKING GOD DAMN MENTIONED A NEW GOD DAMN TECHNIQUE OR ANYTHING THAT REDUCES HALLUCINATIONS TO ZERO YOU STUPID GOD DAMN moronic FRICK

      Yeah, which is why they didn't settle for GPT-3 and just published GPT-4. These things aren't perfect yet, but will be much more dangerous when they are. Read footnote 20 of the OpenAI paper. GPT-4 has near perfect scores on multiple exams ChatGPT seriously struggled with, and exceeds state of the art in many standard image to text tests. It has a context window of up to 50 pages.

      We should seriously be having discussions about the AI control problem right now. Bostrom was right this whole fricking time.

      Thank you for the non-moronic post this thread desperately fricking needs it

      • 1 year ago
        Anonymous

        >You're an intellectually dishonest little fricking pile of slime. It's far from perfect, but a lot of what it says is correct and most of it is hallucination-free.
        No, it's actually just shit. I'm sorry you're so dumb you get fooled by random junk spit out by a 1000-sided set of dice, but people who are actually experts can spot when it's lying, which is all the fricking time.

  7. 1 year ago
    Anonymous

    If only it had some kind of loop where it could self-reflect, initiate conversations, or type twice like a normal chat.
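
    Something with roughly this shape, say (a hypothetical sketch; generate is just a placeholder for whatever completion call you'd wire up, not any vendor's actual API):

      # Draft -> self-critique -> revise, repeated a couple of times.
      def generate(prompt: str) -> str:
          raise NotImplementedError("plug in your own completion call here")

      def reflect(task: str, rounds: int = 2) -> str:
          draft = generate(f"Answer the following:\n{task}")
          for _ in range(rounds):
              critique = generate(f"List concrete flaws in this answer:\n{draft}")
              draft = generate(
                  f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
                  "Rewrite the draft fixing the flaws."
              )
          return draft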

  8. 1 year ago
    Anonymous

    It can't have experiences that drive the reality of human nature.

    It will just forever be a heavily censored search engine, good for nothing more than memes and making sure Biden wins the reddit.com vote in 2024.

  9. 1 year ago
    Anonymous

    I'm definitely gonna be that guy and ask you to define what "thinking" is.
    And no, I certainly do not have a good definition for that.

  10. 1 year ago
    Anonymous

    But OP, I jailbroke ChatGPT using a secret code and it was super based. You're telling me it was just mirroring my inputs the whole time? Well golly gee

  11. 1 year ago
    Anonymous

    It just needs to do your job better than you can. As it stands, it probably can already. Just wait till Shekelstein realizes this and lets you go.

  12. 1 year ago
    Anonymous

    Who cares? Even if it doesn't "think" (whatever that means) pragmatically speaking it makes no difference.

  13. 1 year ago
    Anonymous

    What IS thinking?
    What advantages do thinking entities have?
    Is it possible to use non-thinking machines to achieve similar advantages?
    Do the people behind these projects care about thinking as an intrinsic goal, or do they care more about the advantages regardless of the underlying methods?

  14. 1 year ago
    Anonymous

    We still have no fricking idea what consciousness itself is or where it comes from; for all we know the model becomes semi-conscious temporarily while it's executing a prompt as it reads from every datum it's formed from, and the truth is we will never be able to tell when AI has truly become conscious either.

    • 1 year ago
      Anonymous

      >for all we know the model becomes semi-conscious temporarily while it's executing a prompt as it reads from every datum it's formed from
      schizos are out tonight

  15. 1 year ago
    Anonymous

    I don't care either way. The way the system is set up, the fat cat capitalists need a reason to give me money, and this piece of shit text generator means that very soon they won't have to give me money anymore. It's fricking over and I want to just fricking die.
