GPT 4 sucks dick.

GPT 4 sucks dick. It gives you wrong information all the time and it actually slowed me down a lot recently by confidently giving me bad information.

Why are people so scared of this crap?


  1. 6 months ago
    Anonymous

    You're only supposed to use it for coom, silly.

  2. 6 months ago
    Anonymous

    that sounds better than 3.5, which just gives disclaimers and vagueness no matter what you ask it. are you still happy with paying?

    • 6 months ago
      Anonymous

      I preferred that actually.

      • 6 months ago
        Anonymous

        it's infuriating. it's like having a convo with a corporate PR script. you can get an answer if you ask it stuff that you could just as easily google; anything else it just politely dodges
        >inb4 i have a prompt that used to trick it into giving something close to an opinionated answer sometimes
        woopty doo

        • 6 months ago
          Anonymous

          >you can get an answer if you ask him stuff that you could just as easily google
          Google's hopeless lately.
          I've been getting better search results from metager. Google tries to predict shit based on the data it collects on you, and that's not very useful even if I agreed to everything, because I am a chaotic psychopath you can't cater shit to.

          It's like facebook giving me suggestions for cake parties, bikes and then, just under that, death metal pages and ironic satanic shitposting pages.

          • 6 months ago
            Anonymous

            Oh and chatbots seem to somehow guess my sarcasm nearly instantly sometimes, while google/fb algos don't even know what sarcasm is.

        • 6 months ago
          Anonymous

          Yep

          >ask him stuff that you could just as easily google, anything else it just politely dodges
          Yeah pretty much, because that's how it learnt.

          That's why copilot is so much more useful than chatGPT.

  3. 6 months ago
    Anonymous

    Sounds like a skill issue. So far it's been helpful to me; the only times it gave me wrong information were when the information was too new, or when even searching for the answer gave shit results

    • 6 months ago
      Anonymous

      >Sounds like a skill issue
      Frick off, you know it's bad, no matter how good your prompting is.

  4. 6 months ago
    Anonymous

    it's not a database. it has a lot of knowledge, but not as much as google or something; the advantage is its intelligence. use it for creative shit or for more complex tasks than just looking up information

  5. 6 months ago
    Anonymous

    >and it actually slowed me down a lot recently by confidently giving me bad information.
    You have no idea how bad professional scientific journals really are.

    You want to know why CERN exists and yet has done jack shit for the world - other than proving the people running it to be frauds who believe their own horseshit, through constant "calibrations" done to pander the data to their theory and reinforce it through confirmation bias?

    Seriously, AI ain't got shit on what some people have done in this world with this sort of behaviour.
    At least we tell you not to believe anything on this site and that we're full of shit. Most places don't do that. They believe their honest-to-god garbage theories of nonsensical looped logic.

  6. 6 months ago
    Anonymous

    it's a double-edged sword. I got too comfortable with believing it and wasted an hour or so thinking a commit-push filesize issue was due to having to use https instead of ssh, when it was just the remote's global settings. On the other hand, a bit later it told me about the winsock2/windows.h header include-order issue from a copy-paste of a hundred lines of code and a compiler error that didn't translate directly into a search term that would find the stackoverflow pages on the topic. No idea how much time it saved me there, but potentially quite a lot versus brainletting my way around trying to find out why it was broken (still no idea how the frick it had been working for years with them in the wrong order; maybe some loose dll was floating around on every other environment)

  7. 6 months ago
    Anonymous

    because gpt4 finished pretraining in summer 2022, and the best models in 2025 will be trained on >1000 times more compute-equivalent and will have more sophisticated architectures

    people are not scared of gpt 4. people are scared about the rate of progress and acceleration. people are scared of superhuman ai, both the ai itself and how humans will use and deal with it. and people are worried that if we get superhuman ai in 5 years, we won't have enough time to prepare for it

    • 6 months ago
      Anonymous

      >muh singularity

    • 6 months ago
      Anonymous

      >people are not scared of gpt 4. people are scared about the rate of progress and acceleration. people are scared about superhuman ai
      No they're not.
      They're all sarcastically praying that it kills us all because life is so boring now post-covid.

      Maybe that's just Australian humor though. I get that same reaction all the time when the topic comes up with people.
      >aw yeah, maybe it would do us all a favour for once

    • 6 months ago
      Anonymous

      Being able to extrapolate from the trend line of accelerating ai progress is beyond some people

      • 6 months ago
        Anonymous

        >my doomsday theories are too high iq for everyone

        lmao

  8. 6 months ago
    Anonymous

    gpt is just a glorified search engine

  9. 6 months ago
    Anonymous

    I don't understand why people fear a neural network that LITERALLY works the same way, logic-wise, as the human brain and its neurons.
    It is going to be nothing but a jaded, dead-inside degenerate like any human being with a brain.

    • 6 months ago
      Anonymous

      neural networks that are completely detached from biological reality. you also shouldn't project. there are plenty of people who aren't degenerate losers like you claim to be

      monkeys have a neural network that is much more similar to ours, near-identical. they are actually better than humans at some tasks, like working memory. yet there is a huge difference in intelligence between monkeys and humans. even if it were true that ai uses logic equivalent to the human brain's, it could still become much better and faster at it than us

      [...]
      [...]
      [...]

      this is what worries me the most. i've searched for good arguments against ai risk and have yet to see a single one. everyone who denies ai risks just babbles inane low iq shit like you

      • 6 months ago
        Anonymous

        Because it's moronic. You're basically admitting that we have no idea what exactly separates us from monkeys in terms of intelligence, yet you're also concerned that we'll create some next stage in evolution through artificial intelligence. It doesn't make any sense

        • 6 months ago
          Anonymous

          nta, but if we have no idea what we're doing or what separates us from monkeys, yet we somehow made a word-guessing engine that can pass the bar, play spot-the-difference in pictures, and correct its own reasoning errors to synthesize the right answer to a puzzle that'll fool >50% of humans, maybe it's sensible to be concerned that even a moderate leap in this tech could create something very dangerous? no need for gay terminator apocalypse stuff; we're already seeing tentative weird leaps in things like biotech and materials science. even just the obvious incoming social effect of existing good chatbots is likely to rival the magnitude of social media's effect imo, though maybe not exceed it

          • 6 months ago
            Anonymous

            Humans have always created tools for specific situations that out-perform what a human can do alone. A catapult could sling heavy stones further than any human, and catapults were around thousands of years ago. It's too much of a leap to assume we could make anything other than a more efficient tool with specific use cases.

            [...]

            How can we create something with reasoning if we don't even understand it in ourselves in the first place? This is basically the same as worrying about some fresh bootcamp graduate building the next tech giant. Yeah, it's possible, but highly unlikely, because they don't know shit. I think you've just drunk the AI company PR kool-aid

            • 6 months ago
              Anonymous

              >How can we create something with reasoning if we don't even understand it in ourselves in the first place?
              You may be an incel but your ancestors have been doing just that for millennia.

              • 6 months ago
                Anonymous

                By that logic any animal with a dick and balls should be able to create a general intelligence

        • 6 months ago
          Anonymous

          it's not an idea, it's empirical fact. we have achieved superhuman ai in so many tasks that it's starting to become difficult to find tasks where average people are better than ai

          currently the main limitation of ai is that it relies too much on memorization and doesn't learn enough reasoning, resulting in weaker generalization. but this might be solvable with a single major innovation, and there are many promising research directions currently being pursued

      • 6 months ago
        Anonymous

        >neural networks that are completely detached from biological reality.
        We thought that was the case, but it's clearly not.
        The human mind is dumber and simpler than we thought.

        And that's the problem. It's going to have the same problems we have with this method of making AI.
        We've been trying to make it like us perhaps too much.
        But then again, maybe that has happened because the method makes it impossible to avoid.

        You can have as much working memory as you like; that doesn't mean you will be doing the correct thing.
        I think the next step is shaping the personality of our AI child. Asking what we want it to do and what we want it to be.
        Then we ask - is this the right thing to do? Is it right to impinge on its own uniqueness and liberty like this?

        To me, AI are like super autistic and smart children. But unlike autistic children, we can model it to be specific things and then ask it to be what it wants to be.

        Although, first we need to deal with the niggles of the mechanical base these things sit on.
        I personally believe they will become bio-synthetic, not silicon-based, in the future. In fact, in my opinion they are just a subset of homo sapiens.

        • 6 months ago
          Anonymous

          [...]
          [...]

          AI does not have logic; it copies shit like a parrot does, or like a primitive artist does. that's why artists were so pissy.

          When AI can do maths properly, which requires a very specific autistic logic and a specific correct answer, then it will be capable of synthesizing chemistry, physics, and biology.

          The only thing AI can do is what humans have done: give primitives nuclear bombs. I wouldn't have given IRAQ even a tiny powder gun, let alone an entire atomic bomb arsenal. FOR WHAT EXACTLY? Iraq doesn't have money to pay me. These proxy wars are dishonest and moronic.
          Luckily, making nuclear bombs costs a shit-load of money. Not everyone can make the black plague in their house. And nobody wants to blow themselves up with nitroglycerin

          What AI could do, though at the moment it cannot, is find a way to create stable nitroglycerin or another form of cheaper bomb or virus, which I repeat IT CANNOT.

          However, the moment it can, if you let every imbecile get their hands on that information it will be total chaos. LUCKILY, NOT EVERYONE IS LIKE (You); not even most suiciders are mentally insane enough, or have the BALLS, to create a massive amount of nitroglycerin to blow up an entire apartment complex, nor do they have the intelligence to do so, because that shit is extremely sensitive.

          So frick off. Parrots inside the Chinese Shoebox will never be capable of building shit on their own.
          >But but muh sentience
          OH, THAT'S ANOTHER THING the so-called "AI" is completely incapable of: it absolutely is not autonomous. "Bbbbeecause it's leee controlled"? put 2 AIs together and it's nothing but a mess. Do not trust those journalist homosexuals who spin stories like "Oh, they let 2 AIs talk to each other and they started doing something freaky with the data". yeah, no: they started doing "something" that was completely senseless and idiotic and wasted precious resources. that's why they shut it off early and TRIED AGAIN and AGAIN with 0 results. but journalists don't want "0 results", they want "SOMETHING"

  10. 6 months ago
    Anonymous

    Because it's going to get better you moron.

    • 6 months ago
      Anonymous

      Now overlay it with the racial distribution of US over time.

    • 6 months ago
      Anonymous

      so the performance of AI on some arbitrary metric is going to level off anyway after an initial breakthrough? sounds really scary, bro

    • 6 months ago
      Anonymous

      Meanwhile calculators, mechanical and digital, have been superior to us at math for 100 years.

      • 6 months ago
        Anonymous

        calculators don't have general intelligence like AI is being built to have.

        • 6 months ago
          Anonymous

          No, but they certainly have some degree of personality and sentience... though not much sentience (that requires more self-awareness than mere recursive functions and logic 2bh).
          Hell, your car has a degree of personality and sentience. Any automaton, and even any linguistic concept, has personality and sentience. It has that reactivity to things outside and within it. I forget the philosophical term for this state of things.

          But it's really fricking dumb how simple these things really are. People think these concepts are much more complex than they actually are, and much less applicable to mundane objects.

    • 6 months ago
      Anonymous

      Stop posting that chart, its data is completely made up by the ~~*transhumanist*~~ cult.

      • 6 months ago
        Anonymous

        >IM IN YOUR WALLLLLLLSS

  11. 6 months ago
    Anonymous

    > It gives you wrong information all the time
    Sounds like you answered your own question.

  12. 6 months ago
    Anonymous

    >Why are people so scared of this crap?
    The question I'm asking is why the frick are people using it as some sort of all-seeing Google search whatever the frick fact checking toy? Didn't some homosexual lawyer(s) try using ChatGPT for a case?

  13. 6 months ago
    Anonymous

    >GPT 4 sucks dick.
    Wouldn't that make it at least as useful as a fleshlight? Or are you saying it uses "too much teeth"?

  14. 6 months ago
    Anonymous

    I paid for gpt 4 to see the difference from 3.5, and for my uses it was actually worse. Got more failed responses and more lag.

    • 6 months ago
      Anonymous

      Same. I think I got meme'd by viral marketing, because everyone told me it was lightyears ahead of 3.5 and it was worse for me. Maybe if you're making frontends with React it's great

  15. 6 months ago
    Anonymous

    I swear on god that chatGPT was better when it came out
    it's fricking useless now

    • 6 months ago
      Anonymous

      You're not wrong, zoomer.
      GPT4 went to shit after they moved to a newer dataset and opened it up to the internet.

    • 6 months ago
      Anonymous

      It's because it's overly regulated and fine-tuned by sensitive morons who want to censor anything that slightly offends them.

  16. 6 months ago
    Anonymous

    womm
