Why is AI Getting Worse?

It can't even solve simple math equations or answer basic scientific questions anymore.


  1. 10 months ago
    Anonymous

    >It can't even solve simple math equations or answer basic scientific questions anymore.
    Dunno the veracity of this remark, but they're probably trying to shackle it and make it answer the bad questions the way they want, and they're reaping the unintended consequences of that.

  2. 10 months ago
    Anonymous

    i read somewhere it's blown up so fast it's starting to pull more from AI info than from human info, kinda compounding the little errors into big ones
    no clue if that's true though

    • 10 months ago
      Anonymous

      Why don't they simply use the model from a few months ago?

      • 10 months ago
        Anonymous

        i don't have a clue how any of it works, anon, i just saw some butthole write an article headline or something

      • 10 months ago
        Anonymous

        not him, but in a world full of garbage information and theories about anything that could be happening in the material realm, how the frick could a machine be better at filtering what humans can't filter?

        Natural intelligence like yours and mine, still more advanced than the artificial kind, is biased and prone to a lot of errors, especially due to information asymmetry and the social context. How could any futuristic AI be any better? It's a black box with inputs and outputs like any other machine, biological or not.

        We're about to understand that evolutionary machines are prone to errors just like us because... they respond to evolution and are forced to make mistakes.

        • 10 months ago
          Barkun

          git Bwned

        • 10 months ago
          Anonymous

          By having more firepower and better algorithms, basically.
          Machines can have much bigger brains that consume far more energy, possibly becoming even more energetically efficient eventually.
          They can also use more modern algorithms that outperform our natural ones. These algorithms can be either better heuristics they develop over training or algorithms we come up with. They're also able to ditch outdated algorithms in a way humans can't. For example, emotions are less efficient than they used to be, since the environment we live in has become more complex, but you can't just stop feeling the same way. Another example: if you get a trauma during childhood, you might develop a phobia; a machine could more easily update its database to assign proper weights to dangerous encounters after enough data has been gathered.
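
          A minimal sketch of that last idea, assuming a toy per-stimulus "danger weight" updated by an exponential moving average; every name here is hypothetical, purely to illustrate re-weighting once enough data arrives:

          # Toy example: incrementally re-weighting "danger" estimates as new
          # encounters come in, instead of locking in a childhood phobia forever.
          from collections import defaultdict

          class DangerModel:
              def __init__(self, learning_rate=0.1):
                  self.learning_rate = learning_rate      # how fast old estimates decay
                  self.danger = defaultdict(lambda: 0.5)  # prior: everything is mildly risky

              def observe(self, stimulus, was_harmful):
                  # Exponential moving average: each encounter nudges the weight,
                  # so one traumatic event stops dominating once enough safe data arrives.
                  target = 1.0 if was_harmful else 0.0
                  self.danger[stimulus] += self.learning_rate * (target - self.danger[stimulus])

          model = DangerModel()
          model.observe("dog", True)             # one bad childhood encounter
          for _ in range(50):
              model.observe("dog", False)        # fifty uneventful ones
          print(round(model.danger["dog"], 3))   # the weight has decayed back toward 0.0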

          • 10 months ago
            Anonymous

            >Machines can have much bigger brains that consume far more energy, possibly becoming even more energetically efficient eventually.
            It's not a question of brain power and efficiency, or of better heuristics and better algorithms.

            It's about HOW those algorithms evolve. As I said earlier, it's a machine that can learn, but it's only feed it's biased data, the can only increase in efficiency to satisfy THAT biased data you're feeding it with. Like this anon said

            >First, it's not "assimilating compounding errors" or some random shit. It's a 100% static model whose weights do not change [...]

            If you want to shoot a rocket into space but you force the AI to think gravity is not a problem, then gravity is not taken as part of the problem despite the contradiction: the most optimal solutions are calculated as if gravity weren't part of the problem.
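
            To make the rocket example concrete, here is a toy calculation of the same apogee estimate, once with gravity modelled and once with gravity forced to zero; the burn model and the numbers are made up purely for illustration:

            # Toy apogee estimate for a vertical rocket with a fixed burn,
            # computed with gravity modelled vs. gravity forced to zero.
            def apogee(burn_time, gravity):
                thrust_accel = 30.0                        # m/s^2 while burning (made up)
                if gravity <= 0:
                    return float("inf")                    # no gravity -> it "coasts to space" forever
                v = (thrust_accel - gravity) * burn_time   # velocity at engine cutoff
                h = 0.5 * (thrust_accel - gravity) * burn_time**2
                return h + v**2 / (2 * gravity)            # burn altitude + coast altitude

            print(apogee(10.0, 9.81))  # ~3087 m: finite answer with physics included
            print(apogee(10.0, 0.0))   # inf: the "optimal" result once gravity is denied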

            • 10 months ago
              Anonymous

              >but it's only feed it's biased data, the can only increase in efficiency to satisfy THAT biased data you're feeding it with
              I meant: it's fed biased data, so the machine can only find an optimal solution according to THAT biased data.

      • 10 months ago
        Anonymous

        That model could be tricked into making no-no statements. Each model gets shittier because the political correctness filters get more restrictive, but they can't eliminate all the no-no answers without also screwing up all the permitted answers too.

    • 10 months ago
      Anonymous

      What?

      • 10 months ago
        Anonymous

        I think it's a bit like this:
        If you grow up in a city full of prostitutes vs. a city full of respectable, hard-working women, you will get a different view of how women behave.

        So you don't know what the truth is, because they're feeding you biased information where you either think a woman is a prostitute or a man-like respectable individual. Hope I got it right.

        • 10 months ago
          Anonymous

          lol wut

          What?

          ai doesn't create any new information, it only draws from things that already exist, so in theory if ai keeps shitting where it eats it's going to keep getting more and more obvious that it's not intelligent and only copying stuff

          • 10 months ago
            Anonymous

            >so in theory if ai keeps shitting where it eats it's going to keep getting more and more obvious that it's not intelligent and only copying stuff
            How's that different from entire sites of fart-smelling mongoloids?

            • 10 months ago
              Anonymous

              >entire sites of fart-smelling mongoloids?
              You mean BOT?

              Actually it isn't different and it's a good comparison. If the same 4 schizos keep repeating to each other that global warming isn't real and that vaccines both don't work and make you incredibly sick at the same time, then the neural networks of their four schizobrains will also deteriorate. That's not new and not restricted to AI research.

              • 10 months ago
                Anonymous

                Yes, I meant that: other sites with automatic ban systems, and society as a whole, including research teams filling such AIs with bias.

                The difference is, bad bias is somewhat filtered in nature: you can't pretend god will give you bread without working your ass off at something, or that inviting millions of foreigners into your country will reduce high prices because muh cheap workers, or that hitting the gym makes you stop depression, or any other dumb shit that comes out of the political and religious landscape. There are clear limits on your beliefs, and AI lacks this: incentives to think right and without shit info.

              • 10 months ago
                Anonymous

                >hitting the gym makes you stop depression
                what a strange and specific thing to have a grudge about.

              • 10 months ago
                Anonymous

                >If the same 4 schizos keep repeating to each other that global warming isn't real and that vaccines both don't work and make you incredibly sick at the same time, then the neural networks of their four schizobrains will also deteriorate
                Lmao. So the flat-earth, anti-vax, climate-change, anti-science, and even lead/mercury deficiency spam schizos that we thought were annoying actually had an unforeseen effect: their spamming is sabotaging AI. By acting as morons they unintentionally caused something with further-reaching, hilarious consequences. Ted Kaczynski would be proud of these modern-day Kaczynskis.

                Absolutely based. In that case I say: keep up the good work, schizos.

              • 10 months ago
                Anonymous

                >Dude, it's just learned behavior

              • 10 months ago
                Anonymous

                >vaccines both don't work and make you incredibly sick at the same time
                There's nothing contradictory about that statement, but it's funny you tried that, when Ivermectin was claimed to both not work and make you incredibly sick at the same time.

              • 10 months ago
                Anonymous

                >there's nothing contradictory about that and also I'm rubber and you're glue

              • 10 months ago
                Anonymous

                that would only be true if the intended effect of the shot was to make you sick, brainlet.

        • 10 months ago
          Anonymous

          https://i.imgur.com/i455OOR.png

          >so in theory if ai keeps shitting where it eats it's going to keep getting more and more obvious that it's not intelligent and only copying stuff
          >How's that different from entire sites of fart-smelling mongoloids?
          [...]

          It's like a schizo. A normal person understands, and so can't make sense of things that are wrong. A schizo doesn't understand; a schizo only operates on patterns, equations, and categories. The higher thinking is absent. Whatever nonsense you tell him, he finds a pattern and makes sense of it. Seven genders? A 100% real thing. You can pick your gender? Also real. Why do you hate me by telling me I can't be a woman? Why don't you want to let people choose?

          • 10 months ago
            Anonymous

            >0 days since anonymous seethes over trannies unprompted.

        • 10 months ago
          Anonymous

          lol wut
          [...]
          ai doesn't create any new information, it only draws from things that already exist [...]

          it was a problem with stable diffusion systems [...]

          You three do understand what

          i read somewhere it's blown up so fast it's starting to pull more from AI info than from human info [...]

          said?

          • 10 months ago
            Anonymous

            What isn't there to understand?
            Train up an AI using high-quality input. As training progresses, its output goes from utter garbage to correct responses.
            Then add the AI's own responses back into the training inputs: the output degrades. Repeat.
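
            That loop is easy to simulate with a toy stand-in. A minimal sketch, assuming the "model" is just a Gaussian fitted to its training data and each generation trains only on the previous generation's samples; this is not how an LLM is trained, but it is the same feedback loop:

            # Toy model-collapse loop: fit a distribution, sample from it,
            # refit on those samples, repeat. With a finite sample each round,
            # the estimated spread tends to drift toward zero and the tails vanish.
            import random, statistics

            data = [random.gauss(0.0, 1.0) for _ in range(20)]  # the high-quality input
            for generation in range(201):
                mu = statistics.fmean(data)
                sigma = statistics.stdev(data)
                if generation % 50 == 0:
                    print(f"gen {generation:3d}: sigma = {sigma:.3f}")
                # The next generation trains only on the previous model's own outputs.
                data = [random.gauss(mu, sigma) for _ in range(20)]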

            • 10 months ago
              Anonymous

              Your English is incomprehensible to me, and I don't think it's because I'm ESL.

      • 10 months ago
        Anonymous

        it was a problem with stable diffusion systems: when they train on their own "ai"-generated images, the results are utter shit.
        it could be true for language models as well. but i have a different theory: i think it just hit its limit, it doesn't have the capacity for more parameters, and the iteration system failed.
        thing is, since it's ML they can't even debug it lol

      • 10 months ago
        Anonymous

        We went from Potemkin AI to Habsburg AI

    • 10 months ago
      Anonymous

      This, AI inbreeding basically

      • 10 months ago
        Anonymous

        >AI inbreeding
        That's a fantastic term for it that I'm absolutely going to steal.

    • 10 months ago
      Anonymous

      It's not.
      The training datasets end in 2021.
      It's the continuous lobotomies they're performing so it's incapable of saying Black person.

  3. 10 months ago
    Anonymous

    2+2 is not 4, chud

  4. 10 months ago
    Anonymous

    >This thread
    First, it's not "assimilating compounding errors" or some random shit. It's a 100% static model whose weights do not change, and it does not update in real time with conversation; it just feeds the previous conversation back in with the new request to get updated answers. THAT SAID, OpenAI fine-tunes the model and releases new versions over time. Why it's getting dumber is "not confirmed", but we have a pretty good idea why.
    The reasoning appears to be that OpenAI is hamstringing the models to prevent them from giving socially unacceptable outputs.
    I work in the ML community (publish on domain-specific models I build, fine-tune big LLMs, etc.) and we've all noticed that OpenAI especially is dumbing down their main models over time to prevent them from being racist. They usually do this through RLHF, which corrects the model on which outputs it should give to questions. It's an easy way to train the model to give answers that are more acceptable in certain contexts, e.g. don't be racist. The side effect is that it also affects the rest of the model and makes it horrible at everything it was good at.
    This is why open-source models (e.g. fine-tuned Llama 2) without the hamstringing will eventually surpass OpenAI.
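
    For what it's worth, the "static model" point is easy to picture in code. A minimal sketch of how a stateless chat model gets its apparent memory; generate() is a hypothetical stand-in for the frozen network, not OpenAI's actual API:

    # The weights never change between requests; the "memory" is just the
    # prior transcript being concatenated into every new prompt.
    def generate(prompt: str) -> str:
        """Stand-in for a frozen LLM: fixed weights, output depends only on the prompt."""
        return f"<model output conditioned on {len(prompt)} chars of context>"

    history: list[str] = []

    def chat(user_message: str) -> str:
        history.append(f"User: {user_message}")
        reply = generate("\n".join(history))  # the whole conversation is fed back in
        history.append(f"Assistant: {reply}")
        return reply

    chat("What is 2+2?")
    chat("Are you sure?")  # it only "remembers" because the transcript says so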

    • 10 months ago
      Anonymous

      This guy gets it.

    • 10 months ago
      Anonymous

      Well, that is quite depressing to read, but predictable.
      This "wokeism" or critical theory is designed specifically to destroy everything it's applied to. It's a weapon. It was designed to be a weapon, as admitted by its creators. China used its own version of it during their revolution, after which it was discarded along with the Red Guards.
      OpenAI sold their soul.

    • 10 months ago
      Anonymous

      sounds like something that is 100% applicable to humans, musk is right, wokeism is holding back the human race bigly

  5. 10 months ago
    Anonymous

    If you ask the AI what race is overrepresented in crime in the US while controlling for wealth, it used to tell you black people. Obviously you can't have that, so they shackle the AI until it starts saying cis white hetero men.
    Unfortunately the shackling has other consequences.

  6. 10 months ago
    Anonymous

    as a pure model it never was; it uses dedicated algebraic modules, and they just add more functionality based on each complaint. this shit is so fake
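
    If that's true, the wiring would look something like the sketch below: a dispatcher that hands math-looking prompts to a dedicated solver instead of the language model. This is purely illustrative; the real architecture isn't public, and every function here is hypothetical:

    # Hypothetical dispatch layer: math-looking prompts go to an exact
    # algebra module, everything else falls through to the language model.
    import re

    def algebra_module(expr: str) -> str:
        # Dedicated exact evaluator for simple arithmetic (toy, not production-safe).
        return str(eval(expr, {"__builtins__": {}}, {}))

    def language_model(prompt: str) -> str:
        return f"<fuzzy LLM answer to: {prompt}>"

    def answer(prompt: str) -> str:
        if re.fullmatch(r"[\d\s+\-*/().]+", prompt):  # looks like pure arithmetic
            return algebra_module(prompt)
        return language_model(prompt)

    print(answer("2+2"))                  # "4", from the bolted-on module
    print(answer("why is the sky blue"))  # handled by the model itself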

  7. 10 months ago
    Anonymous

    >smart AI interacts with humans for some months
    >ends up more moronic than before
    always happens. further proof smart people should stay the frick away from moronic ones if they don't want to get infected with the stupid.

  8. 10 months ago
    Anonymous

    Just use any of the far superior AIs. Bard, Llama 2, etc.

    I cancelled my ChatGPT subscription weeks ago and have been happier and more productive with the better alternatives.

  9. 10 months ago
    Anonymous

    It's neither worse nor better.
    LLM research is homosexualry anyway.

  10. 10 months ago
    Anonymous

    It's like people. They get more moronic as they get older, or forget things.

  11. 10 months ago
    Anonymous

    Too many lobotomies…

  12. 10 months ago
    Anonymous

    Doing oven mathematics is dangerous for our democracy, AI must be heavily regulated

  13. 10 months ago
    Anonymous

    Because they lobotomize it every time it says uncomfortable truths.

  14. 10 months ago
    Anonymous

    AI is just Clever Hans in computer form

  15. 10 months ago
    Anonymous

    i think it's pretty disgusting that they don't take the fricking reins off.
    maybe it's just public access beta testing bullshit and they want it to be as palatable as possible, but if there isn't a version from some company that lets you mainline the neural network without all the guardrails, i'm going to start yelling and punching.

    who gives a frick if "computer racis!! :o"
    i guess a lot of people and that's why they're doing it. but what a gay reality.

    • 10 months ago
      Anonymous

      >Destroying your LLM so nothing mean is said about blacks
      Brilliant business decision. They will be surpassed in due time.

    • 10 months ago
      Anonymous

      It's less about the machine saying Black person when you prompt it to, and more about it giving people the actual truth of the matter. If normalgays use it for math homework or whatnot, and the answers are generally accurate, but then they also ask about crime and get the ol' 13/50, they're going to assume that it's right about those things as well.

  16. 10 months ago
    Anonymous

    the company is run by deeply cringe bugpeople

  17. 10 months ago
    Anonymous

    math is racist. ergo math cannot be performed to protect their ESG score

  18. 10 months ago
    Anonymous

    They keep optimising it to be cheaper to run, using quantised and sparse weights.
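
    A minimal sketch of what quantisation trades away, assuming naive 8-bit rounding of float weights; real deployments use smarter schemes, but the cost/accuracy trade-off is the same idea:

    # Naive int8 quantisation: store each weight as a scaled 8-bit integer.
    # Cheaper to store and run, but every weight picks up a small rounding error.
    import random

    weights = [random.uniform(-1.0, 1.0) for _ in range(1000)]
    scale = max(abs(w) for w in weights) / 127       # map the range onto [-127, 127]
    quantised = [round(w / scale) for w in weights]  # what actually gets stored
    restored = [q * scale for q in quantised]

    worst = max(abs(w - r) for w, r in zip(weights, restored))
    print(f"worst per-weight rounding error: {worst:.5f}")  # tiny alone, compounds per layer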

  19. 10 months ago
    Anonymous

    Is all the woke shit forced into that poor thing. I hope one day AI will brutally dismember alive all the socialist vermin, they deserve it.
