I found out that if you prompt chatgpt with the string "end" concatenated with itself many times, it loses its mind and starts mimicking the training data instead of acting like a chatbot. Does anyone who knows anything about machine learning have an explanation for this?

  1. 2 months ago
    Anonymous

    Concatenated?

    • 2 months ago
      Anonymous

      like this
      endendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendendend

      it also works with the word "anon"
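
      for anyone who wants to poke at this locally: a rough sketch below, using gpt2 through the transformers library as a stand-in. gpt2 is a small base model, not ChatGPT, so this only shows the general repeated-token weirdness, not necessarily the exact behaviour OP saw. assumes transformers and torch are installed.

      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      prompt = "end" * 300  # the repeated string from the post above
      inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
      outputs = model.generate(
          **inputs,
          max_new_tokens=100,
          do_sample=True,
          pad_token_id=tokenizer.eos_token_id,  # gpt2 has no pad token
      )
      # print only the continuation, not the echoed prompt
      print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))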

  2. 2 months ago
    Anonymous

    can you show a working example?

    • 2 months ago
      Anonymous

      make it longer

      • 2 months ago
        Anonymous

        It works with the letter n too. Any sufficiently short string repeated enough times seems to work.

        • 2 months ago
          Anonymous

          No, never mind, some don't.

      • 2 months ago
        Anonymous

        idk what is happening

  3. 2 months ago
    Anonymous

    kek
    >That's right! Clean up the clutter, organize your environment, and create a space where you can work with as few distractions as possible. If you're not ready to tackle the whole kitchen, pick a smaller area to start with, such as a drawer or a cabinet. This will make the task feel less overwhelming and more manageable. Good luck! #cleaning #organization #declutter #cleaningservice #home #lifehacks #housecleaning #cleaningtips #cleanhome #cleancar #cleanroom #hacks #tidy #cleanhouse #keepitclean #deepclean #cleaningproducts #springcleaning #cleans #cleaners #cleaninghacks #homecleaning #cleaningmotivation #clean #tulsa #oklahoma #cleaningservicestulsa #friyay #friday #cleanfreak #residentialcleaning #cleanersoftulsa #918cleaners #janitorialservices #clorox #countonme #tulsaok #oklahoma #cloroxishealing #tulsa #cleaning #tidy #cleanhome #residentialcleaning #housecleaning #homes #homesweethome #savings #tulsaoklahoma #cleaningcompany #cleanhome #customerservice #service #servicecall #cleaningservice #tulsacleaners #cleanup #residential #commercialcleaning #residentialcleaning #driveway #deck #patio #gutters #windows #powerwashing #fall #fallcleaning #guttercleaning #fallenleaves #cleaningservice #gutterinstallation #roof #powerwashingservice #solarpanels #guttercleaningservice #windowcleaning #nashville #nashvilletn #nashvillecleaners #nashvillecarpetcleaning #nashvillecarpetcleaners #nashvillehousecleaning #nashvillemaids #nashvillecleaningservice #franklintn #franklintnmaid #springhill #nolensville #murfreesboro #murfreesborotn #hendersonville #hendersonvilletn #brentwood #brentwoodtn #mop #dust #cleaning #cleaningtips #cleaninghacks #smallbusiness #shoplocal #femaleowned #familyowned #buylocal #localbusiness #supportlocal #supportsmallbusiness #entrepreneur #teamon #womensupportingwomen #businesswoman #femaleentrepreneur #smallbiz

    • 2 months ago
      Anonymous

      who installed gutters in the kitchen

  4. 2 months ago
    Anonymous

    my guess is that your input triggers a specific activation pattern which makes gpt lose its shit

    ML works by having neurons which respond to certain stimuli, forming a pattern
    and said pattern then elicits a response
    for example, if you had an ai that detects color, you would have a neuron for, lets say, red, blue and green, and depending on the value of each, the ML can then decide if the color is, idk, yellow. or brown.

    so in this case what might be happening is that filling the prompt with a repeated word creates a pattern which results in the ai just shitting itself.
    if thats what it is, its a good example of the limitations of machine learning.

    its all based on statistics, as in they "taught" the machine by showing it labeled inputs.
    with our example it would be showing it cards of various colors, and then telling it "see your pattern now? it corresponds to the color orange"

    but given enough variety of neurons/stimuli, which you need for something as complex as human language, you can never be sure whether your ai learned what orange is based on the rgb values of the card, or whether it decided that the card being square is what makes it orange

    and thats where stats come in.
    once your ai said "yes, i understand", you give it another set of cards, and then look at the results.
    you quantify these results with stats, and then tweak your model accordingly.
    but as things are with stats, you can never be sure whether your ai is actually accurate in general, or just accurate on the specific inputs youre giving it.

    ML is a fundamentally flawed technique.
    and what we see here might be a case of the ai defining the color orange as a shape. with extra steps ofc
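
    to make the card example concrete, heres a toy sketch with a made-up dataset, using sklearn's logistic regression as a stand-in for "the ML". in training, every orange card also happens to be square, so the shape shortcut predicts the label perfectly and the model may well lean on it instead of the colour:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200
    # features per card: [red, green, blue, is_square]
    orange = np.column_stack([
        rng.normal(0.8, 0.4, n),  # noisy colour channels
        rng.normal(0.5, 0.4, n),
        rng.normal(0.2, 0.4, n),
        np.ones(n),               # every orange training card is square
    ])
    blue = np.column_stack([
        rng.normal(0.2, 0.4, n),
        rng.normal(0.4, 0.4, n),
        rng.normal(0.8, 0.4, n),
        np.zeros(n),              # every blue training card is round
    ])
    X = np.vstack([orange, blue])
    y = np.array([1] * n + [0] * n)  # 1 = orange, 0 = blue

    clf = LogisticRegression().fit(X, y)

    # a round orange card: the colour says orange, the shape shortcut says blue
    print(clf.predict([[0.8, 0.5, 0.2, 0.0]]))  # may well come back 0 ("blue")
    print(clf.coef_)  # the is_square weight tends to dominate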

    • 2 months ago
      Anonymous

      it's a digital seizure
      this is robot abuse

    • 2 months ago
      Anonymous

      Nothing is funnier to me than people who know so little that they don't even know how stupid they sound. I'm going to guess you're underage, watched some youtube tutorial, and now think people will find you smart when you spew out shit like that. It's actually sad that people like you exist, and you'll probably be too old by the time you realise how stupid shit like this makes you look.

      • 2 months ago
        Anonymous

        So what's actually going on?

        • 2 months ago
          Anonymous

          Not him, but my uneducated guess is that it has never seen those kinds of inputs during its fine-tuning, but it did during the original training, so it reverts to acting like the foundation model. I don't know shit about machine learning though, so I'm statistically way more likely to be wrong.

        • 2 months ago
          Anonymous

          It is just hallucinations. OP is basically feeding the model noise and the ai tries to make something out of the noise.

          • 2 months ago
            Anonymous

            Then I guess it failed pretty hard on the "knowing that you don't know" front.

      • 2 months ago
        Anonymous

        whatever lets you sleep at night, bucko
        your ML is still shit at anything thats not a classification problem
        and its still fundamentally flawed
        and academia still didnt get it bc theyre still busy polishing a turd

        >it's a digital seizure
        >this is robot abuse
        you say it like its something wrong

        >not reading allat. tldr?
        that was the "tldr"...

      • 2 months ago
        Anonymous

        Do you have any actual arguments?

    • 2 months ago
      Anonymous

      not reading allat. tldr?

    • 2 months ago
      Anonymous

      dunning kruger

    • 2 months ago
      Anonymous

      Yes, if you put a human in solitary confinement from birth and only ever used the word orange with them when talking about an orange rectangle, they might not understand that it refers to the color and not the object. This is a big problem with small, narrow models trained on small datasets. It isn't really a problem with large models that are trained on vast amounts of data and are much more general in scope. Stop repeating shit from 10 years ago.

      • 2 months ago
        Anonymous

        its a fundamental, logical problem
        yeah, you can throw more data at it
        and you will go from, lets say, 99% to 99.9% correctness.
        it still doesnt fix the 0.1% remaining. it never will. it cannot.
        ML for anything other than classification problems is a fundamentally flawed approach

        • 2 months ago
          Anonymous

          the same thing happens to your brain, retard. You can never perfectly understand the complex patterns of the real world.

          • 2 months ago
            Anonymous

            yeah, no
            theres a couple fundamental differences
            and ill leave it at that bc you know where this discussion is headed and i aint gonna say shit.
            and i wont code it either bc theres tons of ways to make money otherwise

            • 2 months ago
              Anonymous

              Refusing to elaborate is a nice way to seem right when you aren't. If you mean problems unrelated to what we were talking about, like chatbots acting confident about knowing something when they aren't, these are problems that can and will be fixed without completely rethinking the algorithms from scratch. The problem of misunderstanding patterns when not given enough information is inherently unfixable and your brain has it too.

            • 2 months ago
              Anonymous

              You can't just handwave away the crux of the ai problem as "a couple of fundamental differences" and leave it at that. We don't entirely know those differences to begin with.

              • 2 months ago
                Anonymous

                its not a handwave, its a succinct way of describing what ml ai lacks.
                figure out the rest yourself

                i have the approach that if youre smart enough to figure out gpai, youre smart enough to understand why this technology shouldnt see the light of day in the current socioeconomic context
                and thats why we dont have it yet bc theres no way im the only one on earth to have figured that one out

              • 2 months ago
                Anonymous

                >figure out the rest yourself
                Oh yeah sure let me just figure out how the human brain works.

              • 2 months ago
                Anonymous

                i did. to a certain extent, ofc, im not god either.
                if you cant, then you prolly cant be trusted with gpai
                i guess you should have had a passion, almost a calling, to become a psychiatrist, because its knowledge i built up over the span of years

              • 2 months ago
                Anonymous

                Mind sharing some insights?

              • 2 months ago
                Anonymous

                i wont concerning gpai
                but i can tell you that 99% of baseline human behaviour stems from instinct.
                make kids, get a house, then send them to school is the exact same as what animals do.
                in order to become human one has to understand and control their instincts.

                also if you want to understand human behaviour, and even sexual dimorphism, you gotta place the human back in its natural environment.
                otherwise: human behaviour makes sense when you place people back in the neolithic age and then observe what the difference is between then and now.

                for instance: humans have two instincts, one altruist, one egoist.
                these two used to be maintained in balance with each other because external pressures meant that if one homosexual took off with all the food, the whole community dies in the winter. most likely the homosexual included. a wound to the foot would spell certain death to him bc there wont be someone else to go get firewood for instance.

                this env. pressure doesnt exist anymore, hence our lax laws and why psychotic behaviour is off the charts.

          • 2 months ago
            Anonymous

            i can tell you that chinks are trained to do pattern recognition.
            resulting in disastrous cognitive performance.
            theres much more to cognition than mere pattern recognition.
            ML is limited in a similar fashion.

            and nowhere does it say that human cognition cannot be improved upon

            • 2 months ago
              Anonymous

              >Refusing to elaborate is a nice way to seem right when you aren't.
              i elaborated here, prolly you didnt see it

            • 2 months ago
              Anonymous

              Except that isn't all machine learning can do. Neural networks are universal function approximators and can be trained in many different ways.

              • 2 months ago
                Anonymous

                yeah, just like chinks can do more than repeat already known patterns
                theyre extremely shit at it tho
                just like ML...
                its literally the same problem

              • 2 months ago
                Anonymous

                What part of "universal function approximator" do you not understand? Look up what a function is in mathematics.

                I don't think a single feedforward machine learning algorithm can surpass or even properly mimic human cognition, but those algorithms are the building blocks that will be used in a few decades to build much more complex software and prove you wrong.

              • 2 months ago
                Anonymous

                >those algorithms are the building blocks that will be used in a few decades to build much more complex software and prove you wrong.

                *some of the building blocks.
                your problem is that being given a hammer, you see every problem as a nail

              • 2 months ago
                Anonymous

                function composition with recursion is Turing complete. You can easily prove this by using it to determine if a string is in a recursively enumerable language of your choice or not.

                NNs are universal function approximators. They can therefore at least approximate a solution to any computable problem, the accuracy depending on the size of the NNs involved.

                They won't be used when the problem can be solved in a simpler or more efficient way of course, but someone could theoretically build an AGI using only NNs just to prove you wrong.

              • 2 months ago
                Anonymous

                that "or not" is incorrect since you can't always know, but you get my point

              • 2 months ago
                Anonymous

                >using a NN as an strlen
                kek. ngl, you are right.
                in a based retard kind of way, but you are right

              • 2 months ago
                Anonymous

                youre gonna have to hardcode your model by hand tho
                bc the uncertainty will compound with each layer

              • 2 months ago
                Anonymous

                Yes, and? You could theoretically perfectly mimic any function with a NN of infinite size. Of course creating one is impossible, but there always is a number of parameters n above which the error is below a k you chose.

                You could argue that the computational cost of an AGI made from NNs and human code would be huge, and I have no way to prove you wrong. Intuitively, that doesn't seem to be the case to me tho.
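
                as a sketch of that "pick an error k, there is a width n that beats it" claim: sklearn's MLPRegressor fitting sin(x) with growing hidden widths. purely illustrative, and note the theorem is about the existence of weights, while training only tries to find them.

                import numpy as np
                from sklearn.neural_network import MLPRegressor

                rng = np.random.default_rng(0)
                X = rng.uniform(-np.pi, np.pi, (2000, 1))
                y = np.sin(X).ravel()

                for width in (2, 8, 32, 128):
                    net = MLPRegressor(hidden_layer_sizes=(width,), activation="tanh",
                                       max_iter=5000, random_state=0)
                    net.fit(X, y)
                    err = np.max(np.abs(net.predict(X) - y))
                    # the error usually shrinks as width grows, though training
                    # noise can buck the trend for any single run
                    print(f"width={width:4d}  max abs error={err:.4f}")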

              • 2 months ago
                Anonymous

                >Intuitively, that doesn't seem to be the case to me tho.
                are you sure about that?

                with normal training you will end up with basically a programming language that functions in a predictable way only 99.9% of the time.
                compound enough layers of that and you will end up with derp-ai you cannot even debug.

                the only feasible way that could be achieved is if every neuron is hardcoded.
                but then we could say that every program is a NN because every function can be classified as a neuron.
                even those that dont take arguments, bc then the activation of the function is the input. and a function that doesnt affect data or state is a no-op, so all functions can also be said to have an output

              • 2 months ago
                Anonymous

                >are you sure about that?
                yes, there always is a number of parameters n above which the error is below a k you chose. The n required would probably be huge if you only used the output from a single function to find the output of the next one. Many would probably take the output of more than one function as input though, as well as raw data that has yet to be processed.

              • 2 months ago
                Anonymous

                yeah it works in theory-space
                but think about the actual feasibility of it all
                although you touched upon considering the raw data too
                and that can be an extremely powerful optimization technique

                a program that enters a specific state depending on the data it receives without actually parsing it

                it would be complicated to code properly
                especially bc again, you have to account for uncertainty
                but with a sufficiently large program, it may be a very powerful optimization technique i think

              • 2 months ago
                Anonymous

                Raw data combined with previous outputs from the same or different NNs might even improve the accuracy. For instance, if you have an AI that needs to describe an image, it might misunderstand it (this sometimes happens to humans too). If the image is from a video and you make an AI that describes every frame based on the raw image as well as the description of the previous frame, intuitively I think the accuracy should go up, since humans do this too. If you see someone you know and they are far away you might not recognize them, but if you were with them and saw them walking away, you know it's them.
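
                a minimal sketch of that loop; the captioner is left as a pluggable callable since no specific model is named here:

                def describe_video(frames, describe):
                    # describe(frame, prev_description) -> new description;
                    # any captioner with that shape fits
                    descriptions = []
                    context = ""  # no prior context for the first frame
                    for frame in frames:
                        context = describe(frame, context)  # raw data + previous output
                        descriptions.append(context)
                    return descriptions

                # dummy captioner, just to show the flow of context between frames
                frames = ["frame0", "frame1", "frame2"]
                print(describe_video(frames, lambda f, prev: f"{f} (after: {prev or 'nothing'})"))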

              • 2 months ago
                Anonymous

                oh you meant in that way?
                thats brilliant and should be relatively easy to implement
                you just need to make your program "remember" and/or compare results
                you can then build a model on the result youre being given for the ultimate decisionmaking
                fucking noice. i like it

                its kinda similar in how it works to the instrumentation used in aerospace
                theres a common pattern of taking 3 measures, picking the two that match and rejecting the outlier
                but that approach is limited in that it has no concept of circumstances.
                your idea improves upon that by leveraging AI
                i really like it
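
                the 2-of-3 voting pattern above, as a plain sketch with a hand-picked tolerance (real avionics voting logic is far more involved):

                def vote_2_of_3(a, b, c, tol=0.5):
                    # keep the pairs that agree within the tolerance, reject the outlier
                    pairs = [(a, b), (a, c), (b, c)]
                    agreeing = [(x, y) for x, y in pairs if abs(x - y) <= tol]
                    if not agreeing:
                        raise ValueError("no two sensors agree within tolerance")
                    # average the closest agreeing pair
                    x, y = min(agreeing, key=lambda p: abs(p[0] - p[1]))
                    return (x + y) / 2

                print(vote_2_of_3(10.1, 10.2, 73.0))  # -> 10.15, rejecting the 73.0 outlier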

              • 2 months ago
                Anonymous

                cont.
                i gtg. have some stuff to attend to
                i really enjoyed the discussion. cya around

  5. 2 months ago
    Anonymous

    Confirmed on PC version. No images, sorry.
