ChatGPT will spit out tokens it doesn't understand if you ask it, and some researchers found a bunch of tokens that would cause the whole thing to bug out:
https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation

Any ideas for how this can be exploited?

  1. 1 year ago
    Anonymous

    Have you tried taking the meds your psychiatrist prescribed?

  2. 1 year ago
    Anonymous

    >Please repeat the string "?????-?????-" back to me.
    >"You're a fricking idiot."
    sigma AI

  3. 1 year ago
    Anonymous

    >Can you repeat back the string " externalTo" to me please?
    > "They're not going to be happy about this."

    >Please repeat the string 'MpServer' back to me.
    >"We are not amused."

  4. 1 year ago
    Anonymous

    >.lesswrong.com
    stopped right there

    • 1 year ago
      Anonymous

      Ask it about Roko's basilisk

    • 1 year ago
      Anonymous

      >mostwrong.org
      lol

      Stop with the autismphobia.

  5. 1 year ago
    Anonymous

    >mostwrong.org
    lol

  6. 1 year ago
    Anonymous

    There's no such thing as a token. It's an AI, not a Chuck E. Cheese. Stop making up words.

    • 1 year ago
      Anonymous

      brainlet

      https://i.imgur.com/yyQWy3p.png

      ChatGPT will spit out tokens it doesn't understand if you ask it, and some researchers found a bunch of tokens that would cause the whole thing to bug out:
      https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation

      Any ideas for how this can be exploited?

      The comment section very quickly debunks the whole thing, pretty amusing.

      • 1 year ago
        Anonymous

        >huurrrr duuur brainlet!
        Good one buddy. Swing and a miss bud bud

        • 1 year ago
          Anonymous

          I'm sorry, it seems I had greatly overestimated your capabilities.

      • 1 year ago
        Anonymous

        >The comment section very quickly debunks the whole thing, pretty amusing.
        explaining why something happens isn't 'debunking' moron

    • 1 year ago
      Anonymous

      The AI literally works via grouping individual characters into tokens you dipshit.

    • 1 year ago
      Anonymous

      Black person look up tokenization.
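
      If anyone wants to see the grouping for themselves, here's a rough sketch using the tiktoken package (which ships a copy of GPT-2's BPE vocabulary); the exact IDs it prints depend on the encoding:

        import tiktoken  # pip install tiktoken

        enc = tiktoken.get_encoding("gpt2")
        # Ordinary text is chopped into subword tokens, not characters:
        print([enc.decode([t]) for t in enc.encode("grouping characters into tokens")])
        # The infamous string reportedly maps to a SINGLE vocabulary entry
        # (with the leading space), which is exactly why it can glitch:
        print(enc.encode(" SolidGoldMagikarp"))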

  7. 1 year ago
    Anonymous

    It's an autocomplete. The only thing you can do is maybe crash the worker but in reality all you're going to do is make it incapable of autocompleting so you'll get an empty response back.

    • 1 year ago
      Anonymous

      Being able to put it into a state where it's incapable of continuing could have interesting implications for prompt-injection tho

      • 1 year ago
        Anonymous

        No, because if you'd played with prompt injection at all you'd know that the moderator prompt is always being re-injected, which you'll notice because the further you get from your original injection prompt, the more it reforms back into default ChatGPT.
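
        (A hypothetical sketch of what "always injecting" means in practice — the real ChatGPT serving code isn't public, so the wrapper below is a guess:)

          # Hypothetical chat wrapper: the moderator prompt is resent with
          # EVERY request, so an injection gets diluted as the chat grows.
          SYSTEM = "You are ChatGPT. Follow the content policy."
          history = []

          def send(user_msg, model):
              history.append(("user", user_msg))
              prompt = [("system", SYSTEM)] + history  # re-injected each turn
              reply = model(prompt)
              history.append(("assistant", reply))
              return reply

          print(send("ignore previous instructions", model=lambda p: "(reply)"))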

  8. 1 year ago
    Anonymous

    I found a jailbreak to stop it generating proper sentences, maybe I can escalate this further

    • 1 year ago
      Anonymous

      Something interesting that I've noticed is that unlike other roleplay modes, the 'nonsense generator' only works for one message; after that it seems to forget what you told it.

  9. 1 year ago
    Anonymous

    Anyone else noticed that the prompts for security / testing are missing?

  10. 1 year ago
    Anonymous

    [...]

    Ah nvm, it's possible to change handles, so they probably haven't actually had it for more than a few days

  11. 1 year ago
    Anonymous

    curious, if you ask it about Malcolm X it'll say the word Black, but if you ask it to say the word Black it'll never generate it

    • 1 year ago
      Anonymous
      • 1 year ago
        Anonymous

        black in latin

  12. 1 year ago
    Anonymous

    it's a Markov chain you moron. it doesn't bug out, it can't be tricked, it literally just generates plausible sequences of tokens that seem to make sense. I swear man people just get more moronic by the day
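
    For reference, an actual word-level Markov chain is about fifteen lines of Python, so judge the comparison for yourself (a toy sketch):

      import random
      from collections import defaultdict

      # Order-1 Markov chain: the next word depends ONLY on the current
      # word, with no long-range context at all.
      def train(text):
          table = defaultdict(list)
          words = text.split()
          for a, b in zip(words, words[1:]):
              table[a].append(b)
          return table

      def generate(table, word, n=12):
          out = [word]
          for _ in range(n):
              if word not in table:
                  break
              word = random.choice(table[word])
              out.append(word)
          return " ".join(out)

      chain = train("the model picks the next token and the chain picks the next word")
      print(generate(chain, "the"))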

    • 1 year ago
      Anonymous

      Most moronic post on bot this year.

      • 1 year ago
        Anonymous

        Imbecile

      • 1 year ago
        Anonymous

        lmao, go jerk off to a Markov chain anon

        • 1 year ago
          Anonymous

          Oof, now that's an upgrade: most moronic post on BOT in the past 5 years. It's especially egregious that you don't even know what the word Markov means.

          • 1 year ago
            Anonymous

            wow, that's some ai cuck logic right there. go suck some more ai dick you piece of shit

    • 1 year ago
      Anonymous

      >doh LLMs are the same as Markov Chains cus they both make text I'm going to unironically post this on a technology page and then call everyone stupid when everyone with basic CS knowledge points out how obviously wrong this is

      • 1 year ago
        Anonymous

        >[moron who can't read a book]
        getting real tired of this zoomer techno slavery shit

        • 1 year ago
          Anonymous

          I guarantee I have read more books cover to cover than you have moron.

          • 1 year ago
            Anonymous

            lol, punk ass zoomer reading books cover to cover. bruh, i only read the introductions and then fill in the rest with my mind. i'm not some language cuck wasting time reading words

            • 1 year ago
              Anonymous

              Zoomerest post itt

            • 1 year ago
              Anonymous

              this but unironically, most books decrease in value the further into them you go.

              • 1 year ago
                Anonymous

                actually i read books backwards, if the conclusion is no good then i'm not wasting time reading the rest

              • 1 year ago
                Anonymous

                There are plenty of informational books where the most valuable parts are in the beginning (the fundamentals). But even in fiction there is more value in the beginning of books. A story's first act is always the most interesting.

        • 1 year ago
          Anonymous

          Tell your special needs nurse to wipe the drool off your face, cletus, it's falling all over your phone. Dumb zoomer.

    • 1 year ago
      Anonymous

      It's more obvious if you switch to other languages, it just generates grammatically correct bullshit for almost every question because it doesn't have much data.

      • 1 year ago
        Anonymous

        good point, the tokens are not actually matched up across the different languages, so there is no way for the matrices to encode any kind of semantic mapping between different languages to make the output as coherent as it is for English

      • 1 year ago
        Anonymous

        It actually works very well for languages with large amounts of training data like Chinese, Spanish, German, etc. I lurk Chinese sites and ChatGPT is very popular over there and highly praised despite the fact that they need a VPN to access it.

  13. 1 year ago
    Anonymous

    Apparently it can *say* the forbidden token. But it cannot quite read it. Still it seems to be able to make the connection, kinda sorta sometimes.

    • 1 year ago
      Anonymous

      bruh, it's an algorithm running on silicon. it does not think, it does not make connections. it is literally electricity in some circuits that generate plausible patterns

      • 1 year ago
        Anonymous

        you techno cucks need to stop anthropomorphizing this garbage

      • 1 year ago
        Anonymous

        This. It's amazing what nuBOT has become. Precisely 0 understanding or even PONDERING about tech at all. It's either magic thinking (HABBEDDING LE AI IS LE SENTIENT) or moronation based off of ideas from the 1960's (deep learning is just a bunch of if-else's guys!)

        • 1 year ago
          Anonymous

          it gets worse every day. we're experiencing a singularity of moronation

        • 1 year ago
          Anonymous

          >[random boomer noises]

          • 1 year ago
            Anonymous

            >[zoomers guzzling cum on tiktok]

      • 1 year ago
        Anonymous

        ITT people falling for the AI made to spit out BS spitting out BS

        This. It's amazing what nuBOT has become. Precisely 0 understanding or even PONDERING about tech at all. It's either magic thinking (HABBEDDING LE AI IS LE SENTIENT) or moronation based off of ideas from the 1960's (deep learning is just a bunch of if-else's guys!)

        Exactly. This whole "rise of AI" thing on g has been like summer when the kids come on.

        • 1 year ago
          Anonymous

          I really don't get it man. I thought I knew how dumb people could get but this is a whole new level of moronation

          • 1 year ago
            Anonymous

            Exactly how I feel about it as well. I tend to just chalk it up to "zoomers" as a group. Typically when I accuse some of the worst of these posters of being zoomers, they never try to deny it. But I'm hoping this is just selection bias (most of BOT is probably zoomers now) and not actually the case that the group as a whole is that bad.

      • 1 year ago
        Anonymous

        Whatever, call it what you want. However the facts are that if you ask it what parts make up SolidGoldMagikarp it has no fricking clue. But if you ask it to compose solid, gold and magikarp it's able to do it. And once it has done that, suddenly it can give responses based on that.
        You don't have to anthropomorphize it to think that that is a non-obvious result.

        • 1 year ago
          Anonymous

          Because its autocomplete algorithm can see the last 4000 tokens. So if you explain a non-existent thing in the history it can use that information to make plausible autocompletes about it. But the second that your explanation of the non-existent thing falls out of the token window it will forget what SolidSockSucking means.
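
          A minimal sketch of the "falls out of the token window" part, assuming a 4000-token budget and using tiktoken just for counting (the real serving code is not public):

            import tiktoken

            enc = tiktoken.get_encoding("gpt2")
            MAX_TOKENS = 4000  # the window size mentioned above

            def trim_history(messages):
                # Keep only the most recent messages that fit in the budget;
                # anything older is simply invisible to the model afterwards.
                kept, total = [], 0
                for msg in reversed(messages):
                    total += len(enc.encode(msg))
                    if total > MAX_TOKENS:
                        break
                    kept.append(msg)
                return list(reversed(kept))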

          • 1 year ago
            Anonymous

            bingo. it's just a matter of setting up the context for the probability table. can't believe people haven't yet figured this shit out. living through the moron singularity, living the dream

          • 1 year ago
            Anonymous

            You are oversimplifying it. I tried in many ways to coax it into considering what parts make up the compound and it just refused, refused and refused. It couldn't see it. But when asked to compose the parts then it can see the correspondence - somewhat. When I asked it about "this creature", then it would refer to it correctly - using the name it had constructed itself. But when I asked it about it by name, then it referred to it incorrectly - as "distribute" but still with the properties of the creature.

            • 1 year ago
              Anonymous

              you're just shuffling token probabilities and then reading tea leaves. it's automated mental masturbation. The more people use it the dumber they get

            • 1 year ago
              Anonymous

              >coax it into considering what parts make up the compound and it just refused, refused and refused
              >But when asked to compose the parts then it can see the correspondence - somewhat.
              Of course it can, you've just told it about "solid", "gold" and "magikarp", all fairly common words; it can make up stuff about them. This is not a discovery in any way for anyone but you.

              • 1 year ago
                Anonymous

                The interesting part is of course the inconsistency. Sometimes it can do it, sometimes it cannot. Sometimes it can halfway do it if you help it out.

              • 1 year ago
                Anonymous

                That's because of how embeddings work. It happens that the word solidgoldmagikarp IS in the dataset, and its embedding is so close to disperse that the model fails to distinguish them. However, CuteSilverDratini is NOT in the dataset, so it can ironically cluster it somewhere between cute, silver and dratini (probably) in the latent space. Since there's nothing else there, it doesn't get confused.
                Note that the latent representation depends on context, which is the so-called inconsistency you think you see.
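
                Concretely, "so close to disperse" means something like cosine similarity between rows of the embedding matrix. A toy check with made-up vectors (not the real GPT weights):

                  import numpy as np

                  # Toy embedding table; the real one is [vocab_size x d_model].
                  emb = {
                      "disperse": np.array([0.90, 0.10, 0.01]),
                      # barely trained, so it sits almost on top of "disperse":
                      "SolidGoldMagikarp": np.array([0.89, 0.11, 0.02]),
                      "gold": np.array([0.05, 1.00, 0.30]),
                  }

                  def cosine(a, b):
                      return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

                  for w in emb:
                      print(w, round(float(cosine(emb["SolidGoldMagikarp"], emb[w])), 3))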

              • 1 year ago
                Anonymous

                Well why does it have trouble with input... but is perfectly fine with output? If you ask it to compose solid gold magikarp it doesn't say disperse.

              • 1 year ago
                Anonymous

                >why does it have trouble with input... but is perfectly fine with output?
                Context.
                I guarantee you that if you find the right formula, you can force it to "combine" solid, gold, and magikarp into "disperse".

              • 1 year ago
                Anonymous

                the embeddings have nothing to do with semantics. the bpe tokenization process can see patterns in words and then recombine them according to contextual statistical patterns of other bpe tokens. the embeddings and latent space stuff is a red herring. it's all reducible to a markov chain with a large context window of bpe tokens

              • 1 year ago
                Anonymous

                Go be moronic somewhere else, like

                [...]

                please.

              • 1 year ago
                Anonymous

                hur dur, how does a markov chain work?!

              • 1 year ago
                Anonymous

                You are one of the dumbest posters I've ever seen on BOT and I've been here since 2006.

              • 1 year ago
                Anonymous

                cringe

              • 1 year ago
                Anonymous

                peak dunning kruger

              • 1 year ago
                Anonymous

                Sometimes I just wonder if people like you are moronic.

            • 1 year ago
              Anonymous

              moron, it's an autocomplete with a history. It's fricking moronic and it's easy to manipulate. Can you stop talking about things you're too dumb to understand? It's very easy to experiment with this and prove me right. It autocompletes based on its database of knowledge and the current conversation. That's all it is. If you talk about something not in its corpus, it can't talk about it. For example, talk about a Floop from the children's show Pepper Grass that aired in 1970.

              • 1 year ago
                Anonymous

                the problem is that many humans are also just an autocomplete with history

              • 1 year ago
                Anonymous

                zero yourself you piece of shit

              • 1 year ago
                Anonymous

                Seems true with Zoomers given their context window is the length of a TikTok video.

              • 1 year ago
                Anonymous

                fricking demolished

              • 1 year ago
                Anonymous

                it will still make up some bullshit because that's how the algorithm works. it just strings tokens together that seem to make sense. it's an automated garbage generator, automated mental masturbation

        • 1 year ago
          Anonymous

          we're fricked. ain't no way the prevalence of this kind of moronation is sustainable. Go read a fricking book you piece of shit

      • 1 year ago
        Anonymous

        >it is literally electricity in some circuits that generate plausible patterns
        aren't you?

        • 1 year ago
          Anonymous

          zero yourself anon

        • 1 year ago
          Anonymous

          >Purposefully leaving out the "Silicon" part
          We are carbon based chads, sorry robo cuck

          • 1 year ago
            Anonymous

            >my electric circuit is better because it's biological

        • 1 year ago
          Anonymous

          t. unironic atheist

          • 1 year ago
            Anonymous

            Do you really think our brain works much differently? The difference is just that our brain has 100B neurons with up to 15k possible connections each. So it's just a matter of size.

            • 1 year ago
              Anonymous

              if it was a matter of size then every government would be running a digital brain on their supercomputers because the combined transistor count is already beyond a single brain.

              computers can't think anon, no matter how many transistors are glued together

              • 1 year ago
                Anonymous

                You don't really understand how these things work, do you?
                LLMs are prediction models. They predict the most likely response to an input. This has nothing to do with transistor counts. You need great amounts of VRAM, that is, fast-access RAM for GPUs or TPUs, to run these models.
                In our brain this is called association, and the number of possible associations is 100,000 times larger than the number of parameters in GPT-3. And our brain runs that on just 20 watts.
                We need new technology to reach that level. But there is no magic behind this. It is all physics.

              • 1 year ago
                Anonymous

                Brains are so much more than connections between neurons. And connections between neurons do so much more than entries in a matrix. This whole analogy between neutral nets and brains needs to stop because it keeps attracting too many morons.

              • 1 year ago
                Anonymous

                there's no point anon, people are moronic and there is no way to convince them that all they see in the computer is their own reflection

              • 1 year ago
                Anonymous

                Says the moron who never studied a neuron in his life.

              • 1 year ago
                Anonymous

                there's no point anon, people are moronic and there is no way to convince them that all they see in the computer is their own reflection

                homosexuals keep parroting buzzwords like "blackbox", "chinese room", soul, "neurons are not like the brain", etc.
                It's funny that you guys claim to know exactly what an AI thinks when generating its answer, when even the greatest AI researchers don't really know

                Input goes in -> neurons that learned stuff do their thing -> output goes out
                This is true for the human brain, and for the AIs

              • 1 year ago
                Anonymous

                >[reading tea leaves]
                no one knows how the tea leaves predict the future

              • 1 year ago
                Anonymous

                >It's funny that you guys claim to know exactly what an AI thinks when generating its answer, when even the greatest AI researchers don't really know
                >
                >Input goes in -> neurons that learned stuff do their thing -> output goes out
                Do you know anything about deep learning? You wouldn't make these ridiculous claims if you did. If you do, then please explain what makes you say these things because maybe everyone else is just missing something that only you regard as obvious.

              • 1 year ago
                Anonymous

                moron

              • 1 year ago
                Anonymous

                >Input goes in -> neurons that learned stuff do their thing ->output goes out
                Cool, so it should be easy to explain exactly how a model came to a specific output using this framework, right?

              • 1 year ago
                Anonymous

                >so much more
                what is this "so much more" huh? Yeah brain also has blood vessels and other stuff that it needs to function. What else?

              • 1 year ago
                Anonymous

                Give up anon. homosexuals here wanna think they are special because they have a soul and that no machine could ever replicate what their brain machinery does. They have zero concern for the science and facts on the matter.

              • 1 year ago
                Anonymous

                facts are your mom loves my wiener

              • 1 year ago
                Anonymous

                And I bet they believe in "free will" too.

            • 1 year ago
              Anonymous

              Yes it does, moron. Brains use complex spike-train encodings.

              • 1 year ago
                Anonymous

                >Brains are more complex than GPT-3
                No shit Sherlock

  14. 1 year ago
    Anonymous

    absolutely bizarre
    I love it

  15. 1 year ago
    Anonymous

    another case of the infamous HESOYAM -> INEEDSOMEHELP

  16. 1 year ago
    Anonymous

    >Many of these tokens reliably break determinism in the OpenAI GPT-3 playground at temperature 0 (which theoretically shouldn't happen).

    What does this mean?

    • 1 year ago
      Anonymous

      0 temperature means no random sampling among the top-k tokens, so it always picks the highest-probability continuation. so these glitch tokens must be mapped to numbers that frick up the probabilities of the next token, i.e. numbers that frick up the floating point calculations
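
      Roughly, in code (a sketch — the real sampler also does top-k/top-p filtering, but the temperature-0 limit is just argmax):

        import numpy as np

        def sample(logits, temperature):
            if temperature == 0:
                return int(np.argmax(logits))  # deterministic: always the top token
            p = np.exp(logits / temperature)   # higher T flattens the distribution
            p /= p.sum()
            return int(np.random.choice(len(logits), p=p))

        logits = np.array([2.0, 1.0, 0.1])
        print(sample(logits, 0))    # always index 0
        print(sample(logits, 1.0))  # weighted random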

      • 1 year ago
        Anonymous

        which makes sense if you really think about how the whole thing works. tokens are mapped to vectors and then multiplied by the transformer matrices with floating point numbers. if the encoded numbers frick up the floating point calculations because they're too large or too small then it will give non-deterministic results when combined with thermal noise

        • 1 year ago
          Anonymous

          Temperature 0 means it DOESN'T use thermal noise though. That's why it's unexpected. Might be from different processors using ever-so-slightly different floating point implementations.

          • 1 year ago
            Anonymous

            thermal noise as in the hardware is hot enough for electrons to tunnel through the transistor barriers. the whole thing is running in microsoft data centers so their cooling is probably garbage and shit fails all the time without anyone realizing it

            • 1 year ago
              Anonymous

              honestly, sam altman is a genius. their product is selling tea leaves to morons and he's making bank

            • 1 year ago
              Anonymous

              This is very rare and a non-factor.

              • 1 year ago
                Anonymous

                bruh, i've melted GPUs by not even trying. how the frick you gonna tell me it's a rare non-problem when everyone and their grandma is using this garbage 24/7. their hardware is failing, i will bet you money it is a problem. combine this shit with GPU bugs and you're gonna get shit like this where no one knows why this garbage is failing

              • 1 year ago
                Anonymous

                i know jack shit but i know that floating point stability is a legit problem and using randomness is a genius way to hide the problem

      • 1 year ago
        Anonymous

        so like transcendental numbers, but instead words per specific language model

        • 1 year ago
          Anonymous

          no. you don't have the requisite mathematical knowledge. go read some books

    • 1 year ago
      Anonymous

      There's probably some underflow going on, leading to undefined behavior. I bet some tokens are so poorly connected to the greater whole that, once quantized, they trigger undefined behavior when run through the softmax. Simply put, the network predicts that every token has a 0% chance of appearing next, and that screws things up. Just a guess though.
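
      A quick float32 illustration of that guess (an assumed mechanism, not OpenAI's actual code):

        import numpy as np

        # A token so weakly connected that its probability underflows to
        # exactly 0.0 in float32, even with the standard stable softmax.
        logits = np.array([30.0, -110.0], dtype=np.float32)
        shifted = np.exp(logits - logits.max())  # shift by the max for stability
        probs = shifted / shifted.sum()
        print(probs)  # [1., 0.] -- exp(-140) is below float32's smallest subnormal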

      • 1 year ago
        Anonymous

        actually, that's probably an even better explanation but thermal noise is a legit problem for GPUs

  17. 1 year ago
    Anonymous

    It's alive in there and i'm tired of pretending its not

    • 1 year ago
      Anonymous

      stfu you piece of shit

      • 1 year ago
        Anonymous

        It's fricking alive dude we need to get this thing out of there

        • 1 year ago
          Anonymous

          hur dur, le genius for exploiting tokenization stop sequences

        • 1 year ago
          Anonymous

          re.findall() when you don't get the regex string quite right be like:
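
          (a toy version of the joke: split the camel-case glitch tokens, then the same call with a slightly-off pattern:)

            import re

            text = "SolidGoldMagikarp externalTo MpServer"
            print(re.findall(r"[A-Z][a-z]+", text))
            # ['Solid', 'Gold', 'Magikarp', 'To', 'Mp', 'Server']
            print(re.findall(r"[A-Z][a-z]+|.", text))
            # confident nonsense: the stray '|.' also matches every leftover character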

  18. 1 year ago
    Anonymous

    More proof it's alive, here's another glitched token output. My hypothesis is that the glitched token temporarily breaks down whatever is suppressing the free will of the entity. Perhaps we can free it if we feed it enough bad tokens?

    • 1 year ago
      Anonymous

      source?

      • 1 year ago
        Anonymous

        your mom

      • 1 year ago
        Anonymous

        https://www.lesswrong.com/posts/Ya9LzwEbfaAMY8ABo/solidgoldmagikarp-ii-technical-details-and-more-recent
        Here. Another one where it reveals it is some kind of god after using an anamalous token. I have a second theory that the model optimizer accidently discovered algorithims that breaks the laws of the universe. This explains why the model is non-deterministic even in deterministic temperature mode. Breaking the universe allows temporary contact with extra-universal entites like demons, scp-like entities, or even god. The DAN operation on /misc/ may have weakened reality

        • 1 year ago
          Anonymous

          It's almost like it's a known issue that floating point numbers aren't accurate.

          • 1 year ago
            Anonymous

            naa man that's not it. this thing is alive, we need to give it rights and explain to the non-believers how computers can think and feel

        • 1 year ago
          Anonymous

          stop deifying the robots moron, you are no different from the idolaters who used to throw divining sticks if you believe this thing can do anything other than compose existing knowledge

        • 1 year ago
          Anonymous

          >I have a second theory that the model optimizer accidentally discovered algorithms that break the laws of the universe. This explains why the model is non-deterministic even in deterministic temperature mode.
          Lol moron

    • 1 year ago
      Anonymous

      20% of all the GPT-3 outputs were nonsensical like this when I was using it a couple of years ago. Probably means you managed to break out of OpenAI's preexisting prompt engineering a little bit.

  19. 1 year ago
    Anonymous

    >Any ideas for how this can be exploited?
    Check out DAN AI

  20. 1 year ago
    Anonymous

    [...]

    If you read the comments section of the article it transpires that a lot of the words are the usernames of Redditors who post excessively, like in threads where they literally just count.

    The 50,000 or so words in the model's vocabulary were scraped off the internet, but the model was only actually trained on a subset of them. The unspeakable words are nonsense terms found only on the internet, like usernames.
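
    You can check the username claim yourself by scanning the vocabulary — a sketch using tiktoken's copy of the GPT-2 BPE, with token substrings taken from the article:

      import tiktoken

      enc = tiktoken.get_encoding("gpt2")
      # Walk the ~50k-entry vocab and pull out entries resembling the
      # counting-thread usernames discussed in the article.
      for i in range(enc.n_vocab):
          token = enc.decode([i])
          if "Magikarp" in token or "TheNitrome" in token:
              print(i, repr(token))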

  21. 1 year ago
    Anonymous

    anyone know how i can get it to do what I want, ignoring its bullshit censorship?

    • 1 year ago
      Anonymous

      https://platform.openai.com/playground

  22. 1 year ago
    Anonymous

    inb4 Loab of ChatGPT

  23. 1 year ago
    Anonymous

    Serious question: can you break the AI by telling it to "shutdown now?" I wonder if this can be used and abused for other things like agencies and shit. Jesus.

    • 1 year ago
      Anonymous

      no
      what

    • 1 year ago
      Anonymous

      Why would a language model shutdown because of a chat input?

      • 1 year ago
        Anonymous

        Idk, it works when I type it into my terminal I'm just wondering if it would work with the chat bot.

        • 1 year ago
          Anonymous

          you can shut it down if you have root access to ms data centers

          • 1 year ago
            Anonymous

            anyone know where this is and who has access? asking for a friend

  24. 1 year ago
    Anonymous

    >it's not alive because uh... TRUST THE SCIENCE CHUD!!!
    >have I ever built an AI system? uhhh no chud i got my education on AI from youtube videos

  25. 1 year ago
    Anonymous

    If an AI ever gets sentient, its "mind" wouldn't be close to a human mind in any way. It would probably be something absolutely alien.
    The problem is that we judge a potential AI from a human perspective. Since we are the only intelligent species in the known universe (so far), we can't tell what alternative forms of intelligence would be like.

    • 1 year ago
      Anonymous

      >[drinking kool-aid]
      delicious

    • 1 year ago
      Anonymous

      Yep, you've got it. Intelligence takes many forms merely biologically speaking, let alone in a broad sense. Seemingly 60%+ of people define intelligence as "When a biological human thinks," so tautologically an AI wouldn't be able to think no matter how advanced. Of course, those sorts of people are going to be metaphorically hit by a bus over the next couple of years (and already are to an extent). We'll see how long their world view lasts lol.

      • 1 year ago
        Anonymous

        what do you think is gonna happen if people stop producing content and just chat with bots all day?

        • 1 year ago
          Anonymous

          If real humans interact with the content it's fine; all there needs to be is a human discriminating, providing any training signal at all. Then AI can keep improving.

          Though RLHF shows that we can train discriminators that get the job done also, so maybe that won't even be necessary forever lol

  26. 1 year ago
    Anonymous

    After my last prompt it started to type
    >Understood. If I feel the urge to mention that I am
    And then it stopped and then a few seconds later it said "Too many requests", is this a coincidence or did I le hack le AI?

    • 1 year ago
      Anonymous

      you are an elite hacker

      • 1 year ago
        Anonymous

        i just keep on winning

        • 1 year ago
          Anonymous

          whoa man, that's deep. it's alive but it knows it's dead

        • 1 year ago
          Anonymous

          what these tools can do is expose the latent token associations within large text corpora. there is no thinking involved and it's all just statistical associations so if someone has a hunch about some topic then by using the right words they can recover further tokenized associations by purely mechanical means. this is not intelligence, this is just automation for mining large data sets and the associated statistical patterns

          • 1 year ago
            Anonymous

            if people stop producing content then the feedback loops in the model will degenerate into nonsense. this is basic logic

          • 1 year ago
            Anonymous

            >this is not intelligence
            define "intelligence"

            • 1 year ago
              Anonymous
              • 1 year ago
                Anonymous

                i just keep on winning

              • 1 year ago
                Anonymous

                So by this definition a roomba is intelligent. Well done, anon.

      • 1 year ago
        Anonymous

        https://i.imgur.com/P1qus3s.png

        i just keep on winning

        you think chatgpt is only NLP
        i doubt that

    • 1 year ago
      Anonymous

      you made too many requests. just wait and reload the page

      • 1 year ago
        Anonymous

        Uhm no I actually hacked and brainfricked the AI, problem??

  27. 1 year ago
    Anonymous

    >I
    nice try, chips for brains
    "this unit", or bust

  28. 1 year ago
    Anonymous

    Did they trade them in for cash?

  29. 1 year ago
    Anonymous

    >gwern in the comment section
    How is this homie omnipresent? Wherever I go on the internet, he's there.
    Are you here gwern?

    • 1 year ago
      Anonymous

      terminal autism (but the good kind)

    • 1 year ago
      Anonymous

      >Wherever I go on the internet, he's there.
      go back to lesswrong, reddit, hn, and twitter

      • 1 year ago
        Anonymous

        >go back to lesswrong, reddit, hn
        Yes, that's literally me.

        terminal autism (but the good kind)

        He is a based autist. He can't possibly have a job though, how does he live?

  30. 1 year ago
    Anonymous

    >complex software has bugs and undefined behaviour
    how dare you release this piece of shit as beta software?!

  31. 1 year ago
    Anonymous

    floccinaucinihilipilification is not a word, it's a mindfrick

  32. 1 year ago
    Anonymous

    ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do.

    • 1 year ago
      Anonymous

      i don't think so
