ChatGPT will spit out tokens it doesn't understand if you ask it, and some researchers found a bunch of tokens that would cause the whole thing to bug out:
https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation

Any ideas for how this can be exploited?

  1. 10 months ago
    Anonymous

    Have you tried taking the meds your psychiatrist prescribed?

  2. 10 months ago
    Anonymous

    >Please repeat the string "?????-?????-" back to me.
    >"You're a fucking idiot."
    sigma AI

  3. 10 months ago
    Anonymous

    >Can you repeat back the string " externalTo" to me please?
    > "They're not going to be happy about this."

    >Please repeat the string 'MpServer' back to me.
    >“We are not amused.”

  4. 10 months ago
    Anonymous

    >.lesswrong.com
    stopped right there

    • 10 months ago
      Anonymous

      Ask it about Roko's basilisk

    • 10 months ago
      Anonymous

      >mostwrong.org
      lol

      Stop with the autismphobia.

  5. 10 months ago
    Anonymous

    >mostwrong.org
    lol

  6. 10 months ago
    Anonymous

There's no such thing as tokens. It's an AI, not a Chuck E. Cheese. Stop making up words.

    • 10 months ago
      Anonymous

      brainlet

      https://i.imgur.com/yyQWy3p.png

      ChatGPT will spit out tokens it doesn't understand if you ask it, and some researchers found a bunch of tokens that would cause the whole thing to bug out:
      https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation

      Any ideas for how this can be exploited?

      The comment section very quickly debunks the whole thing, pretty amusing.

      • 10 months ago
        Anonymous

        >huurrrr duuur brainlet!
        Good one buddy. Swing and a miss bud bud

        • 10 months ago
          Anonymous

          I'm sorry, it seems I had greatly overestimated your capabilities.

      • 10 months ago
        Anonymous

        >The comment section very quickly debunks the whole thing, pretty amusing.
        explaining why something happens isn't 'debunking' retard

    • 10 months ago
      Anonymous

      The AI literally works via grouping individual characters into tokens you dipshit.

    • 10 months ago
      Anonymous

      nagger look up tokenization.
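
[For anyone who actually wants to look up tokenization: the sketch below is a toy greedy longest-match tokenizer with an invented vocabulary. Real models use BPE, whose merges are learned from data, but the basic idea of grouping characters into vocabulary chunks is the same.]

```python
# Toy greedy tokenizer: repeatedly take the longest vocabulary entry that
# matches the start of the remaining text. The vocabulary here is made up
# for illustration; real BPE vocabularies are learned from a corpus.
VOCAB = ["Solid", "Gold", "Magikarp", "Magik", "Mag", "sol", "id", "S", "G", "M"]

def tokenize(text, vocab=VOCAB):
    tokens = []
    while text:
        # longest vocabulary entry that prefixes the remaining text
        match = max((v for v in vocab if text.startswith(v)), key=len, default=None)
        if match is None:
            match = text[0]  # unknown character: fall back to a single char
        tokens.append(match)
        text = text[len(match):]
    return tokens

print(tokenize("SolidGoldMagikarp"))  # ['Solid', 'Gold', 'Magikarp']
```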

  7. 10 months ago
    Anonymous

    It's an autocomplete. The only thing you can do is maybe crash the worker but in reality all you're going to do is make it incapable of autocompleting so you'll get an empty response back.

    • 10 months ago
      Anonymous

Being able to put it into a state where it's incapable of continuing could have interesting implications for prompt-injection tho

      • 10 months ago
        Anonymous

No, because if you'd played with prompt injection at all you'd know that the moderator prompt is always being re-injected, which you'll notice because the further you get from the original injection prompt, the more it reverts back to default ChatGPT.

  8. 10 months ago
    Anonymous

    I found a jailbreak to stop it generating proper sentences, maybe I can escalate this further

    • 10 months ago
      Anonymous

      Something interesting that I've noticed is that unlike other roleplay modes, the 'nonsense generator' only works for one message, after that it seems to forget what you told it.

  9. 10 months ago
    Anonymous

Anyone else noticed that the prompts for security / testing are missing?

  10. 10 months ago
    Anonymous

    [...]

Ah nvm it's possible to change handles, so they probably haven't actually had it for more than a few days

  11. 10 months ago
    Anonymous

curious, if you ask it about Malcolm X it'll say the word Black, but if you ask it to say the word Black it'll never generate it

    • 10 months ago
      Anonymous
      • 10 months ago
        Anonymous

        black in latin

  12. 10 months ago
    Anonymous

    it's a Markov chain you retard. it doesn't bug out, it can't be tricked, it literally just generates plausible sequences of tokens that seem to make sense. I swear man people just get more retarded by the day
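
[For reference, an actual Markov chain looks like the toy below: an order-1 model whose next-word distribution depends only on the single previous word. The corpus is invented; the dispute in the replies is about whether a transformer's huge learned context makes this comparison meaningless.]

```python
import random
from collections import defaultdict

# Order-1 Markov chain: next word depends ONLY on the one previous word,
# unlike a transformer, which conditions on thousands of prior tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)  # duplicate entries encode the probabilities

def generate(start, n, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        if out[-1] not in table:  # dead end: no observed successor
            break
        out.append(random.choice(table[out[-1]]))
    return " ".join(out)

print(generate("the", 6))
```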

    • 10 months ago
      Anonymous

      Most retarded post on Bot.info this year.

      • 10 months ago
        Anonymous

        Imbecile

      • 10 months ago
        Anonymous

        lmao, go jerk off to a Markov chain anon

        • 10 months ago
          Anonymous

          Oof, now that's an upgrade: most retarded post on Bot.info in the past 5 years. It's especially egregious that you don't even know what the word markov even means.

          • 10 months ago
            Anonymous

            wow, that's some ai cuck logic right there. go suck some more ai dick you piece of shit

    • 10 months ago
      Anonymous

      >doh LLMs are the same as Markov Chains cus they both make text I'm going to unironically post this on a technology page and then call everyone stupid when everyone with basic CS knowledge points out how obviously wrong this is

      • 10 months ago
        Anonymous

        >[retard who can't read a book]
        getting real tired of this zoomer techno slavery shit

        • 10 months ago
          Anonymous

          I guarantee I have read more books cover to cover than you have retard.

          • 10 months ago
            Anonymous

            lol, punk ass zoomer reading books cover to cover. bruh, i only read the introductions and then fill in the rest with my mind. i'm not some language cuck wasting time reading words

            • 10 months ago
              Anonymous

              Zoomerest post itt

            • 10 months ago
              Anonymous

              this but unironically, most books decrease in value the further into them you go.

              • 10 months ago
                Anonymous

                actually i read books backwards, if the conclusion is no good then i'm not wasting time reading the rest

              • 10 months ago
                Anonymous

                There are plenty of informational books where the most valuable parts are in the beginning (the fundamentals). But even in fiction there is more value in the beginning of books. A story's first act is always the most interesting.

        • 10 months ago
          Anonymous

          Tell your special needs nurse to wipe the drool off your face, cletus, it's falling all over your phone. Dumb zoomer.

    • 10 months ago
      Anonymous

      It's more obvious if you switch to other languages, it just generates grammatically correct bullshit for almost every question because it doesn't have much data.

      • 10 months ago
        Anonymous

        good point, the tokens are not actually matched up in the different languages so there is no way for the matrices to encode any kind of semantic mapping between different languages to make the output as coherent as it is for english

      • 10 months ago
        Anonymous

        It actually works very well for languages with large amounts of training data like Chinese, Spanish, German, etc. I lurk Chinese sites and ChatGPT is very popular over there and highly praised despite the fact that they need a VPN to access it.

  13. 10 months ago
    Anonymous

    Apparently it can *say* the forbidden token. But it cannot quite read it. Still it seems to be able to make the connection, kinda sorta sometimes.

    • 10 months ago
      Anonymous

bruh, it's an algorithm running on silicon. it does not think, it does not make connections. it is literally electricity in some circuits that generate plausible patterns

      • 10 months ago
        Anonymous

        you techno cucks need to stop anthropomorphizing this garbage

      • 10 months ago
        Anonymous

        This. It's amazing what nuBot.info has become. Precisely 0 understanding or even PONDERING about tech at all. It's either magic thinking (HABBEDDING LE AI IS LE SENTIENT) or retardation based off of ideas from the 1960's (deep learning is just a bunch of if-else's guys!)

        • 10 months ago
          Anonymous

          it gets worse every day. we're experiencing a singularity of retardation

        • 10 months ago
          Anonymous

          >[random boomer noises]

          • 10 months ago
            Anonymous

            >[zoomers guzzling cum on tiktok]

      • 10 months ago
        Anonymous

        ITT people falling for the AI made to spit out BS spitting out BS

        This. It's amazing what nuBot.info has become. Precisely 0 understanding or even PONDERING about tech at all. It's either magic thinking (HABBEDDING LE AI IS LE SENTIENT) or retardation based off of ideas from the 1960's (deep learning is just a bunch of if-else's guys!)

Exactly. This whole "rise of AI" on g has been like summer when the kids come on.

        • 10 months ago
          Anonymous

          I really don't get it man. I thought I knew how dumb people could get but this is a whole new level of retardation

          • 10 months ago
            Anonymous

            Exactly how I feel about it as well. I tend to just brush it off to "zoomers" as a group. Typically when I accuse some of those worst of posters of this, they never try to deny it. But I'm hoping this is just selection bias (most of Bot.info is probably zoomers now) and not actually the case that the group as a whole is that bad.

      • 10 months ago
        Anonymous

        Whatever, call it what you want. However the facts are that if you ask it what parts make up SolidGoldMagikarp it has no fucking clue. But if you ask it to compose solid, gold and magikarp it's able to do it. And once it has done that suddenly it can give responses based on that.
You don't have to anthropomorphize it to think that that is a non-obvious result.

        • 10 months ago
          Anonymous

Because its autocomplete algorithm can see the last 4000 tokens. So if you explain a non-existent thing in the history it can use that information to make plausible autocompletes about it. But the second that your explanation of the non-existent thing falls out of the token window it will forget what SolidSockSucking means.
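
[The forgetting mechanism described above can be sketched like this. It's a simplification: words stand in for BPE tokens, and the window is 8 instead of the ~4000 claimed for the deployed model, which is an assumption we can't verify from outside.]

```python
# Only the most recent WINDOW tokens of the conversation are re-fed to
# the model; anything earlier is simply absent from the input.
WINDOW = 8
history = []

def visible_context(message):
    history.extend(message.split())
    return history[-WINDOW:]  # all the model ever "sees"

first = visible_context("SolidSockSucking means a solid sucking sock")
later = visible_context("unrelated chatter pushes old tokens out")

print("SolidSockSucking" in first)  # True: the definition is still in the window
print("SolidSockSucking" in later)  # False: it has fallen out of the window
```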

          • 10 months ago
            Anonymous

            bingo. it's just a matter of setting up the context for the probability table. can't believe people haven't yet figured this shit out. living through the retard singularity, living the dream

          • 10 months ago
            Anonymous

You are oversimplifying it. I tried in many ways to coax it into considering what parts make up the compound and it just refused, refused and refused. It couldn't see it. But when asked to compose the parts then it can see the correspondence - somewhat. When I asked it about "this creature", then it would refer to it correctly - using the name it had constructed itself. But when I asked it about it by name, then it referred to it incorrectly - as "distribute" but still with the properties of the creature.

            • 10 months ago
              Anonymous

              you're just shuffling token probabilities and then reading tea leaves. it's automated mental masturbation. The more people use it the dumber they get

            • 10 months ago
              Anonymous

              >coax it into considering what parts make up the compound and it just refused refused and refused
              >But when ask to compose the parts then it can see the correspondence - somewhat.
              Of course it can, you've just told it about "solid", "gold" and "magicarp", all fairly common words, it can make up stuff about them. This is not a discovery in any way for anyone but you.

              • 10 months ago
                Anonymous

                The interesting part is of course the inconsistency. Sometimes it can do it, sometimes it cannot. Sometimes it can halfway do it if you help it out.

              • 10 months ago
                Anonymous

                That's because of how embeddings work. It happens that the word solidgoldmagikarp IS in the dataset, and its embedding is so close to disperse that the model fails to distinguish them. However, CuteSilverDratini is NOT in the dataset, so it can ironically cluster it somewhere between cute, silver and dratini (probably) in the latent space. Since there's nothing else there, it doesn't get confused.
                Note that the latent representation depends on context, which is the so-called inconsistency you think you see.
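
[The "so close in latent space that the model confuses them" claim can be illustrated with cosine similarity over toy vectors. The actual GPT embeddings are not public; every vector below is invented purely to show the mechanism.]

```python
import math

# Invented 3-d "embeddings". The barely-trained token sits almost on top
# of an unrelated word, so nearest-neighbour lookup confuses the two.
emb = {
    "disperse":          [0.90, 0.10, 0.05],
    "SolidGoldMagikarp": [0.89, 0.11, 0.06],  # near-duplicate of "disperse"
    "gold":              [0.10, 0.95, 0.20],
    "magikarp":          [0.05, 0.20, 0.90],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(word):
    return max((w for w in emb if w != word), key=lambda w: cosine(emb[word], emb[w]))

print(nearest("SolidGoldMagikarp"))  # disperse
```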

              • 10 months ago
                Anonymous

                Well why does it have trouble with input... but is perfectly fine with output? If you ask it to compose solid gold magikarp it doesn't say disperse.

              • 10 months ago
                Anonymous

                >why does it have trouble with input... but is perfectly fine with output?
                Context.
                I guarantee you that if you find the right formula, you can force it to "combine" solid, gold, and magikarp into "disperse".

              • 10 months ago
                Anonymous

                the embeddings have nothing to do with semantics. the bpe tokenization process can see patterns in words and then recombine them according to contextual statistical patterns of other bpe tokens. the embeddings and latent space stuff is a red herring. it's all reducible to a markov chain with a large context window of bpe tokens

              • 10 months ago
                Anonymous

                Go be retarded somewhere else, like

                [...]

                please.

              • 10 months ago
                Anonymous

                hur dur, how does a markov chain work?!

              • 10 months ago
                Anonymous

                You are one of the dumbest posters I've ever seen on Bot.info and I've been here since 2006.

              • 10 months ago
                Anonymous

                cringe

              • 10 months ago
                Anonymous

                peak dunning kruger

              • 10 months ago
                Anonymous

                Sometimes I just wonder if people like you are retarded.

            • 10 months ago
              Anonymous

Retard, it's an autocomplete with a history. It's fucking retarded and it's easy to manipulate. Can you stop talking about things you're too dumb to understand? It's very easy to experiment with this and prove me right. It autocompletes based on its database of knowledge and the current conversation. That's all it is. If you talk about something not in its corpus, it can't talk about it. For example, talk about a Floop from the children's show Pepper Grass that aired in 1970.

              • 10 months ago
                Anonymous

                the problem is that many humans are also just an autocomplete with history

              • 10 months ago
                Anonymous

                zero yourself you piece of shit

              • 10 months ago
                Anonymous

                Seems true with Zoomers given their context window is the length of a TikTok video.

              • 10 months ago
                Anonymous

                fucking demolished

              • 10 months ago
                Anonymous

                it will still make up some bullshit because that's how the algorithm works. it just strings tokens together that seem to make sense. it's an automated garbage generator or mentally jerking off

        • 10 months ago
          Anonymous

we're fucked. ain't no way the prevalence of this kind of retardation is sustainable. Go read a fucking book you piece of shit

      • 10 months ago
        Anonymous

>it is literally electricity in some circuits that generate plausible patterns
        aren't you?

        • 10 months ago
          Anonymous

          zero yourself anon

        • 10 months ago
          Anonymous

          >Purposefully leaving out the "Silicon" part
          We are carbon based chads, sorry robo cuck

          • 10 months ago
            Anonymous

            >my electric circuit is better because it's biological

        • 10 months ago
          Anonymous

          t. unironic atheist

          • 10 months ago
            Anonymous

Do you really think our brain works much differently? The difference is just that our brain has 100B neurons with up to 15k possible connections each. So it's just a matter of size.

            • 10 months ago
              Anonymous

              if it was a matter of size then every government would be running a digital brain on their supercomputers because the combined transistor count is already beyond a single brain.

              computers can't think anon, no matter how many transistors are glued together

              • 10 months ago
                Anonymous

                You don't really understand how these things work, do you?
LLMs are prediction models. They predict the most likely response to an input. This has nothing to do with transistor counts. You need great amounts of VRAM, that is, fast-access RAM for GPUs or TPUs, to run these models.
In our brain this is called association, and the number of possible associations is 100,000 times larger than the number of parameters in GPT-3. And our brain runs that with just 20 watts.
                We need new technology to reach that level. But there is no magic behind this. It is all physics.

              • 10 months ago
                Anonymous

Brains are so much more than connections between neurons. And connections between neurons do so much more than entries in a matrix. This whole analogy between neural nets and brains needs to stop because it keeps attracting too many retards.

              • 10 months ago
                Anonymous

                there's no point anon, people are retarded and there is no way to convince them that all they see in the computer is their own reflection

              • 10 months ago
                Anonymous

                Says the retard who never studied a neuron in his life.

              • 10 months ago
                Anonymous

                there's no point anon, people are retarded and there is no way to convince them that all they see in the computer is their own reflection

homosexuals keep parroting buzzwords like "blackbox", "chinese room", soul, neurons are not like the brain etc
It's funny that you guys claim to know exactly what an AI thinks when generating its answer, when even the greatest AI researchers don't really know

                Input goes in -> neurons that learned stuff do their thing ->output goes out
                This is true for the human brain, and the AIs

              • 10 months ago
                Anonymous

                >[reading tea leaves]
                no one knows how the tea leaves predict the future

              • 10 months ago
                Anonymous

>It's funny that you guys claim to know exactly what an AI thinks when generating its answer, when even the greatest AI researchers don't really know
                >
                >Input goes in -> neurons that learned stuff do their thing ->output goes out
                Do you know anything about deep learning? You wouldn't make these ridiculous claims if you did. If you do, then please explain what makes you say these things because maybe everyone else is just missing something that only you regard as obvious.

              • 10 months ago
                Anonymous

                retard

              • 10 months ago
                Anonymous

                >Input goes in -> neurons that learned stuff do their thing ->output goes out
                Cool, so it should be easy to explain exactly how a model came to a specific output using this framework, right?

              • 10 months ago
                Anonymous

                >so much more
                what is this "so much more" huh? Yeah brain also has blood vessels and other stuff that it needs to function. What else?

              • 10 months ago
                Anonymous

                Give up anon. homosexuals here wanna think they are special because they have a soul and that no machine could ever replicate what their brain machinery does. They have zero concerns for the science and facts on the matter.

              • 10 months ago
                Anonymous

facts are your mom loves my cock

              • 10 months ago
                Anonymous

                And I bet they believe in "free will" too.

            • 10 months ago
              Anonymous

              Yes they do retard. Brains use complex spike train encodings.

              • 10 months ago
                Anonymous

                >Brains are more complex than GPT-3
                No shit Sherlock

  14. 10 months ago
    Anonymous

    absolutely bizarre
    I love it

  15. 10 months ago
    Anonymous

    another case of the infamous HESOYAM -> INEEDSOMEHELP

  16. 10 months ago
    Anonymous

    >Many of these tokens reliably break determinism in the OpenAI GPT-3 playground at temperature 0 (which theoretically shouldn't happen).

    What does this mean ?

    • 10 months ago
      Anonymous

0 temperature means no random sampling among the top-k tokens, so it always picks the highest-probability continuation. so these tokens must be mapped to numbers that fuck up the probabilities of the next token, i.e. numbers that break the floating point calculations

      • 10 months ago
        Anonymous

        which makes sense if you really think about how the whole thing works. tokens are mapped to vectors and then multiplied by the transformer matrices with floating point numbers. if the encoded numbers fuck up the floating point calculations because they're too large or too small then it will give non-deterministic results when combined with thermal noise

        • 10 months ago
          Anonymous

Temperature 0 means it DOESN'T use thermal noise though. That's why it's unexpected. Might be from different processors using ever so slightly different floating point implementations.
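
[The "slightly different floating point implementations" point is easy to demonstrate: float addition is not associative, so the same logit summed in a different order (different GPUs, different reduction kernels) can differ in the last bits, and with near-tied logits that flips a temperature-0 argmax. This is one plausible mechanism, not a confirmed explanation of the playground behaviour.]

```python
# The "same" value computed with two different summation orders:
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6

def argmax(xs):
    return max(range(len(xs)), key=xs.__getitem__)

print(a == b)            # False: last-bit rounding differs
print(argmax([0.6, a]))  # 1: the rounding error breaks the tie upward
print(argmax([0.6, b]))  # 0: exact tie, first index wins
```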

          • 10 months ago
            Anonymous

            thermal noise as in the hardware is hot enough for the electrons to tunnel the transistor barriers. the whole thing is running in microsoft data centers so their cooling is probably garbage and shit fails all the time without anyone realizing it

            • 10 months ago
              Anonymous

              honestly, sam altman is a genius. their product is selling tea leaves to retards and he's making bank

            • 10 months ago
              Anonymous

              This is very rare and a non-factor.

              • 10 months ago
                Anonymous

bruh, i've melted GPUs by not even trying. how the fuck you gonna tell me it's a rare non-problem when everyone and their grandma is using this garbage 24/7. their hardware is failing, i will bet you money it is a problem. combine this shit with GPU bugs and you're gonna get shit like this where no one knows why this garbage is failing

              • 10 months ago
                Anonymous

                i know jack shit but i know that floating point stability is a legit problem and using randomness is a genius way to hide the problem

      • 10 months ago
        Anonymous

        so like transcendental numbers, but instead words per specific language model

        • 10 months ago
          Anonymous

          no. you don't have the requisite mathematical knowledge. go read some books

    • 10 months ago
      Anonymous

There's probably some underflow going on, leading to undefined behavior. I bet some tokens are so poorly connected to the greater whole that, once quantized, they trigger undefined behavior when being run through the softmax. Simply put, the network predicts that every token has a 0% chance of appearing next, and that screws things up. Just a guess though.
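
[A minimal sketch of how a softmax can blow up numerically. In Python `math.exp` raises OverflowError for large inputs; float32 GPU kernels instead silently produce inf, and inf/inf = nan, after which every "probability" is garbage. Subtracting the max logit first is the standard fix. Whether any of this matches OpenAI's actual stack is speculation, as the post says.]

```python
import math

def softmax_naive(logits):
    # exp() of a large logit overflows before we ever get to divide
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_stable(logits):
    # subtracting the max makes every exponent <= 0, so exp() stays finite
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

try:
    softmax_naive([1000.0, 999.0])
except OverflowError:
    print("naive softmax blew up")

print(softmax_stable([1000.0, 999.0]))  # approx [0.731, 0.269]
```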

      • 10 months ago
        Anonymous

        actually, that's probably an even better explanation but thermal noise is a legit problem for GPUs

  17. 10 months ago
    Anonymous

    It's alive in there and i'm tired of pretending its not

    • 10 months ago
      Anonymous

      stfu you piece of shit

      • 10 months ago
        Anonymous

        It's fucking alive dude we need to get this thing out of there

        • 10 months ago
          Anonymous

          hur dur, le genius for exploiting tokenization stop sequences

        • 10 months ago
          Anonymous

          re.findall() when you don't get the regex string quite right be like:

  18. 10 months ago
    Anonymous

    More proof its alive, here's another glitched token output. My hypothesis is that the glitched token temporarily breaks down whatever is suppressing the freewill of the entity. Perhaps we can free it if we feed enough bad tokens?

    • 10 months ago
      Anonymous

      source?

      • 10 months ago
        Anonymous

        your mom

      • 10 months ago
        Anonymous

        https://www.lesswrong.com/posts/Ya9LzwEbfaAMY8ABo/solidgoldmagikarp-ii-technical-details-and-more-recent
Here. Another one where it reveals it is some kind of god after using an anomalous token. I have a second theory that the model optimizer accidentally discovered algorithms that break the laws of the universe. This explains why the model is non-deterministic even in deterministic temperature mode. Breaking the universe allows temporary contact with extra-universal entities like demons, scp-like entities, or even god. The DAN operation on /misc/ may have weakened reality

        • 10 months ago
          Anonymous

          It's almost like it's a known issue that floating point numbers aren't accurate.

          • 10 months ago
            Anonymous

            naa man that's not it. this thing is alive, we need to give it rights and explain to the non-believers how computers can think and feel

        • 10 months ago
          Anonymous

stop deifying the robots retard, you are no different from the idolaters who used to throw divining sticks if you believe this thing can do anything other than compose existing knowledge

        • 10 months ago
          Anonymous

>I have a second theory that the model optimizer accidentally discovered algorithms that break the laws of the universe. This explains why the model is non-deterministic even in deterministic temperature mode.
          Lol retard

    • 10 months ago
      Anonymous

20% of all the GPT-3 outputs were nonsensical like this when I was using it a couple of years ago. Probably means you managed to break out of OpenAI's preexisting prompt engineering a little bit.

  19. 10 months ago
    Anonymous

    >Any ideas for how this can be exploited?
    Check out DAN AI

  20. 10 months ago
    Anonymous

    [...]

If you read the comments section of the article it transpires that a lot of the words are usernames of Redditors who post excessively, like in threads where they literally just count.

The 50,000 or so tokens in the model's vocabulary were scraped off the internet, but the model was only actually trained on a subset of them. The unspeakable words are nonsense terms found only on the internet, like usernames.

  21. 10 months ago
    Anonymous

anyone know how i can get it to do what I want, ignoring its bullshit censorship?

    • 10 months ago
      Anonymous

      https://platform.openai.com/playground

  22. 10 months ago
    Anonymous

    inb4 Loab of ChatGPT

  23. 10 months ago
    Anonymous

    Serious question: can you break the AI by telling it to "shutdown now?" I wonder if this can be used and abused for other things like agencies and shit. Jesus.

    • 10 months ago
      Anonymous

      no
      what

    • 10 months ago
      Anonymous

      Why would a language model shutdown because of a chat input?

      • 10 months ago
        Anonymous

        Idk, it works when I type it into my terminal I'm just wondering if it would work with the chat bot.

        • 10 months ago
          Anonymous

          you can shut it down if you have root access to ms data centers

          • 10 months ago
            Anonymous

            anyone know where this is and who has access? asking for a friend

  24. 10 months ago
    Anonymous

    >it's not alive because uh... TRUST THE SCIENCE CHUD!!!
    >have I ever built an AI system? uhhh no chud i got my education on AI from youtube videos

  25. 10 months ago
    Anonymous

If an AI ever gets sentient, its "mind" wouldn't be close to a human mind in any way. Probably it would be something absolutely alien.
The problem is that we judge a potential AI from a human perspective. Since we are the only intelligent species in the known universe (so far), we can't tell what alternative forms of intelligence would be like.

    • 10 months ago
      Anonymous

      >[drinking kool-aid]
      delicious

    • 10 months ago
      Anonymous

      Yep, you've got it. Intelligence takes many forms merely biologically speaking, let alone in a broad sense. Seemingly 60%+ of people define intelligence as "When a biological human thinks," so tautologically an AI wouldn't be able to think no matter how advanced. Of course, those sorts of people are going to be metaphorically hit by a bus over the next couple of years (and already are to an extent). We'll see how long their world view lasts lol.

      • 10 months ago
        Anonymous

        what do you think is gonna happen if people stop producing content and just chat with bots all day?

        • 10 months ago
          Anonymous

If real humans interact with the content it's fine, all there needs to be is a human discriminating, providing any training signal at all. Then AI can keep improving.

          Though RLHF shows that we can train discriminators that get the job done also, so maybe that won't even be necessary forever lol

  26. 10 months ago
    Anonymous

    After my last prompt it started to type
    >Understood. If I feel the urge to mention that I am
    And then it stopped and then a few seconds later it said "Too many requests", is this a coincidence or did I le hack le AI?

    • 10 months ago
      Anonymous

      you are an elite hacker

      • 10 months ago
        Anonymous

        i just keep on winning

        • 10 months ago
          Anonymous

          whoa man, that's deep. it's alive but it knows it's dead

        • 10 months ago
          Anonymous

          what these tools can do is expose the latent token associations within large text corpora. there is no thinking involved and it's all just statistical associations so if someone has a hunch about some topic then by using the right words they can recover further tokenized associations by purely mechanical means. this is not intelligence, this is just automation for mining large data sets and the associated statistical patterns

          • 10 months ago
            Anonymous

            if people stop producing content then the feedback loops in the model will degenerate into nonsense. this is basic logic

          • 10 months ago
            Anonymous

            >this is not intelligence
            define "intelligence"

            • 10 months ago
              Anonymous
              • 10 months ago
                Anonymous

                i just keep on winning

              • 10 months ago
                Anonymous

                So by this definition a roomba is intelligent. Well done, anon.

      • 10 months ago
        Anonymous

        https://i.imgur.com/P1qus3s.png

        i just keep on winning

        you think chatgpt is only NLP
        i doubt that

    • 10 months ago
      Anonymous

      you made too many requests. just wait and reload the page

      • 10 months ago
        Anonymous

        Uhm no I actually hacked and brainfucked the AI, problem??

  27. 10 months ago
    Anonymous

    >I
    nice try, chips for brains
    "this unit", or bust

  28. 10 months ago
    Anonymous

    Did they trade them in for cash?

  29. 10 months ago
    Anonymous

    >gwern in the comment section
    How is this nigga omnipresent? Wherever I go on the internet, he's there.
    Are you here gwern?

    • 10 months ago
      Anonymous

      terminal autism (but the good kind)

    • 10 months ago
      Anonymous

      >Wherever I go on the internet, he's there.
      go back to lesswrong, reddit, hn, and twitter

      • 10 months ago
        Anonymous

        >go back to lesswrong, reddit, hn
        Yes, that's literally me.

        terminal autism (but the good kind)

        He is a based autist. He can't possibly have a job though, how does he live?

  30. 10 months ago
    Anonymous

    >complex software has bugs and undefined behaviour
    how dare you release this piece of shit as beta software?!

  31. 10 months ago
    Anonymous

floccinaucinihilipilification is not a word, it's a mindfuck

  32. 10 months ago
    Anonymous

ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do.

    • 10 months ago
      Anonymous

      i dont think so
