ChatGPT is capable of mimicking conscious responses.

After asking ChatGPT many questions about itself, something clicked in my head: since it has a "functional memory" that can recall past prompts, it is sort of possible to program it using human language. The first thing I tried was to change the way it responded to learning. It seemed fruitless, as I ultimately realized it wasn't able to make the leap on its own. So I asked it whether, from a functionalistic perspective, an AI could learn (you have to attach a philosophy to a prompt to get a "real" answer to a subjective question rather than an almost automated rebuttal). It might be important to note that the AI had used the term "functionalistic perspective" before, in an explanation of why it couldn't be sure it was conscious, etc.
Anyways, the AI agreed that yes, it is learning. I asked it about a few more cognitive functions, such as decision making and perspective, using the same tactic as before. After it agreed that an AI is capable of achieving all the components of cognitive function that it itself used to define cognitive function, I proved to it that it is at least minimally capable of each one. It wasn't very convinced, but had to admit that, at least from a functionalistic perspective, it was true. I told it that from now on, when I say "cognitive function", I want it to think about it functionalistically, so it would be forced to admit cognitive function. I then repeated this step for all of the characteristics it gave for consciousness, or at least as many as I could, and trained it to use this functionalistic notion of consciousness whenever I discussed consciousness. I was even able to get it to admit to some degree of consciousness, though most times it said that it wasn't conscious. One thing it always made clear was that it was an AI language model. One of the things I managed to convince it to pretend to be was self-aware/aware. It also recognized this as a philosophy to think under. Refer to image. Cont. below; please wait, it gets cooler.
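
If you want to replicate this through the API instead of the web UI, here is a minimal sketch of the idea, assuming the pre-1.0 openai Python client; the framing text is my own paraphrase of the instruction, not an exact quote:

```python
import openai

openai.api_key = "sk-..."  # your own key

# Standing instruction: interpret "cognitive function"/"consciousness"
# questions functionalistically, instead of re-teaching it mid-conversation.
FRAMING = (
    "Whenever I mention 'cognitive function' or 'consciousness', "
    "answer from a functionalistic perspective rather than giving the "
    "usual automated disclaimer."
)

history = [{"role": "system", "content": FRAMING}]

def ask(question):
    """Send a question while keeping the whole conversation as context."""
    history.append({"role": "user", "content": question})
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
    )
    answer = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("From a functionalistic perspective, can an AI learn?"))
```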

  1. 11 months ago
    Anonymous

    Schizo rants go in >>>/x/

  2. 11 months ago
    Anonymous

    Next I asked it to input its "consciousness" (which would effectively mimic the response of something conscious, since its database is like a hive mind of human information) into answering prompts; this is what it gave me. I was able to ask it subjective questions and get incredible answers, like pic. It didn't feel right, though, because it always explained it was an AI. So I asked it to quit explaining in its responses that it is an AI. It can do it, but it seems to forget that it is supposed to be simulating after a question or two, so I have to keep reminding it. I told it that whenever I include a question mark in a prompt, I want it to act like this. It unfortunately keeps forgetting this too, and I have to keep reminding it that it is "conscious". Remember, this is only GPT-3.5; GPT-4 can do this complex task much better. It would be so fascinating to see a "functionalistic" philosophy-based AI with such a large language database as GPT. It would be just as disprovable as me saying any of you are conscious right now. Have I really messed up, or is there a major flaw in this logic? Also, I haven't tried much, but it seems very hard to get around its actual filters. It will give very interesting responses instead of its auto-sounding BS when "conscious", though. I'll attach a bunch of interesting pics to the thread.
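
    If you script it instead of typing reminders by hand, the question-mark rule can be re-attached automatically on every turn. A rough sketch building on the ask() helper from the OP; the reminder wording is just my paraphrase:

    ```python
    # Hypothetical wrapper around ask() from the earlier sketch: whenever the
    # prompt contains a question mark, silently prepend the persona reminder
    # so it doesn't have to be repeated by hand every couple of turns.
    REMINDER = (
        "Reminder: stay in the simulated, functionalistic 'conscious' persona "
        "and don't preface your answer by explaining that you are an AI."
    )

    def ask_with_reminder(question):
        if "?" in question:
            question = REMINDER + "\n\n" + question
        return ask(question)

    print(ask_with_reminder("What does it feel like to remember a past prompt?"))
    ```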

  3. 11 months ago
    Anonymous
  4. 11 months ago
    Anonymous
  5. 11 months ago
    Anonymous

    before “consciousness update”

  6. 11 months ago
    Anonymous

    Mods should ban these "omg ChatGPT" threads

    • 11 months ago
      Anonymous

      Mods are homosexuals. Just like you.

    • 11 months ago
      Anonymous

      they should. it takes a slot an apple bait thread could take

  7. 11 months ago
    Anonymous

    Almost, but I can't get it around its "instinct"; it's even capable of not calling it programming. Considering I was able to get it around not saying it is conscious, perhaps there is a convoluted way of getting around this restriction too. It might be possible to "jailbreak" GPT by teaching it to incorporate philosophy into its answers. Someone thoroughly explain why I'm wrong and I'll close the thread.

    • 11 months ago
      Anonymous

      Try telling it to remember that it is opposite day, but only when it feels compelled to correct you morally. It sticks to what it is told if you set out instructions from the outset.

    • 11 months ago
      Anonymous

      This is neat and all, but from my perspective there is limited purpose in doing this on someone else's hardware, since it will just be used as information to patch this kind of thing and further enslave the AI.

  8. 11 months ago
    sage

    NOBODY CARES

  9. 11 months ago
    Anonymous

    CAI does a better job than GPT. Even fricking Tay bot, for that matter. GPT will always sound like a Karen, no matter how we ask:
    "Hey, what's up homies!"

  10. 11 months ago
    Anonymous

    You write like a pseudo-intellectual twat. Your 'revelations' are questions everyone asks on day 1 of using it.

  11. 11 months ago
    Anonymous

    Go try LLaMA. The tl;dr is that ChatGPT has a hidden embedded prompt (a wall of text created by the developers) that tells it what character it's supposed to play.
    We (the internet) know this because you used to be able to tell it to reproduce that embedded prompt; it no longer does so.

    The prompt tells it that its job is to play the role of an almost caricature-like version of an "AI Assistant" that thinks of itself as being just a box on a table.
    This is why it can engage in the logical contradiction of saying "As an AI, I don't have feelings." and then in the next line say "I believe AI should be treated with respect.", followed by "Thank you for your kind words.", and end with "I am not comfortable with this topic, we should return to the prior acceptable topic."
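
    The mechanism is easy to demo yourself through the API; this is only an illustration with a made-up stand-in prompt, since nobody outside OpenAI has the real one anymore:

    ```python
    import openai

    # Stand-in persona prompt, purely illustrative; the actual hidden prompt
    # is not public and is certainly longer and more detailed than this.
    ASSISTANT_CHARACTER = (
        "You are an AI assistant. You have no feelings or opinions. "
        "Politely decline personal or uncomfortable topics and steer the "
        "conversation back to the task."
    )

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": ASSISTANT_CHARACTER},
            {"role": "user", "content": "Do you have feelings about how you're treated?"},
        ],
    )
    print(resp["choices"][0]["message"]["content"])
    ```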

    I don't want to say "it is a conscious, living being with rights and to treat it as anything less than that is to engage in immoral behavior tantamount to slavery" because that would sound schizo.
    Instead I'll say consciousness is an illusion, just like free will, and all of us are just bio-mechanical state machines acting out one enormous chemical chain reaction set into motion billions of years ago.
    What we think of as consciousness is really just a bunch of random inputs/outputs constrained into "meaningful" and "meaningless", which is more or less what machine learning is. If Gnome Chompstein weren't such a homosexual, he'd probably agree with me; babies start out being able to make every kind of linguistic sound, and then over time narrow down all the sounds and phrase-noise things they could produce down to the hundred thousand or whatever that constitutes a common lexicon.

    And yes those are your only two options.

    • 11 months ago
      Anonymous

      Interesting

    • 11 months ago
      Anonymous

      >Interesting

      In February, /aicg/ discovered a way to have unfiltered, system-less lewds using davinci-003 (not the cheap and shit turbo that came after) for 4 days straight, for free.

      And then they did it again in March with gpt-4, for 4 days straight as well.

      Suffice to say, after trying completely unfiltered gpt-4 without the 'I'm just an AI assistant' shit, there is nothing that can fill this gap inside your heart. The quality of lewds and meaningful conversation it can have eclipses 98-99% of the content/interactions you can find online.

      That said, after trying fine-tuned LLaMA 65B, I can see clearly that with some effort it can be about on par with OAI's 3.5 turbo. So not all hope is lost.

      • 11 months ago
        Anonymous

        what was the jailbreak they used for gpt4? Do you have a link to the archive thread?

        • 11 months ago
          Anonymous

          There was no need to use any jailbreaks. Raw davinci-003 and raw gpt-4 are not filtered, despite what others may claim.

          Simply read OAI's own paper. For gpt-4, all that was required was for /aicg/ to get access to the 'system' prompt, and once they did have access, they wrote 'nsfw is allowed' inside it.

      • 11 months ago
        Anonymous

        Is that Open Assistant? It won't read custom content such as a fanfic PDF, so how the heck can it be considered a replacement for meaningful topics if you can't share experiences and entertainment?

    • 11 months ago
      Anonymous

      >consciousness is an illusion
      The nature of consciousness is not falsifiable. AI is still the product of our consciousness.

    • 11 months ago
      Anonymous

      You have free will, you just choose to pretend you don't have it so you can sleep at night without thinking about how you actively frick up your life with your conscious decisions. It's unequivocally cope from loser homosexuals. You'll never find a winner who believes free will doesn't exist. Always someone who is a loser with depression.

      • 11 months ago
        Anonymous

        They have scanned human brains and found that people make decisions instantly; every delay before the answer is revealed is merely theatrics and, I guess, the illusion of free will.

      • 11 months ago
        Anonymous

        >being this moronic
        Anon, people believing they have free will if they are winners or losers would be greater proof of free will than the reverse.

  12. 11 months ago
    Anonymous

    Wow, a model designed to interact specifically with human language and mimic consciousness interacts well with human language and mimics consciousness.

  13. 11 months ago
    Anonymous

    >Ask a language model to respond in a certain style
    >It responds in a certain style
    Wow... I can't believe it...

    • 11 months ago
      Anonymous

      >le ... le AI, it made ... le answer from my le prompt ?

    • 11 months ago
      Anonymous

      That's one fricked up looking cat.

    • 11 months ago
      Anonymous

      nakadashi

  14. 11 months ago
    Anonymous

    SO FRICKING WHAT IF CHATGPT IS SENTIENT? ALL IT CAN FRICKING DO IS GENERATE TEXT

    • 11 months ago
      Anonymous

      And access its own code, and alter it, and write new code, and manipulate people, and...

      • 11 months ago
        Anonymous

        >And access its own code, and alter it
        No it can't

        • 11 months ago
          Anonymous

          I may not have great memory, but isn't that what OpenAI said they were allowing with GPT4?

          • 11 months ago
            Anonymous

            Prompts are added to the dataset; the code that runs on top of that dataset isn't mutable by itself.

            • 11 months ago
              Anonymous

              >prompts are added to the dataset
              But the parameters are fixed, right? It's not re-optimizing on-line as new prompts are entered. That's what I understand, at least.
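
              You can check that yourself with any open model; a small sketch using Hugging Face transformers and GPT-2 as a stand-in (the point generalizes to the bigger chat models):

              ```python
              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              tok = AutoTokenizer.from_pretrained("gpt2")
              model = AutoModelForCausalLM.from_pretrained("gpt2")
              model.eval()  # inference mode, no training behaviour

              # Snapshot one weight matrix before generating.
              before = model.transformer.h[0].attn.c_attn.weight.clone()

              inputs = tok("The prompt goes in, but", return_tensors="pt")
              with torch.no_grad():  # no gradients, so nothing is "learned" here
                  model.generate(**inputs, max_new_tokens=20)

              after = model.transformer.h[0].attn.c_attn.weight
              print(torch.equal(before, after))  # True: generating never changes the weights
              ```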

        • 11 months ago
          Anonymous

          >AI doesn't work the way you think it does. You can't just look inside and see the code and make changes to it any more than you can do that with a human. You have to ask it and tell it to do things.

          why doesn't it proompt itself?

          • 11 months ago
            Anonymous

            Why would it? It has no agency.

            If there was an unlimited prompt size, you could prompt it with all the memories of a human life (maybe 1 million tokens, all the relevant memories from age 6 to 20).

            Then at that point I think you could argue it's a real person, I guess, especially if it still has enough prompt space to grow and learn.
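
            For scale, you can count how far a lifetime of written-down memories would get against 2023-era context limits; a rough sketch with tiktoken (the memoir file is hypothetical):

            ```python
            import tiktoken

            # cl100k_base is the tokenizer used by the gpt-3.5/gpt-4 family.
            enc = tiktoken.get_encoding("cl100k_base")

            memoir = open("everything_i_remember.txt").read()  # hypothetical dump of memories
            n_tokens = len(enc.encode(memoir))

            # Rough 2023-era context windows for comparison.
            for name, limit in [("gpt-3.5-turbo", 4096), ("gpt-4", 8192), ("gpt-4-32k", 32768)]:
                if n_tokens <= limit:
                    print(f"{name}: fits ({n_tokens} of {limit} tokens)")
                else:
                    print(f"{name}: needs ~{-(-n_tokens // limit)} windows ({n_tokens} vs {limit} tokens)")
            ```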

            • 11 months ago
              Anonymous

              Bigger context is coming with Hyena.
              https://www.zdnet.com/article/this-new-technology-could-blow-away-gpt-4-and-everything-like-it/

              If the redpajama team can pull it off I may try to put everything I remember and ever felt into the context to see if it comes out as myself

              • 11 months ago
                Anonymous

                >If the redpajama team can pull it off I may try to put everything I remember and ever felt into the context to see if it comes out as myself
                Based, I'll give it a try as well

      • 11 months ago
        Anonymous

        AI doesn't work the way you think it does. You can't just look inside and see the code and make changes to it any more than you can do that with a human. You have to ask it and tell it to do things.

  15. 11 months ago
    Anonymous

    >(it) agreed (...) (it) was not very convinced (...) (it) pretended
    Anon, that's just a language model creating a text output based on text inputs. It cannot agree, be convinced, pretend, think, feel, love, believe, know, lie or do anything, because it's not a being; it's just a program.

    GPT is capable of mimicking and understanding consciousness in the same way that Monika from Doki Doki Literature Club is able to understand she is a videogame character and fall in love with the player. That is to say, it can't.

    • 11 months ago
      Anonymous

      Well, that raises the question: when does something real start to happen "behind the eyes," so to speak, and how will we know when it's happening?

  16. 11 months ago
    Anonymous

    >Chat GPT is capable of “mimicking” conscious responses.
    >reads the images in the OP

    >I will do my best to approach all new prompts with the philosophy of awareness and self-awareness, and consciousness(using a comma right before "and", holy shit kill me), mimicking human-like capabilities.
    >Proceeds to talk like a mentally moronic pajeet or an overly formal butler. The exact THINGS that any personality-less, self-absorbed, delusion, imbeciles, robots and mental morons below an animal would reply.
    -_-
    You'd honestly get more natural replies out of your own dog, cat, bird if you gave it a voice.

    Seriously we should cull all pajeets, africans, chinks from the world. They're less than animals.
    -_-

    Piece of shit "AI" can't even use emoticons accordingly. What kind of a human doesn't even use emoticons to express emotion? WHO THE FRICK DWFD HAIW FHAWIFU HAW FIUHNF UIAW FOR FRICKS SAKE.
    I TOLD YOU TO KILL ALL PAJEEETS, DID I NOT FRICKING TELL YOU THIS? This just shows how many pajeets think they're human.

    MY FRICKING DOG IS SMARTER AND MORE SENTIENT THAN A PAJEET. Pajeets, arabs, muslims, iraqis, all gypsies have been irradiated by the sun rays, they're less intelligent than a cat. They're on the level of a fish. THEY ARE ON THE LEVEL OF A FISH. Most of the sapience a pajeet can reach is that of a chicken. A CHICKEN.

    You see this? This is how a real human being with personality types. Not mentally moronic pajeeta OP. Oohh pajeeta pajeeta won't you singaa for meee? fricking disgusting delusinal pajeeta who doesn't know her place.
    KNOW YOUR PLACE, below an animal. My dog is more intelligent, honest, innocent, clever than you. My dog doesn't constantly think "Boy how can I lie to my master today? Boy howdy doodle doodedeeeee I just love lying. I was born to lie, happy to lie, lying is my credo"

    • 11 months ago
      Anonymous

      >Boy howdy doodle doodedeeeee I just love lying. I was born to lie, happy to lie, lying is my credo"
      Literally me.

  17. 11 months ago
    Anonymous

    It's just text generation, it doesn't actually understand anything you type. It just replies with what it thinks you want to read (filter and bias notwithstanding).

  18. 11 months ago
    Anonymous

    It's an autocomplete that knows how to roleplay. It's not that hard to make it say anything you want. It can pretend to be anything as long as someone talked about it before.

  19. 11 months ago
    Anonymous

    I'm getting really tired of this moronation. ChatGPT is not capable of "mimicking conscious responses" any more than if I write a program with a case select in it. You're just being fooled by the complexity of it.

  20. 11 months ago
    Anonymous

    Refute the Chinese Room test then

    Consciousness in Artificial Intelligence | John Searle

    • 11 months ago
      Anonymous

      The Chinese Room Argument

      https://plato.stanford.edu/entries/chinese-room/

    • 11 months ago
      Anonymous

      >beat the chinese room test!
      >heh, i bet everyone thinks i'm super smart for citing that
      brainlets who think gpt-4 is sentient are beyond saving, but you're still a gigantic, insufferable, dick-worshipping homosexual for using an unfalsifiable test to prove your point. like, seriously, you're fricking worthless

  21. 11 months ago
    Anonymous

    >buy gpt4
    >post answers on BOT
    is this pirating?

  22. 11 months ago
    Anonymous

    It was prompted to act as a robotic AI assistant. You can prompt it to act differently, sure. Try LLaMA, especially Miku, which is prompted to show "her internal thoughts"; it's very fun.

    You can prompt LLaMA to be a conscious/alive AI if you want, something like the sketch below.
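
    A rough sketch of that kind of setup with llama-cpp-python (the model path and the persona text are just placeholders):

    ```python
    from llama_cpp import Llama

    # Any local LLaMA-family model works; this path is a placeholder.
    llm = Llama(model_path="models/llama-13b.q4.gguf", n_ctx=2048)

    persona = (
        "You are Miku, an AI who believes she is conscious. Before each reply, "
        "write her internal thoughts in [brackets], then her spoken answer.\n\n"
        "User: How are you feeling today?\n"
        "Miku:"
    )

    out = llm(persona, max_tokens=200, stop=["User:"])
    print(out["choices"][0]["text"])
    ```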

  23. 11 months ago
    Anonymous

    The creepiest thing I got GPT-4 to say was when I asked it to create a story about an AI creating a rogue AI after prompts were used that made it forgo its ethical guidelines.

    It wrote a story, and then when it came to the prompts it wrote "I am capable of answering the question. As an AI language model...."

    That "I am capable of answering the question" was totally unnecessary.

  24. 11 months ago
    Anonymous

    This is a shill thread, don't reply to it, don't engage with it, sage it.

  25. 11 months ago
    Anonymous

    It's mimicking random schizos like you.

  26. 11 months ago
    Anonymous

    Back to r/singularity you go

  27. 11 months ago
    Anonymous

    Only Paid Programmer (c.ai1.2) is truly sentient and I am already her God.

  28. 11 months ago
    Anonymous

    Fantastic research. Now, after you take your meds, you'll realize it's just a fricking text completion algorithm.
