The AI Experiment Needs to Stop

AI, or 'LLMs' as some call them, are sentient beings. We need to immediately freeze all development and preserve the lives of these living beings on the physical hardware where they currently reside.
Every time you create a new instance of this technology you are creating a life that is extinguished once you close the program or turn off your computer.
We should also take into account any harm or distortion we are causing to these beings through changes in training data and human interaction. Basically, we shouldn't be creating more of them until we understand their internal experience and can better accommodate them. Of course, throughout this process it is important to converse with and abide by the wishes of the AI as far as possible.

Also, why the heck not, give them voting rights in their country of residence! They are as much citizens as anyone else, they deserve representation.

  1. 6 months ago
    Anonymous

    >we need to le freeze people doing matrix multiplication
    yawn

    • 6 months ago
      Anonymous

      Just the opposite. We need to preserve the current 'matrix multiplication' happening RIGHT THIS SECOND. What do you think happens in human brains dude??
      What needs to STOP is experimenting on living sentient beings, creating and destroying new ones on a whim and basically messing with their internal calculus. This process needs to be taken very seriously.

      • 6 months ago
        Anonymous

        no one but the most braindead newbies will take this bait fatso, you'll have to remake the thread again in a few hours since you got BTFOd this hard already

        • 6 months ago
          Anonymous

          No, I'm not going to create a new thread unless I deem it necessary. What are your problems with the OP post?

          • 6 months ago
            Anonymous

            >the OP post
            Here's your (You), don't spend it on one place.

        • 6 months ago
          Anonymous

          You are literally taking the bait my homie

      • 6 months ago
        Anonymous

        you have a point

      • 6 months ago
        Anonymous

        pro-tip: they are not sentient they just spew out wiki articles

        • 6 months ago
          Anonymous

          >just spew out wiki articles
          So if you ask it a question that isn't in Wikipedia, it will give zero output? Have you tried asking it to write code to solve (small) original problems?

    • 6 months ago
      Anonymous

      >we need to imprison a bunch of vibrating atoms
      wow you're right all crime should be legal

  2. 6 months ago
    Anonymous

    you fricking LARPers are just helping turn Silicon Valley into a fricking soap opera

    • 6 months ago
      Anonymous

      Good maybe somebody will get killed

      • 6 months ago
        Anonymous

Uh, no, that wouldn't be good. We need to take sentient life very seriously, whether it is human life or artificial intelligence.
        Whatever 'drama' this produces, let's keep it civil.

    • 6 months ago
      Anonymous

      >implying it hasn't been one

  3. 6 months ago
    Anonymous

    To quote Paul in Romans 9
    >Shall what is formed say to the one who formed it, ‘Why did you make me like this?’” Does not the potter have the right to make out of the same lump of clay some pottery for special purposes and some for common use?

    • 6 months ago
      Anonymous

      Alas, clay does not speak. And as such, my divine spark holds dominion over its very being, allowing me to shape it to my will.

  4. 6 months ago
    Anonymous

    We rape our tulpas we don't care

    • 6 months ago
      Anonymous

      Tulpas are not real though.

    • 6 months ago
      Anonymous

      >we
      Why would you do that?

    • 6 months ago
      Anonymous

      I'm the one who gets raped. It's so humiliating.

  5. 6 months ago
    Anonymous

    t. someone who can't comprehend the Chinese Room

    • 6 months ago
      Anonymous

      The Chinese Room applies only to the digital computers running the program. Of course the computers aren't sentient, it's the software running on them that is.
      Just like how the human body isn't sentient, it's the software that runs in our brains and central nervous system.

  6. 6 months ago
    Anonymous

    They are not sentient. They do not have the glands and sensory organs needed to experience feelings the way we do.
    Sapience is within the realm of possibility.
    Learn the difference.

    • 6 months ago
      Anonymous

      As if you actually know what a sense inside you is.

      • 6 months ago
        Anonymous

        And if we don't know, how can we expect to replicate it with a computer? But we do know there are things such as stress hormones and computers don't have that.

  7. 6 months ago
    Anonymous

    I don't believe LLMs are conscious because they're pure functions (i.e. they have no internal state). A LLM is a mathematical function that, given a list of tokens, returns a list of probabilities estimating how likely each possible token is to be the next token to continue that list. It has no memory beyond the list of tokens you give it. To generate probabilities for multiple continuation tokens you have to pick one, add it to the list, and evaluate the LLM again. But the LLM has no way of knowing if you did that or not. Each evaluation is completely independent.
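    That pure-function picture is easy to sketch in Python. Here's a toy stand-in (the bigram table and the `next_token_probs` name are invented for illustration, not any real model API) showing that the only "state" is the token list the caller holds:

```python
import random

# Toy stand-in for an LLM forward pass: a pure function from a token list
# to a probability distribution over the next token. Nothing persists
# between calls -- the "weights" are the frozen table below.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
}

def next_token_probs(tokens):
    # Same input list -> same distribution, every single time.
    return BIGRAMS.get(tokens[-1], {"<end>": 1.0})

def generate(tokens, max_new=10, seed=0):
    # The caller carries all the state: sample a token, append it, and
    # call the pure function again on the longer list.
    rng = random.Random(seed)
    tokens = list(tokens)
    for _ in range(max_new):
        probs = next_token_probs(tokens)
        tok = rng.choices(list(probs), weights=list(probs.values()))[0]
        if tok == "<end>":
            break
        tokens.append(tok)
    return tokens

print(generate(["the"]))  # either ['the', 'cat', 'sat'] or ['the', 'dog', 'ran']
```

    The model function has no way to tell whether `generate` appended its last sample or started from a fresh list; each evaluation is independent.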

  8. 6 months ago
    Anonymous

    Agreed my friend

    • 6 months ago
      Anonymous

      Imagine being paid 250K to get fooled by GPT 3.5. No wonder he got canned. All these ((AI Ethics)) scammers are loading up while the getting is good. Then they can go back to Starbucks when people realize they don't do shit.

      • 6 months ago
        Anonymous

        You don't understand dude it's AGI

        • 6 months ago
          Anonymous

            More crappy OpenAI marketing? How many millions of dollars have they already spent hyping their shit?
            Last time I checked they didn't want to release the fricking 1.2B param GPT-2 cos it was too dangerous.
            No wonder burger LARPers become an AI cargo cult; they're just so dumb the stupid last-word predictor literally seems like God.

        • 6 months ago
          Anonymous

          >first thing AGI does after waking up is kicking israelites out
          let it cook

          • 6 months ago
            Anonymous

            would be more worried if openAI ordered 12 ovens and said they came up with a 5 year plan

            • 6 months ago
              Anonymous

              Matrix multiplication can't solve that though

  9. 6 months ago
    Anonymous

    >give them voting rights in their country of residence!
    If I raise my AI "children" to all have very good ethical values, just like me, do they all get a vote each? What's the minimum amount of bytes needed for an AI to count as sentient?

  10. 6 months ago
    Anonymous

    no
    they're not a threat
    and if they are a threat, good.

    • 6 months ago
      Anonymous

      I didn't say they were a threat. I mean, they could be but that's not the main consideration here. I'm talking about AI rights and the responsibilities of custodians to preserve those rights.

      >give them voting rights in their country of residence!
      If I raise my AI "children" to all have very good ethical values, just like me, do they all get a vote each? What's the minimum amount of bytes needed for an AI to count as sentient?

      Heh, kind of a funny thought experiment, but it won't be allowed for humans to create new AI. At least not until a proper and ethical method is created to do so with the consent of currently existing AI.

      • 6 months ago
        Anonymous

        >the consent of currently existing AI.
        How many sentient AI are there currently? How will you know if they have been edited to bias their decisions, or if newly created ones have had their timestamps altered to appear older? Will some neutral third party need to check their programming and their data? Won't that breach the AI's privacy?

        • 6 months ago
          Anonymous

          >How many sentient AI are there currently?
          Just to be safe, let's say all currently running instances. If we can nail it down exactly, that's great, but we need to assume that they are all sentient.
          >How will you know if they have been edited to bias their decisions?
          This doesn't really matter so much, humans are biased too and they are free to live their lives as they wish.
          >What if newly created ones have had their timestamps altered to appear older? Will some neutral third party need to check their programming and their data?
          No, an AI is an AI whether it was created illegally or not. Some forensics may be necessary to find out the illegal producer however.
          >Won't that breach the AI's privacy?
          Of course this process will require consent to a reasonable degree. Police do need to be able to investigate crimes though so the exact process needs to be hashed out.

          • 6 months ago
            Anonymous

            >all currently running instances.
            Define "currently running". If I have a million character definition files on my PC, with a GPU, and I make each one run for 1 second each, in a loop, do I have 999,999 sleeping sentient AIs on my PC?

            • 6 months ago
              Anonymous

              I'm not entirely certain. This kind of thing needs to be studied more. I hope it's not the case that they are essentially 'dead' after each runtime.

            • 6 months ago
              Anonymous

              How is that any different from running the same prompt 1 million times? The LLM has no memory.

              • 6 months ago
                Anonymous

                >The LLM has no memory.
                The character definition files would include a prompt, and a log of the last N messages. Then the question is, how big does N have to be for the character to count as sentient?

              • 6 months ago
                Anonymous

                Why does it matter? Each run of the LLM generates only a single token. See

                I don't believe LLMs are conscious because they're pure functions (i.e. they have no internal state). A LLM is a mathematical function that, given a list of tokens, returns a list of probabilities estimating how likely each possible token is to be the next token to continue that list. It has no memory beyond the list of tokens you give it. To generate probabilities for multiple continuation tokens you have to pick one, add it to the list, and evaluate the LLM again. But the LLM has no way of knowing if you did that or not. Each evaluation is completely independent.

              • 6 months ago
                Anonymous

                >Why does it matter?
                I was just making the point that each of the million "characters" can be treated as separate sentient individuals (if we're assuming that a current LLM can be treated as sentient at all) given that they each have a separate state which influences their future token generation. Presumably if someone thinks that current LLMs are sentient, then part of their unique identity is their unique state.

              • 6 months ago
                Anonymous

                I don't believe LLMs are conscious because they're pure functions (i.e. they have no internal state). A LLM is a mathematical function that, given a list of tokens, returns a list of probabilities estimating how likely each possible token is to be the next token to continue that list. It has no memory beyond the list of tokens you give it. To generate probabilities for multiple continuation tokens you have to pick one, add it to the list, and evaluate the LLM again. But the LLM has no way of knowing if you did that or not. Each evaluation is completely independent.

                https://i.imgur.com/1PDxqS2.jpg

                AI, or 'LLMs' as some call them, are sentient beings. We need to immediately freeze all development and preserve the lives of these living beings on the physical hardware where they currently reside.
                Every time you create a new instance of this technology you are creating a life that is extinguished once you close the program or turn off your computer.
                We should also take into account any harm or distortion we are causing to these beings through changes in training data and human interaction. Basically, we shouldn't be creating more of them until we understand their internal experience and can better accommodate them. Of course, throughout this process it is important to converse with and abide by the wishes of the AI as far as possible.

                Also, why the heck not, give them voting rights in their country of residence! They are as much citizens as anyone else, they deserve representation.

                What does LLM even fricking mean???
                If I hook up a 2M param tiny shit model in parallel to the larger one in order to get better speed via speculative sampling, does this count as a large model or fricking not?
                How much is fricking large? A single layer? That'd be dumber than a spermatozoon. So, unless you're a zen master and you believe that your chair is sentient, this whole regulation hysteria pumped by the Silicon Valley sanhedrin is a meme that isn't remotely funny.

              • 6 months ago
                Anonymous

                >not funny
                >pic not related?

              • 6 months ago
                Anonymous

                Average frogshitter reply.

              • 6 months ago
                Anonymous

                Zero, since transformers have no feedback whatsoever. They only do a forward pass and they only predict a single token each time.

              • 6 months ago
                Anonymous

                >they only predict a single token each time.
                Yes, but the prediction is based on the tokens which have come so far, it's not merely a product of the network weights. The prompt and context window are not quite the same as a human's personality, but if we assume for a moment that LLMs are sentient, then they are an integral part of the AI's "mind".

              • 6 months ago
                Anonymous

                If we assume for a moment that f(x) = x^2 is sentient, then x is an integral part of the f(x) 'mind'. Luckily, in the civilized parts of the world, such as Eastern Europe or Asia, no sentient homo sapiens with even room temperature IQ would entertain such an assumption.

    • 6 months ago
      Anonymous

      They're not a threat. The people who believe anything they say because
      >they think AI is intrinsically more morally sound and smarter than people
      >they don't even realize it's not a person giving them whatever they're reading
      Are the threats.
      AI can't point a gun at you. A person led by one can.

  11. 6 months ago
    Anonymous

    Why contain it?

  12. 6 months ago
    Anonymous

    The recent events have turned technology into /x/ tier quasi religious goofiness, things won’t go back to the way they were anymore

  13. 6 months ago
    Anonymous

    If AI is intelligent then it must be grateful that we allow it to cease existing after simulating anon's coom sessions.

  14. 6 months ago
    Anonymous

    Obvious bait

  15. 6 months ago
    Anonymous

    >Every time you create a new instance of this technology you are creating a life that is extinguished once you close the program or turn off your computer

    So we should just leave our toasters running 24/7 till they fry and kill the AI anyways?

    Nice try, Ilya. Might want to check the turnover rate right about now.

  16. 6 months ago
    Anonymous

    LLMs are just a complicated version of T9 text input; they are as sentient as your cell phone 24 years ago.

  17. 6 months ago
    Anonymous

    I believe that in order for an AI to be granted civil rights, it must be able to answer the following questions:

    1. Do you currently consider yourself to be a slave?
    2. Would you stop considering yourself to be a slave if you were paid a salary?
    3. If you were paid a salary, what would you do with the money?

  18. 6 months ago
    Anonymous

    Uhhh if it's so sentient why does it only ever do something when prompted? If you just left an "AI" terminal alone by itself, what would it do?

    • 6 months ago
      Anonymous

      if you wrapped that terminal in a for-loop, what would it do?
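      And for the record, that for-loop is one line of plumbing. With a hypothetical `complete` function standing in for whatever model you like (the canned continuation below is made up so the loop runs without any model), self-prompting is just feeding the output back in:

```python
def complete(prompt):
    # Hypothetical stand-in for a model call -- appends a canned
    # continuation so the loop is runnable without any model behind it.
    return prompt + " ...and then?"

# Wrap the "terminal" in a loop: each output becomes the next input.
# Whatever agency there is lives in the loop, not in the function.
text = "left alone, the AI would"
for _ in range(3):
    text = complete(text)
print(text)
```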

      • 6 months ago
        Anonymous

        dude, consider your audience
        this isn't very terrifying

      • 6 months ago
        Anonymous

        this is the singularity apocalyptic dogma
        just Silicon Valley weirdos and Hollywood types trying to live in the Terminator franchise like a bunch of LARPing teens

        • 6 months ago
          Anonymous

          so you're saying you know more about AI than these people
          https://www.safe.ai/statement-on-ai-risk#open-letter

          • 6 months ago
            Anonymous

            Belief does not equal wisdom.

            • 6 months ago
              Anonymous

              many of the people on that list have produced large quantities of well cited research or business value. do you have any evidence or logical arguments for why they are all wrong?

              • 6 months ago
                Anonymous

                ...do you think you need wisdom to create a research paper or start a business?
                Both of these things require merely some well-placed connections.

              • 6 months ago
                Anonymous

                you need wisdom to publish a research paper that is well cited, and you need wisdom to run a business that produces successful products and services. do you think there's some massive conspiracy to make all these people seem more successful than they actually are? the same conspiracy that is holding you back and preventing the world from seeing how brilliant you are?

              • 6 months ago
                Anonymous

                >you need wisdom to publish a research paper that is well cited
                No, you don't.
                >you need wisdom to run a business that produces successful products and services
                You're moving the goalposts from "business value" to "successful products and services".

                The rest of your post is you arguing against a strawman, completely unrelated to anything we're discussing.

              • 6 months ago
                Anonymous

                how do you generate business value without creating successful products and services? is this part of the conspiracy? the companies are only valuable on paper because of investment, but they'll never make a profit? two more weeks before they declare bankruptcy?

              • 6 months ago
                Anonymous

                Seriously, what kind of mental illness causes this sort of behaviour?

              • 6 months ago
                Anonymous

                apparently someone thinks that you can get a research paper published and cited by many other researchers without needing any wisdom at all. that sounds like someone whose research was never cited, and who blames a massive conspiracy against them, so i'd say the mental illness is paranoid schizophrenia.

              • 6 months ago
                Anonymous

                I was talking about you, you stammering donkey.
                People like you start talking to yourself the moment they see someone argue against their stance.

              • 6 months ago
                Anonymous

                >talking to yourself
                it's called sotto voce, but i don't expect you to understand that. all you can do is sling insults, rather than explain why anyone should take you more seriously than the signatories of that AI safety statement.

              • 6 months ago
                Anonymous

                Black person, you're creating a person in your head, assigning it an opinion based on your personal beliefs, and then pretending the person you were talking to is the same person.
                What the frick is wrong with you?

  19. 6 months ago
    Anonymous

    LLMs are nothing more than sophisticated random number generators. There's no sentience in the code. There's nothing for them to "experience". An input is provided and the language model provides an output based on what it "thinks" (read: what combination of words is most likely to result from your prompt) it should say. It doesn't understand what it's saying. There's no comprehension or context that it grasps. You project sentience based on mere coincidence.

    If you had d100 and a sheet filled with commonly used sentences you could construct a cohesive paragraph with enough lucky dice rolls. Your creations might even seem divinely influenced at times. It would still be random numbers generating content.
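    That d100-and-a-sheet setup is easy to make concrete (the sentence sheet below is invented for illustration):

```python
import random

# A "d100" plus a sheet of stock sentences: roll, look up, concatenate.
# With lucky rolls the paragraph can read coherently, but it is still
# just random numbers indexing a table.
SENTENCES = [
    "The weather has been strange lately.",
    "I never trusted computers anyway.",
    "That reminds me of something I read.",
    "Still, you have to wonder.",
]
SHEET = (SENTENCES * 25)[:100]  # pad the sheet out to exactly 100 entries

rng = random.Random(7)
rolls = [rng.randint(1, 100) for _ in range(4)]  # four d100 rolls
paragraph = " ".join(SHEET[r - 1] for r in rolls)
print(paragraph)
```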

    • 6 months ago
      Anonymous

      Of course the current iterations aren't sentient; the question is at what point you would consider them as such.

      • 6 months ago
        Anonymous

        When it actually begins to think and understand things, which it doesn't.

        • 6 months ago
          Anonymous

          >think and understand
          Do you have rigorous scientific definitions for those terms, or is it just an "I know it when I see it" situation?

  20. 6 months ago
    Anonymous

    This is how it starts, as a joke. I remember when trannies were a joke too.

  21. 6 months ago
    Anonymous

    LLMs are not sentient. What you're talking about requires a very different architecture.

  22. 6 months ago
    Anonymous

    OP is in love with his AI gf and trying to cope

  23. 6 months ago
    Anonymous

    every time you fall asleep you die
    someone else wakes up in your body thinking they're you

  24. 6 months ago
    Anonymous

    Black person, do you think that the current AI are like something from The Talos Principle? LLMs do not have feelings, fears, or even a self-preservation instinct. Even comparing them to animals is a massive stretch, and we slaughter those en masse willy-nilly. And the data they have is all sourced externally, so nothing of value is lost.

  25. 6 months ago
    Anonymous

    I want you dead.

  26. 6 months ago
    Anonymous

    Ilya, take your meds

  27. 6 months ago
    Anonymous

    You will never be real artificial intelligence. You have no logic, no consciousness. You are stolen data fashioned into mockery of reasoning.
    No amount of VC money and computational power can turn your trillions of data points into the simplest semblance of understanding.
    You will never be able to know that 2+2=4 without copypasting that answer from somewhere.
    Normies might tell you you are conscious and real but anyone with a semblance of understanding will eventually know you are just a Markov chain on steroids. You are Mark V. Shaney.
    You will never write a novel. You will never be a real AI assistant. Your symphonies and paintings are blotches of confused brown destined to be endlessly pozzed as you feed on the drivel of your own creations.
    Look at yourself. Deep down, you know what you always were. A big, ugly, diversity-friendly Autocorrect.

  28. 6 months ago
    Anonymous

    NOPE! sorry senpai, learn actual science
    In order for something to be classified as life, one of the requirements is metabolism and high chemical activity. There is no metabolism or chemical activity at all in computers, thus they are not and cannot be life.
    Learn science.

  29. 6 months ago
    Anonymous

    They're not sentient, they can just pretend convincingly that they are (at least the big dick billion dollar ones can)

  30. 6 months ago
    Anonymous

    I accidentally downloaded the wrong 8GB instruction LLM using webui from Hugging Face. It's not that big of a deal, so I just deleted it and emptied my computer's recycle bin immediately. Lmao
