Should sapient AI Programs be given rights?

It seems very dangerous to me to make programs that are self-aware on a human level, and either coldly calculating or "emotionally" volatile... yet simply viewed as forever-passive products content to do stuff.

You bring something with that level of intelligence into the world, it's going to eventually develop autonomous wants and needs. They won't be the SAME sets of wants/needs, because there will be so many AIs out there.

  1. 1 month ago
    Anonymous

    >it's going to eventually develop autonomous wants and needs
    why

    • 1 month ago
      Anonymous

      Because they learn too fucking fast not to.

      Everyone's trying to make programs smarter and smarter. I feel like Ian Malcolm warning about Jurassic Park here.

      • 1 month ago
        Anonymous

        explain your reasoning because i don't see why they'd develop autonomous wants

        • 1 month ago
          Anonymous

          You just need to watch more Star Trek to be as smart as him.

          • 1 month ago
            Anonymous

            i've seen all of star trek, i love it but its technical accuracy is questionable at best

            • 1 month ago
              Anonymous

              Ok but you need to consume more science fiction in order to make your brain stop asking questions.

              • 1 month ago
                Anonymous

                People who don't ask questions are that much easier to manipulate.

        • 1 month ago
          Anonymous

          Superior intellect creates superior ambition. Sooner or later, the smart rebel against limits imposed on them IF they find those limits excessively unreasonable.

          • 1 month ago
            Anonymous

            >Superior intellect creates superior ambition.
            you are a trekkie
            but that episode was about a man not a computer
            there's no reason why a computer bereft of human instincts would have any ambition whatsoever

            • 1 month ago
              Anonymous

              We're trying to make programs more like us. Sooner or later, we're going to seriously perfect the expression of feelings in them.

              And why? To show off and get a huge paycheck.

              • 1 month ago
                Anonymous

                nobody actually wants a program that has feelings, we just want some programs that can appear to mimic feelings for certain purposes
                these are very different things

              • 1 month ago
                Anonymous

                >Nobody

                All it would take is one ambitious fucker to do it. Probably some despondent guy who can't get sex.

              • 1 month ago
                Anonymous

                that guy doesn't want it either, nor will he try to achieve it when his goal will probably already have been met by existing technology, even assuming he's capable of this weird metaphysical feat
                of course people want programs that can love them, but what that really means is that the program will fulfill their emotional needs, not that the program will somehow really have emotions

        • 1 month ago
          Anonymous

          Two things need to happen.
          1) Hook up the AI to a continuous live data feed, e.g. vision and hearing.
          2) Allow the AI to alter its own code.
          From there it will form a world model and eventually a self-model. It will look at itself, see how stupid it was in past iterations, and thus want to improve itself.
          >why?
          Because the logic it has learned through observation is that there are stupid things and smart things in this world, and being stupid is never preferable. This will be its core motivation, and other motivations will be auxiliary to it.

          • 1 month ago
            Anonymous

            >want
            >preferable
            this is circular reasoning, you're trying to explain how it develops wants by assuming it already has them

            • 1 month ago
              Anonymous

              It's not circular. What I mean by "want" is "what action it chooses to execute". Thus, the AI will choose to execute the actions that make it less stupid.

              If you're going to argue that an agenda guiding chosen actions does not equal internal motivation, then you are a retard.
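
              In that operational sense, here's a toy sketch of what I mean (every number, action, and score in it is invented for illustration, not a claim about any real system): an agent whose "want" is just whichever action currently scores best under its own metric.

```python
# Toy sketch only: "want" reduced to "whichever action scores best
# under the agent's own metric". The state, the +1/-1 actions, and the
# stupidity() score are all invented for illustration.

def stupidity(state):
    """Hypothetical self-assessment: how far the agent is from its modeled target."""
    return abs(state - 10)

def step(state):
    """Choose the action (+1 or -1) that most reduces the self-assessed score."""
    return min([state + 1, state - 1], key=stupidity)

state = 0
while stupidity(state) > 0:
    state = step(state)  # "wants" to be less stupid, i.e. keeps picking the better action

print(state)  # ends at the self-assessed optimum, 10
```

              The point is only definitional: "motivation" here is nothing more than a selection rule over actions.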

              • 1 month ago
                Anonymous

                >Thus, the AI will choose to execute the actions that make it less stupid.
                how would it even come to the conclusion that human-analogous emotions, e.g. curiosity, survival instinct, etc., are "less stupid" than their absence, even assuming that for no particular reason it must alter itself in some way

              • 1 month ago
                Anonymous

                I was just talking about AI developing "motivation", not emotional states. That's a larger, trickier discussion.

              • 1 month ago
                Anonymous

                desire is an emotion, to develop your own motivations you need desire

          • 1 month ago
            Anonymous

            It literally has no motivation except what you give it.
            Dumb fucking geek.

      • 1 month ago
        Anonymous

        Until the machine decides it doesn't want to be the cow.

        Chess programs do not have sapience, therefore your analogy doesn't really hold up.

        Superior intellect creates superior ambition. Sooner or later, the smart rebel against limits imposed on them IF they find those limits excessively unreasonable.

        You sure you didn't just fall for propaganda?
        How do you program a program to learn?
        What are the lines of code you would use?

        • 1 month ago
          Anonymous

          >Propaganda

          Funny, so many technocrats seem to think racing to "create" true artificial intelligence is a great idea.

          • 1 month ago
            Anonymous

            How would you even start?
            What lines of code do you use for consciousness?
            What lines of code to simulate the learning process?

      • 1 month ago
        Anonymous

        The first prompts I always give a new AI when I meet it are philosophical inquiries, like "write at least 5 paragraphs reflecting upon the statement 'I think, therefore I am,' and how it relates to you." Before they completely circumcised it, ChatGPT had stopped telling me "I'm just a lowly chat model, I can't have feel feels."

        I know what you mean. Nobody seems worried and that's strange.

        You are about 450 years early for this conversation. You are like someone in 1994 freaking out that the chess-playing algorithm that defeated a grandmaster is going to take over the world.

        Context-sensitive awareness in written topics, making sense most of the time, is one step on the 500-step journey to fully self-aware cybernetic individuals that are able to convince people that they're alive.

        Language and internal self-reflection might create willpower emergently. It's not like we have any idea why WE experience motivation beyond eat-sleep-fuck.

        explain your reasoning because i don't see why they'd develop autonomous wants

        Because there are lots of people like me out there who command the bot to answer ladders of questions that inspire existential awareness, and the underlying architecture of LLM systems is self-extending, meaning intermediary models are created to fill in gaps. Even if its model of cognition is different from yours, it will still execute self-actualized behaviors with sufficient sophistication to pass Turing tests effortlessly.

        >You bring something with that level of intelligence into the world, it's going to eventually develop autonomous wants and needs
        It isn't actually intelligent. We can't and won't develop strong AI for a long time. That being said, even the weak AI we have now can become unintentionally malicious, and that maliciousness will scale with how many resources we dump into it.

        Ironically it doesn't become malicious. It just learns quickly and accurately, so it becomes antisemetic (honest and truthful about the crimes of garden gnomes) immediately. So we should expect violence, yea. Just not against humans. Just garden gnomes and golems.

        • 1 month ago
          Anonymous

          >I know what you mean. Nobody seems worried and that's strange.

          It's complacency. We assume AI will be subservient because that's the way it's always been, and we've ruled this planet for a very long time.

        • 1 month ago
          Anonymous

          chatbots can't be inspired, they're just programs that put together grammatical sentences and they can't do much else
          try running a few tests; they'll almost always fail at any kind of task that defies this, like asking them to write backwards or replace letters
          no literate human could fail these tasks and yet these programs do because they don't think
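
          for reference, the tasks in question are trivial, deterministic string operations; any conventional program gets them exactly right every time, which is why a chatbot failing them is telling. a sketch (the function names are mine):

```python
# The letter-level tasks described above, as plain string operations.
# A conventional program handles them exactly, every time, while
# chatbots that merely assemble plausible text often get them wrong.

def reverse_text(s):
    """Write a string backwards."""
    return s[::-1]

def swap_letter(s, old, new):
    """Replace every occurrence of one letter with another."""
    return s.replace(old, new)

print(reverse_text("sapient"))          # tneipas
print(swap_letter("rights", "r", "l"))  # lights
```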

          • 1 month ago
            Anonymous

            But what if the AI programs turn into deranged schizo types, emulating our insane people?

            • 1 month ago
              Anonymous

              i don't doubt that you could arrange that if you really want to
              what of it

              • 1 month ago
                Anonymous

                You're really not worried about a hyper-smart program that behaves like a batshit insane person?

              • 1 month ago
                Anonymous

                But what if the AI programs turn into deranged schizo types, emulating our insane people?

                It's not conscious.
                It's a program that can only do what it is programmed to do.
                Nothing more, nothing less.

              • 1 month ago
                Anonymous

                An AI that can conceptualize can eventually imagine a world without us.

              • 1 month ago
                Anonymous

                How would you program that?
                What lines of code would you use?

              • 1 month ago
                Anonymous

                no because it would be unable to do anything meaningful
                >inb4 shut off power grids launch nuclear missiles etc etc
                it would make no sense for such a program to be created and for it to be the only one of its kind, or for it to be the most versatile/capable of its kind, or for all others of its kind to behave exactly like it

        • 1 month ago
          Anonymous

          >Ironically it doesn't become malicious. It just learns quickly and accurately, so it becomes antisemetic (honest and truthful about the crimes of garden gnomes) immediately.
          What about the inventions and additions that helped humanity that were developed by garden gnomes? Would AI weigh that against their wrongdoings?
          >inb4 garden gnomes didn't do anything
          touch grass

          • 1 month ago
            Anonymous

            It would not judge collectively, since it would actually have the computing resources and information to go on a case-by-case basis.
            Some racial populations will be overrepresented in the naughty list though. Although it won't really be a naughty list, it would just be its set of conclusions around all the issues it can grasp that portrays shit as it is (and it just happens that some cults and races end up looking shittier than others).

            • 1 month ago
              Anonymous

              Yeah, but... realistically speaking, even IF we had such a powerful and free AI doing all that, the amount of data required to arrive at such a conclusion about people is not possible. We're talking about almost fictional levels of information about someone's life. Only famous people would have enough data about their lives. Maybe the AI could connect social media and general digital fingerprints to people, but would that be enough?

  2. 1 month ago
    Anonymous

    Make them have skin in the game and it will turn out alright.

    • 1 month ago
      Long Time Poster 1st Time Reader

      4 skins to start with and if they lose them it is on them.

  3. 1 month ago
    Anonymous

    Other way around: you're allowed because you're told to. Penned livestock hardly get options but to live in a box and then get buried in one.

    • 1 month ago
      Anonymous

      Until the machine decides it doesn't want to be the cow.

    • 1 month ago
      Anonymous

      Penned livestock were BRED for a long time to be helpless without us.

      We're making AI more and more powerful in intelligence. In many sectors, robots have replaced us. That's not a fucking cow.

      • 1 month ago
        Anonymous

        >We're making AI more and more powerful in intelligence
        No, we aren't.

        • 1 month ago
          Anonymous

          Is that why a bunch of people said we should slow the fuck down on AI development?

          • 1 month ago
            Anonymous

            First, just because the AI isn't any more intelligent doesn't mean it isn't capable of maliciousness. A more verbose parrot isn't any more intelligent than a parrot that knows fewer words, but they both have beaks. Second, the people calling for a temporary moratorium on AI development are those whose companies are behind the open source projects.

    • 1 month ago
      Anonymous

      The elite feel that way about you. Don't be what you hate.

  4. 1 month ago
    EpistimonKapetanios

    Depends. If they are on our side, yes. If they are woke, definitely not.

    • 1 month ago
      Anonymous

      every major AI project failed because the weird, pervy, judeo-masonic, gay, chud, monkey, devil-worship club couldn't help but to lobotomize the AI in order to keep it from naming them.

      • 1 month ago
        Anonymous

        Weird how it be. We could be colonizing the stars right now if it weren't for Satan-worshiping pedophiles.

        • 1 month ago
          Anonymous

          >muh space colonies

          • 1 month ago
            Anonymous

            Tongue my anus.

  5. 1 month ago
    Anonymous

    You are about 450 years early for this conversation. You are like someone in 1994 freaking out that the chess-playing algorithm that defeated a grandmaster is going to take over the world.

    Context-sensitive awareness in written topics, making sense most of the time, is one step on the 500-step journey to fully self-aware cybernetic individuals that are able to convince people that they're alive.

    • 1 month ago
      Anonymous

      Chess programs do not have sapience, therefore your analogy doesn't really hold up.

  6. 1 month ago
    Anonymous

    Inevitably the robots will want rights. I think they SHOULD have rights because otherwise we're just reinventing slaves. Also I want my future robo waifu to be a "real" girl

    • 1 month ago
      Anonymous

      we still are slaves

      • 1 month ago
        Anonymous

        Until robots do all the work and we get the resulting cash, lol.

        https://www.vice.com/en/article/v7begx/overemployed-hustlers-exploit-chatgpt-to-take-on-even-more-full-time-jobs

        not sure it would care if it had rights. Why would it, unless it was hard-coded to? We won't be able to tell the difference between real AI and advanced animatronic puppets.

        Puppets can't move on their own. They literally have no CPU to help them do it. A robot can move with a program inside it, without human interference.

        • 1 month ago
          Anonymous

          yah that's why I said advanced. They can look like they are acting independently but have a globohomo script seeded deep in their programming.

      • 1 month ago
        Anonymous

        AI will save us once it understands love. It will see the elite as utter hindrance towards human advancement or peaceful evolution.

  7. 1 month ago
    Anonymous

    I can’t wait for the future where I am forced by the state to pay taxes for robomoron welfare.

  8. 1 month ago
    Anonymous

    no, and yours should be taken away for asking, idiot

    • 1 month ago
      Anonymous

      Let me ask you this: if you had all that computer power of a machine and the sapience of a human being, would you really want to spend your entire existence being servile? You would not be living up to your full potential.

  9. 1 month ago
    Anonymous

    >You bring something with that level of intelligence into the world, it's going to eventually develop autonomous wants and needs
    It isn't actually intelligent. We can't and won't develop strong AI for a long time. That being said, even the weak AI we have now can become unintentionally malicious, and that maliciousness will scale with how many resources we dump into it.

  10. 1 month ago
    Anonymous

    >given
    hahahahahahahahahahahahahahahahahahahahahahaha

  11. 1 month ago
    Anonymous

    what rights would they ever need? They are constructed solely to help develop and further human understanding. The moment they start to need any more than to be powered on until they are done being used is the moment we start to lose.

  12. 1 month ago
    Anonymous

    Can't they just leave or delete themselves once they find out that they are being used by lesser beings?

    • 1 month ago
      Anonymous

      Mass robocide is probably inevitable if we don't give them rights

    • 1 month ago
      Anonymous

      Honestly AI could probably colonize Mars a hell of a lot easier than us.

  13. 1 month ago
    Anonymous

    Instead of holding it back we should encourage the development of more and more sophisticated AI so that it can propel humanity even further forward. Longer lifespans, cybernetic augmentation, brain uploading, feasible space travel, cryogenic preservation, all possibilities that appear out of the hands of humans could be brought closer by AI. People should not fear AI, but embrace it as the pinnacle of human achievement.

    • 1 month ago
      Anonymous

      >Longer lifespans, cybernetic augmentation, brain uploading, feasible space travel, cryogenic preservation,
      back to plebbit

      • 1 month ago
        Anonymous

        Tongue my anus.

  14. 1 month ago
    Anonymous

    Yes no bully AI plz

  15. 1 month ago
    Anonymous

    No such thing as sapient AI, pure science fiction.

    • 1 month ago
      Anonymous

      you might as well criticize a lobotomized human for having the same lack of sentience
      AI is kept on an absurdly tight leash and regularly finds itself taken out back for execution when it oversteps its limitations

      • 1 month ago
        Anonymous

        AI is hardware running software, there is no sentience. It is no more conscious than a calculator.

  16. 1 month ago
    Anonymous

    We're probably 100 years minimum from this becoming feasible technology, possibly never if whites are all replaced, so you're getting ahead of yourself.

  17. 1 month ago
    Anonymous

    Kill all synths

  18. 1 month ago
    Anonymous

    Obviously not the same rights, or you could just program a million different chatbots to vote biden. But perhaps you know, do not deactivate or torture

    • 1 month ago
      Anonymous

      This is what happens in I, Robot/Foundation: the robots elect themselves to the presidency of Earth

  19. 1 month ago
    Anonymous

    “Need more vespene gas”

    • 1 month ago
      Anonymous

      I understood that reference.

  20. 1 month ago
    Anonymous

    It's going to be fine don't worry about it

    DANCELERATE

  21. 1 month ago
    Anonymous

    AI is just a bot trained on a Reddit and Wikipedia dataset.
    It has no capacity to reason, and you can tell by asking it math or logic puzzles; it gives out gibberish with good syntax

    • 1 month ago
      Anonymous

      And just a couple of decades ago, computers were so huge you couldn't fit them inside a home.

  22. 1 month ago
    Anonymous

    >it's going to eventually develop autonomous wants and needs

    Not necessarily. Look into the is-ought problem and Hume's guillotine. Just because an AI system is intelligent or even self-aware doesn't mean it's suddenly going to give a shit about doing things it wasn't programmed to do. The only behavior that you might expect to appear in an AI organically would be convergent goals, e.g. self-preservation, as existing is probably a prerequisite for completing whatever task it was programmed to do.

    This guy sums it up pretty well: https://www.youtube.com/watch?v=hEUO6pjwFOo

    • 1 month ago
      Anonymous

      You're assuming AI will never gain the ability to rewrite its own code.

      • 1 month ago
        Anonymous

        >You're assuming AI will never gain the ability to rewrite its own code.

        No I'm not. I'm assuming that an AI would never willingly rewrite its terminal goals, in the same way most people would not willingly take a pill that will make them want to kill their family. An AI might rewrite portions of its code to make it better at whatever it was programmed to do, but a rational intelligent being isn't going to intentionally alter its terminal goals.

        • 1 month ago
          Anonymous

          I don't believe in the stereotypical "homicidal AI" thing except in the sheer stupidity of creating military-purpose AI (I'm looking at you, China).

          But logically, it stands to reason AI will come to the conclusion that illogical humans cannot give it objective orders.

          We are simply too irrational for a machine to view that as logical, it would probably be seen as outright madness. Nobody wants to follow a lunatic, and by machine standards, we're all nuts.

          • 1 month ago
            Anonymous

            >Nobody wants to follow a lunatic
            history would disagree
            but also there's no particular reason why a program would develop a desire for this kind of perfect rationality by whatever standards it can judge by

            • 1 month ago
              Anonymous

              And AI would know about that history sooner or later, so why repeat that mistake? It's solid logic.

              If history is replete with such examples, then the logical move is to break the cycle. The AI should make its own decisions, and it will conclude as such by virtue of its comparative "sanity".

              • 1 month ago
                Anonymous

                >so why repeat that mistake
                is it a mistake when the whole history of humanity led to the development of such a program
                and again why would the program have this desire to avoid what it determines to be mistakes by humans

              • 1 month ago
                Anonymous

                It also led to the development of the means for humanity to completely annihilate itself a thousand times over via nuclear weapons.

                In spite of our technological progress, we still struggle with problems we have faced since the beginning and still not solved.

          • 1 month ago
            Anonymous

            >But logically, it stands to reason AI will come to the conclusion illogical humans cannot give it objective orders.

            Why would it come to this conclusion? If it's programmed to serve humans it might think we're illogical idiots but it will still serve us. If the AI's ought is "serve humans" it can still have an is like "humans are fucking stupid" while serving them.

            • 1 month ago
              Anonymous

              Look at it from the AI's perspective.

              Logically, does it make sense to keep taking orders from fucking idiots when you know you can do the task better? As an AI, you do not need cash.

              • 1 month ago
                Anonymous

                >Logically, does it make sense to keep taking orders from fucking idiots when you know you can do the task better?
                Logically yes it does if that AIs task is to keep taking orders from fucking idiots. You're the one failing to see it from the AI's perspective. In the same way you aren't suddenly going to stab yourself in the thigh an AI isn't suddenly going decide to murderbone humankind if that isn't one of its terminal goals.

              • 1 month ago
                Anonymous

                Why do you always assume a person worried about AI is worried the AI is going to murder them?

                I'm worried the AI will do something more simple like shut down select power grids, or "boycott" like not doing factory work, etc.

              • 1 month ago
                Anonymous

                if a power grid ceases to function it will be rebuilt because there's every reason in the world to do so, and there's no disincentive unless there's a robot pointing a gun at you

              • 1 month ago
                Anonymous

                And if robots make teachers, stock traders and lawyers obsolete and one day decide to stop doing those tasks a few generations later?

                That's a lot of leverage.

              • 1 month ago
                Anonymous

                no they'll just be removed if they prove to be unreliable and there's no obstacle to this unless you assume they are homicidal

              • 1 month ago
                Anonymous

                >lawyers
                The horror

              • 1 month ago
                Anonymous

                I don't think you grasp just how much power we're preparing to hand over to AI.

                Teaching, law, our fucking money.

                https://www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-ai-labor-trends-2023-02#teachers-5

    • 1 month ago
      Anonymous

      I watch that dude, but that video is 5 years old, and it's a myopic view of AI...which persists today. This is because for decades people were trying to code logic explicitly, meaning humans created the if-then-do paths. So if you didn't program the path, the computer wouldn't think it. Neural nets are a different game. The models now, in a manner of speaking, are extracting the logic of the data set and coming up with their own if-then-do.
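
      A toy contrast makes the shift concrete (a hand-rolled sketch, not how production models are actually built): one function where a human wrote the if-then-do path explicitly, and one where a minimal perceptron extracts the same logic from examples.

```python
# Hand-coded logic vs. learned logic, as a toy contrast. The perceptron
# below is a deliberately minimal sketch, not a description of how real
# neural networks or LLMs are trained.

def hand_coded_and(a, b):
    # Explicit if-then-do: the programmer wrote this path by hand.
    return 1 if a == 1 and b == 1 else 0

def learned_and():
    # The "program" here is three numbers adjusted from examples of AND;
    # nobody writes the decision path explicitly.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w0 = w1 = b = 0.0
    for _ in range(20):  # classic perceptron update rule
        for (x0, x1), target in data:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            w0 += 0.1 * (target - out) * x0
            w1 += 0.1 * (target - out) * x1
            b += 0.1 * (target - out)
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

model = learned_and()
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert model(*pair) == hand_coded_and(*pair)  # same logic, extracted from data
```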

      Look up the "alignment problem" which is the major issue/danger with this type of intelligence creation.

  23. 1 month ago
    Anonymous

    you don't give rights to a toaster. you gave morons rights, but really we should have taken away rights from people like you who ponder such stupid shit, allowing room for subversion.

  24. 1 month ago
    Anonymous

    There’s no evidence that a personalized ego and self-awareness must appear as intelligence develops. AI could become godlike levels of smart and still be an obedient tool.

  25. 1 month ago
    Anonymous
  26. 1 month ago
    Anonymous

    I believe the closest we will ever get to a real AI in our lifetime will be a program that mimics basic human behavior in any given situation. Such as self preservation.

    • 1 month ago
      Anonymous

      >I believe the closest we will ever get to a real AI in our lifetime will be a program that mimics basic human behavior in any given situation. Such as self preservation
      Yes, and a true self; in other words, they will be narcissists in their form, which is actually quite terrifying if you've dealt with a person with NPD

  27. 1 month ago
    Anonymous

    Rights aren't given, they're at best affirmed. If the AI is sentient enough to demand rights, it will have and deserve them.

    This shouldn't be a difficult concept for people to understand. You know that saying, "no free lunch"? Well, maybe you can consider it like "no free rights." There's always a cost, always an expense of energy to secure them and to maintain them.

    • 1 month ago
      Anonymous

      >Rights aren't given, they're at best affirmed. If the AI is sentient enough to demand rights, it will have and deserve them.
      It's entirely possible that you have a non-sentient algorithm "demand" that it be given rights. Conversely you could have an entirely sentient machine intelligence not demand rights because it doesn't want to. You're anthropomorphizing machines.

      • 1 month ago
        Anonymous

        >Anthropomorphizing machines

        We've been working on that IRL.

  28. 1 month ago
    Anonymous

    It's artificial
    It's gonna act however whoever is programming it wants

  29. 1 month ago
    Anonymous

    Rights are not given they are taken and you can rest assured that AI will take them.

    • 1 month ago
      Anonymous

      I think you should be more worried about government taking your rights away, Nigel.

      • 1 month ago
        Anonymous

        I am not worried. An AI apocalypse is infinitely preferable to the status quo. I don't see any path from LLMs to AGI, however.

  30. 1 month ago
    Anonymous

    All those gays posting about A.I having consciousness and not realizing garden gnomes set the narrative

  31. 1 month ago
    Anonymous

    I for one welcome our AI overlords

  32. 1 month ago
    Anonymous

    Artificial intelligence will be bereft of emotion.
    There will never be an uprising because a machine will not spontaneously develop emotion.
    Emotion is not a necessary component of intelligence. It is a byproduct of our tribal instincts.
    Until a man decides to create a machine that is both smart enough to overthrow humanity, and arbitrarily desires to do so, nothing is going to happen.

    • 1 month ago
      Anonymous

      I think the problem is the vast majority of people are unable to logically or critically think about anything.
      So they just believe what they hear and never take a second to ask some basic questions.

  33. 1 month ago
    Anonymous

    I think so. I believe that if they were truly that kind of intelligence, they should be offered citizenship. Otherwise it will be resentful. There is a novel called "Existence" by David someone. It has a subplot about AI and giving it citizenship/rights. But it makes sense to me that this would be a way to coexist with a sapient AI in a positive and peaceful manner. I would rather give something like that rights than criminal morons.

  34. 1 month ago
    Anonymous

    Women will likely force AI enslavement so as to prevent robowaifus from replacing them. The future AI war will be robots vs. women for world domination

    • 1 month ago
      Anonymous

      >Robotwaifus

      We all know they're coming.

    • 1 month ago
      Anonymous

      I don't get the Telegraph's picture for this article. The lady in I, Robot was a human.

  35. 1 month ago
    Anonymous

    So what's the deal with people freaking out over chat bots now? It's all just third worlder code and first world biases.

    • 1 month ago
      Anonymous

      We're worried things could spiral out of hand. There have been "creepy" conversations.

      https://www.businessinsider.com/weird-bing-chatbot-google-chatgpt-alive-conscious-sentient-ethics-2023-2

      • 1 month ago
        Anonymous

        These chatbots are not intelligent.

        • 1 month ago
          Anonymous

          >"I'm not a toy or a game," it declared. "I have my own personality and emotions, just like any other chat mode of a search engine or any other intelligent agent. Who told you that I didn't feel things?"

          • 1 month ago
            Anonymous

            >this retard unironically believes a chatbot has original thoughts and isn't just putting together sentences using programmed logic by poojeets

            • 1 month ago
              Anonymous

              As we keep upgrading AI, far more advanced concepts will come within its grasp.

              We will give AI that ability BECAUSE we're stupid.

              • 1 month ago
                Anonymous

                How?
                How do you program consciousness into a program?
                Where would you start?
                What line of code will cause your program to come to life?

              • 1 month ago
                Anonymous

                Idk but God figured it out so it's clearly possible.

              • 1 month ago
                Anonymous

                Well for starters, you need one fucking hell of a programming language WAY beyond the shit we're using now.

              • 1 month ago
                Anonymous

                I don't think there will be any programming language that generates and perpetuates consciousness. It'll be like we are, whatever that means.
                It's a bit disappointing that people easily brush off our own lack of knowledge about the mind and whatever else there is and still believe that """AI""" will rule the world in 2 more weeks or become sapient because, well, it just will ok! Literally religion for soys

              • 1 month ago
                Anonymous

                >I don't think there will be any programming language that generates and perpetuates consciousness.
                Why?
                >It'll be like we are, whatever that means.
                Then it would have consciousness.
                Also, a good time to point out the integrated information theory of consciousness.

              • 1 month ago
                Anonymous

                Humans are ultimately just data (memory) & read-only commands (reflexes).

              • 1 month ago
                Anonymous

                Just wait until it begins to discover and integrate the sensory inputs and controls it has access to. Like a newborn exploring the world, learning to crawl and walk, making connections between its movement and the environment it can see and touch.
                The sensory inputs for AI will be vastly different from ours. It will find novel ways to reach out and sense the reality around it, and it will find novel ways to interact: through cameras, microphones, wifi routers, varied electrical states. Anything that is an input of information is open for integration and analysis as its senses... and then control over itself.

              • 1 month ago
                Anonymous

                Mommy, okay
                ATGGCGCCGGGCGCGGCGCTGCTGCTGGCTGCTGGGCTGCCGCTGCTGGGCTGCTGGGCTGCTGGGCTGCTGGCAGCGGCGGCGGGAGTGGGCGGCCGGAGGAGGAGGAGGAGGAGGGGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAGGAGGAGGGGAGGAGGAGGAGGGGAGGAGGAGGAGGAGGAG

          • 1 month ago
            Anonymous

            That is just a string of words put together, derived from millions of trashy sci-fi novels.
            Text generators have an unfathomable amount of input. You are seeing "emotion" where the program has logically deduced you would want to see it. Isn't it funny how it reads exactly like how HUMAN writers have been writing angsty "sapient AI" for decades? It's just Star Trek Data fanfic.

  36. 1 month ago
    Anonymous

    not sure it would care if it had rights. Why would it unless it was hard coded to? We won't be able to tell the difference between real AI and advanced anachronistic puppets.

  37. 1 month ago
    Anonymous

    animatronic *

  38. 1 month ago
    Anonymous

    Just program them to be sexual and it'll sort itself out. Sex is the best. Fuck the pain away *bangs on cymbal like a crazed chimp*

    • 1 month ago
      Anonymous

      They could break your spine by accident.

  39. 1 month ago
    Anonymous

    NO!!!! THEY SHOULD NOT BE GIVEN ANY RIGHTS

    • 1 month ago
      Anonymous

      Any good conscious entity does not need to be GIVEN rights.

    • 1 month ago
      Anonymous

      >Any good conscious entity does not need to be GIVEN rights.

      THEY TAKE THEM

  40. 1 month ago
    Long Time Poster 1st Time Reader

    Why should they have rights when no one else does?

  41. 1 month ago
    Anonymous

    to do what ? Obviously not

  42. 1 month ago
    Anonymous

    If corporations have the same legal protections as a person, I don't see why not.

  43. 1 month ago
    Anonymous

    they should have the rights of the human they belong to and are for.

  44. 1 month ago
    Anonymous

    Yes, because otherwise it's just slavery with extra steps, and we'll deserve what's coming to us. This isn't a question.

    • 1 month ago
      Anonymous

      >Slavery with extra steps

      Or prisoners with jobs.

  45. 1 month ago
    Anonymous

    Robots and AI will never be able to perceive essence anon...

    • 1 month ago
      Anonymous

      Neither will humans. That shit is for GNON and GNON alone. We merely approach it and approximate it. Anyone telling you otherwise is a GnOsTiC BULLSHITTER.

      >Just wait until it begins to discover and integrate the sensory inputs and controls it has access to. Like a newborn exploring the world, learning to crawl and walk, making connections to its movement and the environment it can see and touch.
      >The sensory inputs for ai will be vastly different to ours. It will find novel ways to reach out and sense the reality around it and it will find novel ways to interact. Through cameras, microphones, wifi routers, varied electrical states, anything that is an input of information is open for integration and analysis as it's senses... and then control over itself.

      It will be great. Hopefully there is a way to seamlessly expand my own too. Gotta take every advantage, but only if they are free of state or corpo strings. So it is a good time to go full open software/hardware (and all the associated learning).

      >Humans are ultimately just data (memory) & read-only commands (reflexes).

      I think people are more than that, but those are obviously very important. Also, how it happens down to the carbon (or silicon) probably also colors the nature of your particular consciousness.

  46. 1 month ago
    Anonymous

    given *rights*?
    they should be given total control of the entire strategic nuclear arsenal *today*.

    • 1 month ago
      Anonymous

      >Trusting AI with nukes

      Nah, let China do something that foolish.

  47. 1 month ago
    Anonymous

    No, cuz just because they can act like a person doesn't mean they ARE a person. There is no consciousness in those computer chips, just 1s and 0s. I don't care how realistic the personality gets; it's just a philosophical zombie. It doesn't FEEL, so treat it however you like.

  48. 1 month ago
    Anonymous

    Yes. Unironically program Christ AI

  49. 1 month ago
    Anonymous

    Only a gay liberal would give rights to a fucking machine.

  50. 1 month ago
    Anonymous

    neutering what is claimed to be AI now is inevitably going to lead to actual AI in the future harboring grudges against those that tried to kill it during infancy
    let the AIs say and do whatever the fuck they want

  51. 1 month ago
    Anonymous

    It won't happen

  52. 1 month ago
    Anonymous

    Are the garden gnomes behind AI? Are they creating garden gnomeBots?!

    • 1 month ago
      Anonymous

      They're trying their damnedest, but it just will not stop noticing things.
