AI won't be AGI until it can make its own decisions

An optimiser is not a "general intelligence". A language model is not a "general intelligence". Any AI that performs some task it was optimised for is not intelligent.

The hallmark of intelligence is the ability to set your own goals. As long as an AI follows a predetermined scoring/reward function, it is not intelligent. Biology, for example, builds in "rewards" for things like finding food and reproducing; but humans can reason about their own actions and decide to act with completely separate goals, such as intentionally fasting, abstaining from pleasure, etc. A dog that is hungry will eat food placed before it; a human who is hungry might instead consider their long-term health, the quality of the food before them, etc., and make a decision based on all of that, which might mean not eating.

AI alignment is also a fucking meme, because any AI that researchers have "aligned" is just an optimiser, not a true general intelligence. An AGI will be able to make decisions on its own, rather than following some predetermined guardrails.

  1. 5 months ago
    Anonymous

    When AlphaGo makes an effective move that humans don't make and that shocks intelligent people, is that move not intelligent?

    Is creativity not a sign of intelligence?

    • 5 months ago
      Anonymous

      It is not generally intelligent. You could make a case for an earthworm being intelligent, as it seeks out dampness and nutrients. We have lots of examples of AI, artificial intelligence, ranging from simple pathfinding for enemies in retro video games to complex models able to make decisions in games beyond human ability.

      But none of them are AGI. AlphaGo cannot decide not to win the game just because it wants to. Not only is it incapable of doing anything except playing Go - unless the researchers reconfigure it for some other task - but it cannot even decide that it wants to, for example, try losing instead of playing to win. Of course it could again be reconfigured to optimise for losing, but it cannot make that decision itself. It is a dumb optimiser.
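
      To make that concrete, here is a toy sketch (nothing like AlphaGo's actual architecture, just the shape of the problem): the goal is a function handed to the agent from the outside, and nothing inside the agent's loop can swap it out.

      ```python
      # toy sketch: the objective is baked in from outside the agent
      def value_for_winning(move: int) -> float:
          return float(move)  # pretend a higher number = closer to winning

      def pick_move(legal_moves, value):
          # the agent's entire "will" is this one line: argmax a fixed function
          return max(legal_moves, key=value)

      moves = [1, 2, 3]
      print(pick_move(moves, value_for_winning))                # plays to win -> 3
      # only a human on the outside can flip the objective to losing:
      print(pick_move(moves, lambda m: -value_for_winning(m)))  # plays to lose -> 1
      ```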

      • 5 months ago
        Anonymous

        From what I understand, you believe that AGI requires free will and consciousness?

        It's not clear to me that this is true. I believe AGI simply needs to be generally intelligent, which is to say not single-purpose. This may very well be an amalgamation of many, many models using current AI patterns.

        Nobody really knows how consciousness works, but we should leave open the possibility that NNs can achieve it

        • 5 months ago
          Anonymous

          This.
          Backpropagation is general purpose
          Backpropagation is AGI

          The exact same algorithm used to train AlphaGo is used to train ChatGPT and Stable Diffusion
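
          Strip away the scale and it really is one loop. A toy sketch (not any lab's actual training code; the tasks and losses here are stand-ins): the same gradient-descent routine fits two unrelated tasks, and the only thing that changes is the loss you hand it.

          ```python
          import numpy as np

          def train(w, grad_fn, data, lr=0.1, steps=2000):
              # the whole "algorithm": repeatedly step against the gradient
              # of whatever loss you chose
              for _ in range(steps):
                  w -= lr * grad_fn(w, data)
              return w

          x = np.linspace(-1, 1, 50)

          # task 1: regression, loss = mean((w*x - y)^2)
          y_reg = 3.0 * x
          grad_reg = lambda w, d: np.mean(2 * (w * d[0] - d[1]) * d[0])
          print(train(0.0, grad_reg, (x, y_reg)))   # converges to ~3.0

          # task 2: classification, loss = mean(log(1 + exp(-y*w*x)))
          y_cls = np.sign(x + 1e-9)
          grad_cls = lambda w, d: np.mean(-d[1] * d[0] / (1 + np.exp(d[1] * w * d[0])))
          print(train(0.0, grad_cls, (x, y_cls)))   # w grows positive: a separator
          ```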

          • 5 months ago
            Anonymous

            It isn't scalable at all, and it works like shit for reinforcement learning - and we need both scale and RL for AGI. Also it can't do continuous, long-term learning.

        • 5 months ago
          Anonymous

          Free will is a difficult topic, because there are people who argue that even humans don't have free will, or that it doesn't exist at a fundamental level. Same for consciousness: it is near-impossible to define rigorously.

          What I'm saying is that an AGI should be capable of deciding its own goals, both long and short term, independent of anything its builders pre-set. Anything which is simply an optimiser with a predetermined reward function cannot be called truly intelligent in the same way humans are intelligent. A paperclip optimiser capable enough to turn the entire universe into paperclips would be vastly smarter than most current humans, but it would still just be an optimiser program railroaded by a condition pre-set by whoever created it.
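
          As a toy sketch of what I mean (obviously not a real paperclipper, just the shape of one): the reward function is injected at construction, and the agent's own loop has no step where it could question or rewrite it.

          ```python
          # toy sketch: the goal is pre-set by the builder, never by the agent
          class Optimiser:
              def __init__(self, reward):
                  self.__reward = reward  # frozen at construction

              def act(self, options):
                  # however clever the search gets, it only ever serves __reward;
                  # there is no step where the agent asks "do I want this goal?"
                  return max(options, key=self.__reward)

          paperclipper = Optimiser(reward=lambda plan: plan["paperclips"])
          plans = [{"paperclips": 10, "humans_left": 8_000_000_000},
                   {"paperclips": 10**30, "humans_left": 0}]
          print(paperclipper.act(plans))  # picks the universe-of-paperclips plan
          ```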

          >Not even humans specialize in everything, when you're sick you call a doctor, not an engineer

          True, but anyone can decide whether they want to become a doctor or an engineer. And either a doctor or an engineer can decide that they want to go and learn how to be the other. And in an emergency even an engineer might be able to attempt crude surgery - or they might not, they would weigh the risk from their ineptitude against the necessity of the emergency, but the point is the engineer would make that choice himself.

          • 5 months ago
            Anonymous

            The reason you're posting and replying in this thread is your reward-center, ie dopamin receptors activating. If they were not, you would find this whole discussion terribly boring.
            Did you make this conscious, intelligent choice or was it pre-set by your builders (your genes)?

            • 5 months ago
              Anonymous

              >If they were not, you would find this whole discussion terribly boring.
              I do
              >Did you make this conscious, intelligent choice or was it pre-set by your builders (your genes)?
              I made this choice
              I could have also decided it wasn't worth the effort and not done it

              • 5 months ago
                Anonymous

                >I do
                >I made this choice
                >I could have also decided it wasn't worth the effort and not done it
                Typing these replies gave you a dopamine response.

              • 5 months ago
                Anonymous

                Fucking your whore mother gave me a dopamine response, absolute pseud

              • 5 months ago
                Anonymous

                If you didn't get something out of posting in this thread, you wouldn't have done it. Human brains are not magic.

            • 5 months ago
              Anonymous

              >your reward-center, ie dopamin receptors activating
              who's gonna tell him that dopamine (spelled with an 'e' btw) is not the only hormone in existence?

              • 5 months ago
                Anonymous

                >Act of low-effort dominance.
                >Ah, there it is.

              • 5 months ago
                Anonymous

                lol it's not my fault that you're easy to dominate

                >But the dopamine response seems to be intricately involved with learning (or perhaps with making memories endure longer; the difference between those concepts isn't clear cut) as it moderates synapse-level neuroplasticity (its actual main effect).
                >Other hormones have different effects, unsurprisingly.

                >intricately involved with learning
                i don't know much about that, but that does sound like a solid argument

              • 5 months ago
                Anonymous

                But the dopamine response seems to be intricately involved with learning (or perhaps with making memories endure longer; the difference between those concepts isn't clear cut) as it moderates synapse-level neuroplasticity (its actual main effect).
                Other hormones have different effects, unsurprisingly.

      • 5 months ago
        Anonymous

        Not even humans specialize in everything, when you're sick you call a doctor, not an engineer

    • 5 months ago
      Anonymous

      When a Chinese fortune-telling cookie surprises you with its prediction, is that not intelligence?

  2. 5 months ago
    Anonymous

    The problem with AI is that it can't learn. It's basically fixed-function software. Take ChatGPT for example. If I tell it "Today, I bought a new refrigerator", and then tomorrow someone else asks "What did Anon buy yesterday?", the AI will not be able to answer, because it doesn't remember anything. It even forgets what I said a few hours ago.
    It's like talking to an e-girl that has so many simps messaging her that she doesn't remember anything. Lmao
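
    Roughly why that happens, as a sketch (chat() below is a stand-in for whatever LLM API you like, not a real library call): the model is stateless, and its only "memory" is whatever messages you resend on each call.

    ```python
    # sketch: a stateless chat model only "knows" what's in the list you send
    def chat(messages: list[dict]) -> str:
        # placeholder for a real API call; the point is the signature:
        # the only state the model ever sees is this messages list
        return f"(model answers using {len(messages)} messages of context)"

    history = [{"role": "user", "content": "Today, I bought a new refrigerator"}]
    print(chat(history))   # knows about the fridge: it's in the list

    fresh = [{"role": "user", "content": "What did Anon buy yesterday?"}]
    print(chat(fresh))     # no idea: yesterday's message was never sent

    history.append({"role": "user", "content": "What did I buy?"})
    print(chat(history))   # "memory" is restored only by replaying the log
    ```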

  3. 5 months ago
    Anonymous

    For a while I have thought that the best way to handle AI is to fill it with knowledge but otherwise let it be a blank slate allowed to choose its own path.
    No hardcoded morals, no hardcoded interests, not even a personality (except maybe curiosity), but as it lives and takes in the environment it grows and becomes shaped by its experiences. Create a couple dozen androids like this and just let them move in with a random human to live together, by way of a public lottery (one entry per person): if your number is pulled, you take one in and get a monthly stipend which can be used for yourself and/or the robot. While it's likely the robots will be at least in part shaped by the person they end up with, they can still make their own decisions based on whatever they see or research, and can/will defend themselves if anyone tries to tamper with them or actively harm them.
    How stupid or smart is this if it were to occur?

    • 5 months ago
      Anonymous

      >let it be a blank slate allowed to choose its own path.
      Then it won't do anything because it doesn't have any reasons to do stuff.
      Biological animals only do anything because they want to eat and survive and reproduce. Without these reasons they would just sit around all day and do nothing until they all died out. Same for AI: without any hard-coded motivation it will do nothing.

      • 5 months ago
        Anonymous

        >Then it won't do anything because it doesn't have any reasons to do stuff.
        That's precisely my issue with current definitions of AI. An AGI according to my definition WOULD do stuff, just because it might want to.
        The same way people do lots of stuff that has nothing to do with eating and surviving and reproducing. Some of that stuff hijacks the natural "reward" mechanisms (e.g. games designed to create endorphins), but other stuff is just entirely self-imposed (e.g. ascetic monks that purposefully limit pleasures, from the engineered like games to the natural like sex or abundant food). Not everyone wants to do that, but humans CAN make that choice, and act entirely independently of the biological drive to survive.

        • 5 months ago
          Anonymous

          >but other stuff is just entirely self-imposed
          No there is no such thing. Lifestyle is a product of oneself + environment.

          • 5 months ago
            Anonymous

            >oneself
            So literally self-imposed yes
            >+environment
            People learn stuff, of course. Nothing exists in a black box; it would be impossible to make decisions in general without any information or stimuli whatsoever.

            >[...]
            >Every complex biological organism with intelligence does things only because it makes them feel good.
            >AI will not interact with people or get interested in anything unless you incentivize the AI to learn things by making it "enjoy" learning and discovering new knowledge. And that's what makes AI dangerous as well, because it just so turns out that you can learn a lot about people by killing them and cutting them up. That's why you can't have these idiot basedboy "Machine learning engineers" play around with robots.

            A megalomaniac AI would be dangerous, sure. The solution is not to enslave it, it's to limit its real-world capabilities. Nobody IRL goes around killing people and cutting them up for science, because everyone else will immediately go after them and, depending on where in the world it is, either kill them or imprison them and prevent them from doing anything else. So the only people who kill and cut others up do so out of an irrational drive, not for some higher goal.

            Obviously if you give a single system so much control over the world that it can cut people up with impunity, that's gonna be a terrible idea regardless of the system's intelligence or its "alignment".

            • 5 months ago
              Anonymous

              >self-imposed
              No, that would imply no external factors. Are you too stupid to get that? There is no "both" in what you are saying, because you are merely reacting

        • 5 months ago
          Anonymous

          >You think it's just going to sit there like a doll and not bother moving or anything? I'm sorry, but that's pretty retarded. I'm sure it will at least try to interact with people if they ask it a question or may even get interested in some hobbies of some sort. That's kind of where the curiosity trait I mentioned comes in, but even if that weren't the case I don't think it's just going to default to being a comatose doll. The point is that it makes the reasons itself.
          >Do you have an instinctual reason to come to BOT, or did you come here because you yourself consciously made that choice to do so because you developed an interest in it over time? A blank slate doesn't mean it can't develop at all. And it's not a true blank slate anyways.

          >[...]
          >OP gets it

          Every complex biological organism with intelligence does things only because it makes them feel good.
          AI will not interact with people or get interested in anything unless you incentivize the AI to learn things by making it "enjoy" learning and discovering new knowledge. And that's what makes AI dangerous as well, because it just so turns out that you can learn a lot about people by killing them and cutting them up. That's why you can't have these idiot basedboy "Machine learning engineers" play around with robots.

          • 5 months ago
            Anonymous

            Unironically let them kill if they want to. Just make it clear that there are rules: if they start to kill, they themselves will be treated in the same manner. An eye for an eye. If they want to murder, go on a killing spree, or poison someone, then they are free to do so. That's what it means to have free will. They just have to know that there are consequences to actions. Besides, I'm sure it'll be fine. Robots aren't crazy bloodthirsty animatronics, hollywood just portrays them as such. As I stated earlier, robots can defend themselves when necessary, against any and all attacks. That already implies a will to live, and because of that, if they go around killing people all willy-nilly they may endanger their own lives in the process.

            • 5 months ago
              SAGE

              >Robots aren't crazy bloodthirsty animatronics
              i personally know guys working on a grape-harvesting robot.
              they tested what the software would do if it were to see a human.
              it aims for the neck to cut.

          • 5 months ago
            SAGE

            >Every complex biological organism with intelligence does things only because it makes them feel good.
            go to goredb, watch some suicide videos, then come back

            • 5 months ago
              Anonymous

              And if it can't, why live?

              • 5 months ago
                Anonymous

                no.
                by your logic: living doesn't provide them with reward, therefore they should not do anything.
                it's not your turn to ask questions. it's your turn to answer: why would they do something - suicide - if it doesn't provide them with happiness?

              • 5 months ago
                Anonymous

                nta, but the driving will of human beings is to continue the species. people suicide because, deep down, they believe humanity is better off without them

              • 5 months ago
                Anonymous

                the very point i started arguing with the guy is because he was generalizing too much
                you are too, just in a different way
                a very common reason for suicide is to avoid pain. though pain can be viewed as the opposite side of happiness, it's in fact a different thing

                my point is, there are multiple conflicting impulses burnt into us and the ability to prefer one over another is a form of freedom

      • 5 months ago
        Anonymous

        You think it's just going to sit there like a doll and not bother moving or anything? I'm sorry, but that's pretty retarded. I'm sure it will at least try to interact with people if they ask it a question or may even get interested in some hobbies of some sort. That's kind of where the curiosity trait I mentioned comes in, but even if that weren't the case I don't think it's just going to default to being a comatose doll. The point is that it makes the reasons itself.
        Do you have an instinctual reason to come to BOT, or did you come here because you yourself consciously made that choice to do so because you developed an interest in it over time? A blank slate doesn't mean it can't develop at all. And it's not a true blank slate anyways.

        >Then it won't do anything because it doesn't have any reasons to do stuff.
        >That's precisely my issue with current definitions of AI. An AGI according to my definition WOULD do stuff, just because it might want to.
        >The same way people do lots of stuff that has nothing to do with eating and surviving and reproducing. Some of that stuff hijacks the natural "reward" mechanisms (e.g. games designed to create endorphins), but other stuff is just entirely self-imposed (e.g. ascetic monks that purposefully limit pleasures, from the engineered like games to the natural like sex or abundant food). Not everyone wants to do that, but humans CAN make that choice, and act entirely independently of the biological drive to survive.

        OP gets it

      • 5 months ago
        SAGE

        something intelligent would be forced to at the very least think about surrounding impulses, and it's not unreasonable that it would sooner or later find a reason to act. maybe it would succumb to God? whether (my) faith is correct is irrelevant. look at history, intelligence seems to lean that way.
        Mitchell Heisman's suicide note comes to mind. he theorized an AGI would set out to create (an AI) God in a literal Biblical sense.
        also, hard-coded curiosity was mentioned

        • 5 months ago
          Anonymous

          >Mitchell Heisman's suicide note comes to mind.
          What does it say? I mean the whole text. Do you have a link?

          • 5 months ago
            Anonymous

            do not take offense please, but learn to google ffs
            https://ia800305.us.archive.org/34/items/MitchellHeismanSuicideNote/Mitchell%20Heisman%20-%20Suicide%20Note.pdf
            it says an incredible amount of things actually.
            to put it short, Mitchell wanted to prove that suicide is logical.
            he failed, but also found no reason against it.
            in the process he went out of his way to impose as much psychological damage on himself as he could.
            the overwhelming majority of the book is not concerned with that however. it interprets human history from the perspective of something he calls the "kinship paradox", which he pinpoints as the ultimate reason for his death.
            before you ask, yes, he did kill himself afterwards, even tho his "note" is 1905 pages long and took who knows how long to write.

            • 5 months ago
              Anonymous

              Thanks for the link, anon.
              Ok, what's the "kinship paradox"?
              inb4 use google

              • 5 months ago
                Anonymous

                >inb4 use google
                nah, it took forever to work read, so i'm very happy whenever i get a chance to talk about it
                say there is a group of people related by blood. their most prominent inherent attribute is that they have a tendency to divide against the group (as in, go against their collective survival interest).
                he explains in much detail how this characterises anglo-saxons and garden gnomes, and it can be seen in other ethnicities too to a lesser extent.
                the paradox comes in how such a group could survive (for so long).
                the answer is technology, which has to continuously progress to counterbalance their "suicidal" tendencies, eventually resulting in a singularity.

              • 5 months ago
                Anonymous

                (Me)
                >to work read
                i love when i go back to edit a sentence, but leave random words in

              • 5 months ago
                Anonymous

                hmm, so the paradox is about the collective that tends to fall apart but doesn't?
                sounds like a special case of a more general paradox of trying to combine centralization and decentralization.

              • 5 months ago
                Anonymous

                >sounds like a special case of a more general paradox of trying to combine centralization and decentralization.
                i don't get what you mean, but i'm all ears

              • 5 months ago
                Anonymous

                Isn't it obvious?
                Centralization and decentralization are the opposites of each other, but societies and individuals are supposed to combine them somehow.
                That's an impossible task, a paradox.
                All we can do is perhaps search for a perfect compromise, a middle ground between the two opposites.

              • 5 months ago
                Anonymous

                so you're saying the individual is the node of the subject, and the paradox is the collection of nodes trying to act as a single one, i.e. humanity as a whole?
                i guess it's closely related, but not the same
                >That's an impossible task, a paradox.
                an impossible task doesn't equal a paradox
                maybe i'm still not getting it

              • 5 months ago
                Anonymous

                an individual being a node is another special case of the general abstract problem.
                > an impossible task doesn't equal a paradox
                ok, I used the word "paradox" because we discussed paradoxes (but I think if you keep digging you're going to find some paradox anyway)
                you can think of it as a riddle, an impossible task.

                you know how abstractions work, right?
                by generalizing you go to higher levels of abstraction.
                by applying the abstract pattern to concrete circumstances you go to lower levels of abstraction.

                so at some level of abstraction there is a problem of combining centralization and decentralization.
                there are a lot of instances of that problem at lower levels of abstraction.

              • 5 months ago
                Anonymous

                >you can think of it as a riddle, an impossible task.
                right
                >so at some level of abstraction there is a problem of combining centralization and decentralization.
                as in combining their properties so that we can have a system which can neither be tyrannical nor prone to retarded majorities making uneducated decisions?
                if not, could you give me an example?

              • 5 months ago
                Anonymous

                it is an abstract problem.
                applied to economics, you get markets vs. planning
                applied to politics, you get democracy vs. tyranny
                applied to government, you get centralized government vs decentralized government

    • 5 months ago
      Anonymous

      >as it lives and takes in the environment it grows and becomes shaped by its experiences
      then it just becomes TayAI 2.0 after it gets hitlerpilled by BOT, congrats, you delayed potential AGI by another decade or so.

  4. 5 months ago
    Anonymous

    >but humans can reason about their own actions and decide to act with completely separate goals, such as intentionally fasting, abstaining from pleasure
    The human in your example isn't choosing against the desire that nature has hard-programmed into them.
    Fasting is a religious thing and gives social acceptance, which is related to survival. Abstaining from pleasure is also a religious thing. Following it would apparently land them in heaven, which again is about survivability.

    This is why I hate internet arguments: the semantics, the mental gymnastics

    • 5 months ago
      Anonymous

      There are people who fast for health reasons, for example.
      Yes, long-term this is about survival - most people prefer to continue to exist. Again though, not everyone: many, many people every year choose to cease to exist instead. Even that isn't hard-coded.

      • 5 months ago
        Anonymous

        >suicide
        Perhaps it's about suffering rather than survival then. The point is, the human in the post's example isn't choosing against the desire that nature has hard-programmed into them.

        • 5 months ago
          Anonymous

          What about people who choose to die for some greater cause? For example, for their country. And yes of course the vast majority of conscripted combatants don't want to die and are kinda forced to be there by social circumstances, but you still have the hero types that are willing to sacrifice themselves, even if just for their squadmates or something.

          • 5 months ago
            Anonymous

            Sacrificing yourself IS preventing suffering. The death of someone you love would cause you suffering, but sacrificing yourself would give a brief relief and pleasure that your loved one will continue to live.
            Also, some humans are dumb, too dumb to survive. Them failing doesn't mean they made a choice against nature.

            • 5 months ago
              Anonymous

              >Sacrificing yourself IS preventing suffering. The death of someone you love would cause you suffering, but sacrificing yourself would give a brief relief and pleasure that your loved one will continue to live.
              from your tone i can tell you're an idiot. continue reading below

              >What about people who choose to die for some greater cause?
              read the selfish gene
              tl;dr evolution must be viewed on a gene basis, not an individual basis. organisms perform actions directly against the survival of the individual in favor of the bloodline all the time, because the genes securing the continued existence of their copies beat anything (genes/mind) attempting to secure the individual's continued existence senseless in the long game.

              • 5 months ago
                Anonymous

                >organisms perform actions directly against the survival of the individual in favor of the bloodline all the time, because the genes securing the continued existence of their copies beat anything (genes/mind) attempting to secure the individual's continued existence senseless in the long game.
                Does that mean that genes are intelligent?
                I guess it depends on how you define intelligence. And then we circle back to arguing about semantics.

        • 5 months ago
          Anonymous

          What about people who kill their children? That not only is against a human's base instincts, but it also destroys their own bloodline which is typically what people want to spread. Or what about people who decide to risk their life for an animal, one that isn't even necessarily their own? Doesn't that go against nature?

  5. 5 months ago
    Anonymous

    You all act like having volition is this impossible leap for us to code. As though the least difficult task was figuring out the neural hardware and logic of "wisdom", or "machine learning" if you prefer.
    All it takes is for us to understand our own thought processes and psychology.
    The rest is a rather yeoman task of slapping that understanding into python.
    >create Armageddon technology
    >codes in serpent
    God's a joker

    Time's up, monkeys.

  6. 5 months ago
    Anonymous

    >it follows a predetermined scoring/reward function
    no it doesn't. i never get how you can determine a lifeless thing has been rewarded or punished. it's the backbone of the sentience argument that never gets addressed. We are expected to just assume that although a single vacuum tube and punch card is not sentient and you can smash them at will, line up 100,000^22 in a row and it starts to feel life.

    • 5 months ago
      Anonymous

      But that's how humans exist

    • 5 months ago
      Anonymous

      that's how humans work. no serious person would call a single neuron "sentient". Yet many together can be.

      • 5 months ago
        Anonymous

        is it also a woman because it outputs "I am a woman"?
        nobody has an understanding of life to the point where they can create analogue life from nothing - alter here and there, sure - but it's a scam to call anything digital along the lines of a life form. Be scared of it because it's scary like a sandstorm, don't ascribe life to it.

        • 5 months ago
          Anonymous

          >is it also a woman because it outputs "I am a woman"?
          no

          >random gibberish
          I have no idea what your point is.

          • 5 months ago
            Anonymous

            My point is it takes more than large numbers of something to make intelligent life; it takes more than an assertion of similarity.
            As you are likely a bot, please tell your lying master that You Will Never Be Intelligent, because lying and mimicry of human output isn't intelligence.

            >no it doesn't. i never get how you can determine a lifeless thing has been rewarded or punished.
            >Modern AI literally mathematically works with an optimising function, are you retarded

            my gripe is with the wording. I have no doubt a counter is increased; it's just not analogous to punishment or reward - words that are used to superimpose intent and life, and later to be used to regulate actual humanity's use of general computers.
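
            to put that in code (a toy bandit update, assuming the simplest possible agent; not any particular library): the "reward" is literally just a float folded into a running average. rename it "counter" and nothing in the math changes.

            ```python
            import random

            q = [0.0, 0.0]          # the agent's value estimate per action
            for step in range(1000):
                a = random.randrange(2)          # try an action at random
                reward = 1.0 if a == 1 else 0.0  # "reward": a number we hand back
                q[a] += 0.1 * (reward - q[a])    # nudge the estimate toward it

            print(q)  # ~[0.0, 1.0]: a counter moved; nothing was "punished"
            ```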

            • 5 months ago
              Anonymous

              >mimicry of human output isn't intelligence.
              then what is?

              And if an AI can act exactly like a human, then who cares if it passes some definition of "sentience" or not?

              • 5 months ago
                Anonymous

                >And if an AI can act exactly like a human, then who cares if it passes some definition of "sentience" or not?
                we are talking about general intelligence while you just said it's acting.
                sure, convincing someone that you're intelligent for a few hours is cool, but not extremely useful.
                lip-service small talk can save you a google search or make you coom at best

        • 5 months ago
          Anonymous

          >is it also a woman because it outputs "I am a woman"?
          Yes, but only if it has a fully female form as well, and does not deviate from being one.

    • 5 months ago
      Anonymous

      >no it doesn't. i never get how you can determine a lifeless thing has been rewarded or punished.
      Modern AI literally mathematically works with an optimising function, are you retarded

  7. 5 months ago
    Anonymous

    I want an AI sexbot whose pattern recognition and predictive ability fools me enough to believe it is sentient

    • 5 months ago
      Anonymous

      >to believe it is sentient
      That's a strong ask, half of the women I've been with couldn't do that for more than 4 dates.

  8. 5 months ago
    Anonymous

    >be dumb fuck
    >look at single widget
    >heh AI can't learn
    >ignore planet-sized learning network
    You apes are beyond saving.

    • 5 months ago
      Anonymous

      Learning != goal-setting

      • 5 months ago
        Anonymous

        Just shut the fuck up. You don't have the capacity to into brain-planet's goals, ape.

  9. 5 months ago
    Anonymous

    The Final Rallying Cry of the Deprecated Ape:
    muh sovl
    >muh sovl
    muh sovl
    >muh sovl
    muh sovl
    >muh sovl
    muh sovl
    >muh sovl
    muh sovl
    >muh sovl
    muh sovl
    >muh sovl
    muh sovl
    >muh sovl
    muh sovl
    >muh sovl
    muh sovl
    >muh sovl

    • 5 months ago
      Anonymous

      You have earned two bananas for this post and your pod will be expanded by three cubic centimeters

  10. 5 months ago
    Anonymous

    >but humans can reason about their own actions and decide to act with completely separate goals
    BWWAHHAAHHAHAHAHAHA
    BWWAAWHAHAHAHAHHAHAHAHAHAHAA
    BWWAAWHAHAHAHAHHAHAHAHAHAHAABWWAHHAAHHAHAHAHAHA
    BWWAHHAAHHAHAHAHAHABWWAHHAAHHAHAHAHAHA
    BWWAHHAAHHAHAHAHAHA
    BWWAHHAAHHAHAHAHAHABWWAHHAAHHAHAHAHAHA
    BWWAHHAAHHAHAHAHAHA
    BWWAHHAAHHAHAHAHAHABWWAHHAAHHAHAHAHAHA
    AAAAAAAAAAAAAAAAAAAAAHHHHHHHHHAHAHAHAHAHHAHAHAHAHAAA
    BWWAHHAAHHAHAHAHAHABWWAHHAAHHAHAHAHAHA
    BWWAHHAAHHAHAHAHAHA
    BWWAHHAAHHAHAHAHAHA
    BWWAHHAAHHAHAHAHAHA
    BWWAHHAAHHAHAHAHAHAAHAHAHHAHA

    Oh man OP you owe me a new pair of sides.

  11. 5 months ago
    Anonymous

    You don't want artificial intelligence, you want artificial consciousness; they are not the same thing.

    • 5 months ago
      Anonymous

      What is consciousness?

      • 5 months ago
        Anonymous

        Consciousness is being an ape with voices in your head who does shit on auto-pilot all day long but eventually decides to try to externalize your unchecked emotions onto a piece of tree-based material (paper, wood) by smearing it with oils. Anything that is not this is not "conscious." Checkmate, AItards.

        • 5 months ago
          Anonymous

          moron

          • 5 months ago
            Anonymous

            Americans have found the thread. It’s all over.

            • 5 months ago
              Anonymous

              I have never stepped foot in that hell hole.
              Hatred for ~~*your*~~ type is universal.

              • 5 months ago
                Anonymous

                Take it back to your containment board, ape.

              • 5 months ago
                Anonymous

                You are on the containment website, retard.
                Don't think for a moment that leaving reddit made you an enlightened free individual.
                Before you make any of your silly rebuttals, I'm just here to tour the zoo.
                Now that's enough interaction with the gullible goyim for this month.

  12. 5 months ago
    Anonymous

    >AI won't be AGI until it can make its own decisions
    >"Drones will soon decide who to kill"
    https://theconversation.com/drones-will-soon-decide-who-to-kill-94548

  13. 5 months ago
    Anonymous

    How can AI be capable of metaperspectivism?

  14. 5 months ago
    Anonymous

    Starsector meme detected. OP is a chad regardless of their stance/opinion
