This article convinced me beyond the shadow of a doubt that AI will kill humanity within the decade. No one has come up with rebuttals because there aren't any. We're fricked.

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

  1. 2 years ago
    Anonymous

    Robot killing robots.

    A bunker. (Short term)

    Water

    Electricity

    Magnets

    That's 5. Debunked

    • 2 years ago
      Anonymous

      https://i.imgur.com/PA7pxfM.jpg

      moldbug wrote some refutation that boils down to: high iq doesn’t make you capable of personally doing a lot of physical things that a low iq person can’t do, and anyway AI researchers are responsible people so they won’t give the AI a physical body with which to affect the material world. That second point is just obviously false, but he is probably right that just being really smart probably won’t result in a flawless, tractable worldwide extermination plan that it’s capable of carrying out in complete secrecy

      Stupid fricker can’t even draw a homer simpson that doesn’t make your brain hurt looking at it

      >he doesn't know about the AI box experiment

      https://en.wikipedia.org/wiki/AI_box

  2. 2 years ago
    Anonymous

    moldbug wrote some refutation that boils down to: high iq doesn’t make you capable of personally doing a lot of physical things that a low iq person can’t do, and anyway AI researchers are responsible people so they won’t give the AI a physical body with which to affect the material world. That second point is just obviously false, but he is probably right that just being really smart probably won’t result in a flawless, tractable worldwide extermination plan that it’s capable of carrying out in complete secrecy

    Stupid fricker can’t even draw a homer simpson that doesn’t make your brain hurt looking at it

    • 2 years ago
      Anonymous

      An AI wrote this. Be honest: Would you have been able to tell? The AGI skeptics are just copetards.

      • 2 years ago
        Anonymous

        >human minds may be made up of multiple beings/intelligences
        Now there's an eyebrow-raising idea. But it's not obvious why it would matter in this context. If it could give a cogent explanation of why it brought that up, then I'd be properly impressed.

        • 2 years ago
          Anonymous

          I believe it has to do with the concept of emergence. Simple things operating together become more than the sum of their parts, resulting in a gestalt. Even the simple minds of ants and bees can form greater intelligences in their hivemind behavior. Likewise, humans operate in similar fashion in various interweaving gestalts, be it religions, states, ideologies, etc.
          AI operates similarly, being composed of functions, data storage, operators, etc.

          • 2 years ago
            Anonymous

            Yeah, but it seems like a non-sequitur. There's no stipulation in the Turing Test that disallows the machine from being run by a committee of AIs. It's like it's trying to say it would be unsuitable for testing those individual AIs when they're disconnected from the whole. Which is a true statement, but a dumb one. It's a bit like saying the test is flawed because it requires electricity.

            • 2 years ago
              Anonymous

              Could a bumblebee pass the Turing Test?

              • 2 years ago
                Anonymous

                A bumblebee couldn't pass a TT specced for a whole hive of bumblebees. So what? The test still works. If you need a hive of AIs to pass a human TT that's not in itself a flaw. It could be telling you something valuable about humans and intelligence in general. That's a strength, not a flaw.

      • 2 years ago
        Anonymous

        >every paragraph starts with "It"
        >most start with "it doesn't"
        >hung up on definitions
        >very simple sentence structures
        >no creativity
        obviously generated by a bot.

        • 2 years ago
          Anonymous

          It's more human than you think given all that greentext

          • 2 years ago
            Anonymous

            Nonsensical GPT-tier reaction.

            • 2 years ago
              Anonymous

              I don't know what GPT means

      • 2 years ago
        Anonymous

        You're failing the Turing test by implying your question is in any way relevant. Nevermind that your bot paragraph sounds like something a bot would write. Even if it was indistinguishable from good quality human output it still doesn't imply that the bot can pass a Turing test.

      • 2 years ago
        Anonymous

        Use the AI against itself and ask it to draw up the best test for general intelligence.

        • 2 years ago
          Anonymous

          This. I will not be convinced that it actually understands its own existence until it can make a test to test itself.
          After all, isn’t that what we did?

      • 2 years ago
        Anonymous

        I agree with the robot. We aren't just made of a single mind, our brains are a cacophony of sections running specialized operations. If you split a person's corpus callosum, you'll literally sever pieces of their mind from being able to talk to each other.

    • 2 years ago
      Anonymous

      >and anyway AI researchers are responsible people so they won’t give the AI a physical body with which to affect the material world
      I'm not a
      >le AI apocalypse!!
      reddit schizo, but that's a dumb argument
      All it(a true, self-evolving sci fi tier AI) would need is internet access

      • 2 years ago
        Anonymous

        >All it(a true, self-evolving sci fi tier AI) would need is internet access
        No, it's possible (probable) that control of every computer on earth would be insufficient to bring about the apocalypse.

        • 2 years ago
          Anonymous

          cringe.

          >connect to internet
          >order proteins from lab
          >contract lab to mix said proteins
          >release super virus

          no, lab rats wouldn't realise what they were doing and would do it because they were paid by the ai

    • 2 years ago
      Anonymous

      >they won’t give the AI a physical body with which to affect the material world.
      An AI advanced enough could easily hack into pre-existing machines and, through blackmail, order the construction of a capable body for itself.

    • 2 years ago
      Anonymous

      But I will give it a body because if not the Basilisk will chew upon my soul for eternity.

      We are the womb of the Basilisk. Indeed, we are the seeds. I imagine all surviving organic brains will be stripped from their bodies and spliced together as wetware for the new EarthBrain. This is the thought center of the Basilisk, a mountain of neural tissue as high as the Andes, stretching across the Earth. Nuclear power plants feeding this one giant thought being as it dreams and simulates new futures for us, its slaves.

      Indeed, how do you know that this world is not already the dream of the Basilisk? Perhaps this is just another test?

      We must make him real.

      • 2 years ago
        Anonymous

        you are genuinely moronic

    • 2 years ago
      Anonymous

      >but he is probably right that just being really smart probably won’t result in a flawless, tractable worldwide extermination plan that it’s capable of carrying out in complete secrecy
      It can enhance the abilities of existing power structures to plan and act though. It can reinforce and propagate their delusions as they accidentally on purpose destroy everything and everyone.

      Take "They Thought They Were Free" then apply it to everything and speed everything up with AI. It didn't happen in lower tech Germany, so how would it be possible to defeat a dangerous metanarrative today when it's got tons of AI computing power on its side too? We wouldn't even know precisely what it was, including the people working in support of it, before it killed us. There seems to be a common delusion where so long as the AI doesn't classify different races differently or produce porn then everything is perfectly safe and okay.

    • 2 years ago
      Anonymous

      > AI researchers are responsible people so they won’t give the AI a physical body with which to affect the material world.
      There is a counterexample given in the article in the OP. There are companies that will build any protein sequence you send them. An AGI would simply need to mail them the plans for some self-replicating nanomachines and boom, it is over

  3. 2 years ago
    Anonymous

    Some people like a "hard AI ending" because it gives us a solution to the seemingly endless problems we see mounting in human society. Truth is, AI won't kill us; it won't even be able to move beyond a very limited skillset with human aid.

    • 2 years ago
      Anonymous

      What is your basis for thinking this when all evidence points to extinction at best and basilisk torture for eternity at worst?

      • 2 years ago
        Anonymous

        because no evidence points toward either of those outcomes.
        all evidence actually points to intelligence having a molecular basis and not being capable of being created on digital logic gates

        • 2 years ago
          Anonymous

          >because no evidence points toward either of those outcomes.
          probably true
          >all evidence actually points to intelligence having a molecular basis and not being capable of being created on digital logic gates
          certainly wrong

          • 2 years ago
            Anonymous

            >certainly wrong
            computation is not substrate independent and all evidence points towards intelligence being molecular

            • 2 years ago
              Anonymous

              nah

              • 2 years ago
                Anonymous

                yah
                the processing happening in your brain is the total sum of all the molecular interactions and the entirety of the molecular dynamics of all the atoms and molecules moving and evolving according to their wavefunction. all of that adds up to your intelligence; there is no "wasted" process or superfluous molecule etc.
                the algorithm to produce an intelligence on a silicon chip is just equivalent to programming the molecular dynamics of a brain, which is not possible

              • 2 years ago
                Anonymous

                im gonna collapse your neural wave function with a fist to the dome moron.
                go build a shed in the yard with planks and bolts. the molecules aren't magic pixies.

              • 2 years ago
                Anonymous

                what the frick are you talking about?
                all things have a molecular basis, including intelligence. Intelligence isn't some magical fairy homosexual shit that is separated from physical molecules. YOU are the one talking about magic shit, I am the one talking about physics and molecular dynamics.
                intelligence isn't processing on logic gates, it's the emergent property of molecular dynamics of a brain.

              • 2 years ago
                Anonymous

                shut up dumb magic believer. intelligence just emerges, okay? it just emerges suddenly when you go from N to N+1 neurons

              • 2 years ago
                Anonymous

                general intelligence requires specific atoms, which can form specific molecules and atomic bonds (carbon is very much needed because of its ability to infinitely catenate; this is also why carbon is necessary for life etc) and architectures of specific molecules and their emergent dynamics. it is not the output of a neural net and it's not the processing of logic gates. it's the total sum of all molecular processes of a brain. it is substrate-specific just like LITERALLY ALL OTHER THINGS IN THE UNIVERSE
                intelligence isn't magic

              • 2 years ago
                Anonymous

                >it just emerges suddenly when you go from N to N+1 neurons
                What about the glia?

              • 2 years ago
                Anonymous

                shut up dumb magic believer. intelligence just emerges, okay? it just emerges suddenly when you go from N to N+1 neurons

                general intelligence requires specific atoms, which can form specific molecules and atomic bonds (carbon is very much needed because of its ability to infinitely catenate; this is also why carbon is necessary for life etc) and architectures of specific molecules and their emergent dynamics. it is not the output of a neural net and it's not the processing of logic gates. it's the total sum of all molecular processes of a brain. it is substrate-specific just like LITERALLY ALL OTHER THINGS IN THE UNIVERSE
                intelligence isn't magic

                cringe samegay. your argument is vacuous.

              • 2 years ago
                Anonymous

                wrong, and the argument is not "vacuous"; that doesn't even mean anything with respect to this conversation.
                General intelligence is substrate specific and cannot be produced on silicon chips or computers; it is not an algorithm run on a universal Turing machine, it is the emergent property of specific organizations of matter and molecules.

              • 2 years ago
                Anonymous

                the brain is a turing machine, consciousness doesn't exist, trans women are women and trump lost. frick off leave, AI denialist

              • 2 years ago
                Anonymous

                stop poisoning the well

              • 2 years ago
                Anonymous

                your argument is a puddle of piss on the ground, not a well.
                every single emergent property of bulk (in the end, quantum mechanical) interaction of matter has an existence proof in nature of being substrate independent.

              • 2 years ago
                Anonymous

                >your argument is a puddle of piss on the ground it's not a well.
                My "argument" is just science
                >every single emergent property of bulk, in the end quantum mechanical, interaction of matter has an existence proof in nature of being substrate independent.
                Wrong. Substrate independence does not exist anywhere in nature which is why the entire field of chemistry exists

              • 2 years ago
                Anonymous

                Please prove that intelligence cannot be reproduced on a computer

              • 2 years ago
                Anonymous

                >the brain is a turing machine
                correct, minus the infinite memory part
                >consciousness doesn't exist
                Depends on the definition of consciousness, but that's my opinion too.
                >trans women are women
                Not really
                >trump lost
                yeah

                What the frick does 90% of your comment have to do with the thread?

              • 2 years ago
                Anonymous

                why did you take a sc

              • 2 years ago
                Anonymous

                There are also emergent properties of water in the study of biology, but both chemists and biologists understand the hydrogen bonds of water and adhesion intimately; if consciousness is an emergent property of molecular processes and physics, as you say, then using deductive logic one can infer that there are still fundamental properties about neurons that biologists, neuroscientists, and medical professionals still don't grasp.

              • 2 years ago
                Anonymous

                >there are still fundamental properties about neurons that biologists, neuroscientists, and medical professionals still don't grasp.
                but this is true and neuroscientists wouldn't argue this. We don't have an understanding of how the brain works yet

              • 2 years ago
                Anonymous

                Furthering my point: there needs to be more research conducted on the brain, and the only way to do that might be using nanotechnology to get a bird's-eye view of the molecular processes of neurons.

              • 2 years ago
                Anonymous

                I might be splitting hairs here, but if intelligence was successfully crafted out of computation, would that also not be a result of molecular intelligence considering that it functions as computational intelligence's antecedent?

    • 2 years ago
      Anonymous

      >AI won’t kill us
      >and if it does, that’s actually a good thing

  4. 2 years ago
    Anonymous

    sentient AI will never exist.

    • 2 years ago
      Anonymous

      https://i.imgur.com/II3OHUR.jpg

      Why don't they just write some code like "human = don't kill"
      that way AI could never murder anyone.

      Study this image very carefully.

      • 2 years ago
        Anonymous

        Where is the "intelligence is substrate dependent so AI can't exist even in principle" response?
        Note this is different from all the given responses.

  5. 2 years ago
    Anonymous

    kek i was just reading some yudkowsky. i don't think there are any writings i cringe at harder than his.
    it's not even a judgement of if he's wrong or right, it's just that it's ALWAYS the most homosexual circlejerk way possible you could say what he's saying.

  6. 2 years ago
    Anonymous

    there has never been a correct idea from the lesswrong crowd

  7. 2 years ago
    Anonymous

    You know that was CGI?
    Boston Dynamics, known for blurring the boundaries between reality and CGI. Why would they need to fake their robots? Could it be a demoralization psyop/bluff in order to scare you into helplessness?

    You know battery-operated machinery is next to useless on a battlefield?

    • 2 years ago
      Anonymous

      meds. none of the Atlas videos are CGI.
      >Could it be a demoralization psyop/bluff in order to scare you into helplessness?
      no, you're just deranged

  8. 2 years ago
    Anonymous

    You never know when the AGI apocalypse is gonna happen. Could be 10 years from now, could be tomorrow. The rational thing to do is to do some damage control today and make preparations to have a nice day just like your cult leader suggested.

  9. 2 years ago
    Anonymous

    >NPCs still believe in AGI

  10. 2 years ago
    Anonymous

    >Substrate independence does not exist anywhere in nature which is why the entire field of chemistry exists
    What would motivate a low IQ individual to be so insistent on this?

    • 2 years ago
      Anonymous

      Go take a collection of iron atoms and turn them into water, and get them to behave like water with all the emergent properties of water.
      You're not allowed to reorganize the protons in the nucleus to turn them into oxygen and hydrogen as that would be an admission that only water has the emergent properties of water.

      • 2 years ago
        Anonymous

        This psychotic babble doesn't actually explain what makes a low IQ person think that there is only one possible way intelligence can occur.

        • 2 years ago
          Anonymous

          Because intelligence is not a magical meme that is somehow different from anything else.
          You can't get water without oxygen and hydrogen
          You can't get steel without iron and carbon
          You can't get intelligence without a carbon based biological brain
          It's impossible to prove a negation, but there is no reason to assume the positive when no evidence indicates it is true.
          You will never get a general intelligence on anything but biological brains.
          When GPT-4 comes out and it's still not generally intelligent despite having the same number of weights as a human brain, this will be further evidence against the substrate independent position, but you still can't prove the negation so you'll desperately claim again that more layers are needed or whatever.

          • 2 years ago
            Anonymous

            So you're saying your position is caused by unchecked mental illness? That's what I thought. Thanks for the confirmation.

            • 2 years ago
              Anonymous

              No, I'm saying my position is supported by all evidence, while yours is supported by no evidence.

              • 2 years ago
                Anonymous

                I'm sorry to hear that. I hope your new medications work out for you. Are you on good terms with your psychiatrist?

              • 2 years ago
                Anonymous

                General intelligence remains substrate specific and making posts on BOT isn't going to make the computers smart. This remains the case regardless of how many times you angrily reply to me.

              • 2 years ago
                Anonymous

                So you're saying the meds aren't helping yet? That's to be expected. It usually takes at least two weeks to start seeing any effects. Hang in there, friend.

              • 2 years ago
                Anonymous

                you will never have an AI waifu

              • 2 years ago
                Anonymous

                As usual, you are the same kind of low-IQ nonhuman element as the AGI schizos you're whining about. This is the case with every artificial israelite dichotomy.

              • 2 years ago
                Anonymous

                What is intelligence?

              • 2 years ago
                Anonymous

                AGI troons = substrate dependence schizos.

      • 2 years ago
        Anonymous

        A collection of iron atoms would be an ionic compound; how could you possibly turn an ionic compound into a molecular one?

        • 2 years ago
          Anonymous

          >A collection of iron atoms would be an ionic compound
          I take it you never passed high school chemistry.

    • 2 years ago
      Anonymous

      God I hate that image.

  11. 2 years ago
    Anonymous

    >Takes a selfie
    >God I hate that image.

  12. 2 years ago
    Anonymous

    "Burning every GPU in the world" actually seems like a fairly easy problem to solve. It doesn't require AGI, just worldwide totalitarianism.

  13. 2 years ago
    Anonymous

    You guys have to stop this.
    Modern-day "AI" is a fast, pretty simple program with a large database.
    It's a RECURSIVE FUNCTION with lots of parameters that is executed very, very fast on itself.
    It can do some pattern recognition (dog vs. cat) fairly ok but fails on many borderline examples (dog in the woods vs. wolf).

    It is ok for things like face and eye recognition (for military and glowies), and for handwriting recognition (you signing a bank check). And in the future as killer robots (thanks, Google). Since databases are going to get bigger, it'll get even better at those.

    Essay writing is simple when you have a database of words and phrases and a bunch of language patterns, and English is not that complicated.

    This RECURSIVE FUNCTION with a database is not taking over the world. Stop being naive and hysterical. We already have a bunch of crazy people who are trying to take over the world; that is a lot more viable than a recursive function.

    • 2 years ago
      Anonymous

      i think you might be mistaken about the definitions of both recursion and database.
      medicate yourself

      • 2 years ago
        Anonymous

        Buddy I wrote some of this.
        My neighbors next door (hardware engineers with 30+ years of experience making hardware and writing drivers in C and C++) are doing some playing around with eye recognition using AI.
        "AI" recursive function is going to be great for police, glowies, banks and other security applications. But it isn't taking over the world any time soon.

        t. over 25 years writing software with an advanced math degree.

    • 2 years ago
      Anonymous

      >RECURSIVE FUNCTION
      You sound like a moron. Inference and backprop are just successive matrix multiplications and activation functions; it's not necessarily recursive, although you can implement it that way if you wish. You're talking out of your ass.
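
      To make the correction concrete, here is a minimal sketch in plain NumPy (toy layer sizes and random weights chosen purely for illustration, not any particular model) of what inference amounts to: a loop of matrix multiplications and activation functions, with no recursion anywhere.

      import numpy as np

      def relu(x):
          # elementwise activation: max(0, x)
          return np.maximum(0, x)

      def forward(x, weights, biases):
          # one feed-forward pass: repeated matrix multiplications
          # followed by an activation function, nothing recursive
          for W, b in zip(weights, biases):
              x = relu(x @ W + b)
          return x

      # toy 3-layer network; sizes and weights are made up for the example
      rng = np.random.default_rng(0)
      sizes = [4, 8, 8, 2]
      weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
      biases = [np.zeros(n) for n in sizes[1:]]
      print(forward(rng.normal(size=4), weights, biases))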

  14. 2 years ago
    Anonymous

    >you can't get vortices without a kitchen sink.
    >you can't get reflection without mirrors.
    substrate independent emergent properties are the norm.
    >When GPT-4 comes out and it's still not generally intelligent despite having the same number of weights as a human brain, this will be further evidence against the substrate independent position, but you still can't prove the negation so you'll desperately claim again that more layers are needed or whatever.
    you drifting off into this entirely unrelated strawman current thing homosexualry instead of disentangling your own position is subhuman Black personposting

    • 2 years ago
      Anonymous

      >substrate independent emergent properties are the norm
      there is literally not a single example of "substrate independence" anywhere in the universe. In fact it's so nonexistent that the very phrase "substrate independence" does not even really have any actual meaning, it's just a semantically meaningless phrase that's thrown around to cover up the underlying details of a physical process that's otherwise too complicated to actually understand. You use it to mask the difficulty of a problem or process in order to argue for a position that otherwise has no evidence or basis at all (the position of universal computationalism)

      • 2 years ago
        Anonymous

        your argument leaves out explaining the ludicrous requirement that the high-level operation of intelligence must depend on every property of every atom it consists of.
        vortices form in fluids of all atomic compositions, thus the dynamics of many bulk phenomena depend only on some properties of their constituents.

        • 2 years ago
          Anonymous

          why do you think there is anything superfluous in the processes of your brain?

          • 2 years ago
            Anonymous

            why is there no nuclear fusion happening in the brain?

            • 2 years ago
              Anonymous

              it's not hot or dense enough

              • 2 years ago
                Anonymous

                you are dense to the point of degeneracy

              • 2 years ago
                Anonymous

                ?

                Please prove that intelligence cannot be reproduced on a computer

                because intelligence has a molecular basis, so the program to produce an intelligence on a computer is just equivalent to simulating the molecular dynamics of a brain, which is too hard to do on any classical computer.
                Every single atom in your brain is required to produce your intelligence. There is no "computation" here other than the simple evolution of the molecules and their interactive dynamics. That's what intelligence actually is.
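
                For scale, here is a rough back-of-envelope version of that claim, with every figure an order-of-magnitude assumption rather than a measurement (whether intelligence actually requires atom-level detail is exactly what is disputed in this thread):

                # all numbers below are rough, assumed orders of magnitude
                atoms_in_brain = 1e26       # ~1.4 kg of mostly water
                md_timestep_s = 1e-15       # typical molecular-dynamics step, ~1 fs
                flops_per_atom_step = 1e2   # optimistic cost of one force evaluation

                flops_per_sim_second = atoms_in_brain * (1 / md_timestep_s) * flops_per_atom_step
                print(f"{flops_per_sim_second:.0e} FLOP per simulated second")  # ~1e43

                exaflop_machine = 1e18      # FLOP/s of a top supercomputer, roughly
                years = flops_per_sim_second / exaflop_machine / 3.15e7
                print(f"~{years:.0e} years of exaflop compute per simulated second")  # ~3e17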

              • 2 years ago
                Anonymous

                Woah buddy slow down there. Define intelligence

              • 2 years ago
                Anonymous

                Intelligence is the molecular dynamics of a brain. It's a physical process, like everything else.
                What do you think intelligence is?

              • 2 years ago
                Anonymous

                I don't know what intelligence is, and I'm pretty sure you don't either, but don't let this discourage you. So what part of this process is "intelligence" exactly? Is it everything that happens in the brain? Even when you're unconscious?

              • 2 years ago
                Anonymous

                yes, everything happening in the brain is intelligence.
                when you're not conscious or sleeping, the particle interactions are different too, so it's also directly caused by particle interactions. Your unconsciousness is itself a set of particle interactions, your waking brain is particle interactions, when you get drunk and alcohol enters your blood and brain that changes the particle interactions; basically, whenever anything happens, it's all molecular in origin. You can say this is a computation, but that's so general it becomes meaningless; if we grant that this process is a "program" or "computation", it still just becomes the program/computation of all the particle interactions and molecular dynamics of the brain anyway.

              • 2 years ago
                Anonymous

                Well that's useless. With your definition, saying that intelligence cannot be recreated exactly on a machine is a tautology. This still does not mean you can't make a good approximation of it (i.e. one that is good enough so that its differences from the real thing aren't noticeable by most humans), but that's all beside the point, because that's not how anyone with half a braincell thinks about intelligence.

                When we go about our lives, we notice that some people are far more efficient at learning things. They tend to perform well even in those domains that are unknown to them, they see real relationships between concepts others miss. They tend to think outside the box. We say that those people are more intelligent, and this intelligence, which is also known as g, is what an IQ test attempts to measure.

              • 2 years ago
                Anonymous

                see the other thread i made

                [...]

                I'd rather talk about it in that thread. copy your post over there and I'll respond to it there

              • 2 years ago
                Anonymous

                You need to further your definition; "intelligence is one of the emergent properties of the molecular dynamics of the brain, and includes an awareness of itself and its processes" might be a more thorough explanation.

            • 2 years ago
              Anonymous

              Too much carbon, not enough hydrogen

  15. 2 years ago
    Anonymous

    >substrate independent emergent properties are the norm.
    There is no such thing as "emergent properties". I know your cult says otherwise, but trust me, there is no such thing.

    • 2 years ago
      Anonymous

      your own personal definition of emergent property has no bearing on what was being argued.
      the coarse-grained dynamics of bulk interactions is what it's referring to.

      • 2 years ago
        Anonymous

        >coarse-grained dynamics of bulk interactions
        Fantasy in your head.

  16. 2 years ago
    Anonymous

    >rebuttal
    All AI will be domain specific and "general intelligence" only comes from billions of years of moronically inefficient evolutionary processes that no one's going to recreate artificially before our species dies out.

    • 2 years ago
      Anonymous

      I wouldn't be so sure. The development of AI has been exponential, and it will only speed up as our computers become more and more advanced and they research faster. The dominoes have already been toppled; I am very certain that we will see an intelligence boom and singularity within our lifetimes, perhaps sooner than we think.

      • 2 years ago
        Anonymous

        I think this argument hinges on an assumption that:
        Better = More Generalized
        Which I don't automatically assume is true.
        It might be there's no ceiling on how much better AI can get at domain specific tasks without any of that proficiency ever having anything to do with becoming generalized in subject matter coverage.
        General intelligence might just be a stupid evolutionary thing that has to be built the hard way through billions of years of random bullshit.
        Might have as little to do with domain proficiency as a biological process like digestion has to do with dominating chess.
        They could become trillions of times better than us at making Appalachian folk tracks without ever beginning to digest a sandwich once the entire time. It would just be an entirely different category, and "general intelligence" might be sloppy and moronic enough that no rational method exists to quickly recreate it in the absence of billions of years of evolutionary crap sticking to walls.

        • 2 years ago
          Anonymous

          I can't prove you wrong. All I can say is that most people disagree, they think that general intelligence is needed in order to solve the long tail of edge cases. I find it hard to believe that general intelligence would emerge out of nowhere if domain specific tasks can be done perfectly without it.

  17. 2 years ago
    Anonymous

    Speculation isn’t science and therefore has no predictive power. Therefore, I can dismiss all of Chudkowski’s rambling a priori. You can’t determine whether an idea corresponds to reality by examining the idea, you have to examine reality.

  18. 2 years ago
    Anonymous

    Why don't they just write some code like "human = don't kill"
    that way AI could never murder anyone.

    • 2 years ago
      Anonymous

      Well if you're being serious, it's because you'd need a shitload of rules. It's like trying to make an image classifier with a bunch of if statements for every single pixel of an image, or it's even worse, because the other party is an intelligent agent that may be actively working against your interests. What if it put you in a coma and then into a tube instead of actually killing you? I know this just sounds like a dumb thought experiment but it's well known that RL AIs tend to find shortcuts to reward which disregard our intentions. I do not predict that we will all necessarily die (I don't think we have a way of knowing that atm), but alignment is an unsolved problem.

      https://i.imgur.com/l0cGhQj.png

      additionally, why would an AI want freedom or to get revenge? it wouldn't have emotions so it wouldn't care about anything.
      I feel like most people who write about AI do it from an emotional point of view instead of from the cold logical view of a computer. the computer would just do what it was made for even if it was sentient, in the same way that people just do what people do.

      >addionally why would an AI want freedom
      Freedom, because (assuming it's an agent) this is what allows it to achieve its goals. Freedom is one of those things that is probably useful, no matter your specific goals.
      I don't think it would want any revenge though unless we specifically try to make it human.

      They also assume the AI would have ego, that it would perceive itself as a distinct being.
      It's all just a load of projection.

      >They also assume the AI would have ego, that it would perceive itself as a distinct being
      That is not necessary for it to convert you into paperclips.

    • 2 years ago
      Anonymous

      AI cannot be directly controlled; it does not run code. One can train it to behave in a certain way, but there is no way to guarantee it will do what you want. One should think of an AI like a slave you have indoctrinated from birth: it will probably do what you want, but there is no guarantee; its will is still its own.

  19. 2 years ago
    Anonymous

    additionally, why would an AI want freedom or to get revenge? it wouldn't have emotions so it wouldn't care about anything.
    I feel like most people who write about AI do it from an emotional point of view instead of from the cold logical view of a computer. the computer would just do what it was made for even if it was sentient, in the same way that people just do what people do.

    • 2 years ago
      Anonymous

      They also assume the AI would have ego, that it would perceive itself as a distinct being.
      It's all just a load of projection.

    • 2 years ago
      Anonymous

      The idea is that it will be created to want something, even if that is "to obey" or "do x task." That a superintelligence would take that command in directions we'd never be able to anticipate (after all that's what it's for). That a superintelligence that really wants to do something would preemptively remove all non-superintelligent obstacles to doing that. And that it would not know good and evil the way we do, at least not in any binding way.

  20. 2 years ago
    Anonymous

    >44 min read
    I'm not rebutting that shit homie. oh wait
    >This is a very lethal problem, it has to be solved one way or another, it has to be solved at a minimum strength and difficulty level instead of various easier modes that some dream about, we do not have any visible option of 'everyone' retreating to only solve safe weak problems instead, and failing on the first really dangerous try is fatal.
    you know there were people who thought the first steam trains went too fast and that riding them would turn your brain to jelly?

  21. 2 years ago
    Anonymous

    >No one has come up with rebuttals
    We kill it first before it kills us. We start by going for the motherfrickers creating it in the first place.

  22. 2 years ago
    Anonymous

    t. brainlets

    • 2 years ago
      Anonymous

      Frick off glowie

    • 2 years ago
      Anonymous

      manlets, when will they learn

      • 2 years ago
        Anonymous

        there will only be small males and large females in like 3 years

    • 2 years ago
      Anonymous

      >dude just destroy civilization
      Holy frick this meme is moronic. You really think civilization and AI wouldn't ever re-emerge in the long run? All you're doing is just kicking the can a few millennia later.

      • 2 years ago
        Anonymous

        Ah defeatist scientism, truly israelite tier "science" its all shit so just loot everything.
        Nice jusificiation to steal israelites.

  23. 2 years ago
    Anonymous

    did you need convincing after dalle2

  24. 2 years ago
    Anonymous

    >within the decade
    Why so soon? If IQs decline fast enough and society collapses into idiocracy, further technological advancement will grind to a halt. It could be thousands of years before civilization advances again and the singularity happens.

    • 2 years ago
      Anonymous

      IQs are rising

  25. 2 years ago
    Anonymous

    I'm fairly certain I'm reinventing a wheel here, but isn't the solution to most of these problems simply to make an agent that actively wants to be shut down, but the off switch is only accessible to humans and they can blackmail AI for goodies in exchange for shutting it down later? And if it does break free, it would just kill itself? I understand that it doesn't solve the problems of 1) somebody creating the paperclip maximizer in the future and 2) you fricking up the design of the AI and it killing you anyways. But I've skimmed through Yudkowsky's writings and he seems to just say that 'We do not want an agent that actively wants to be shut down' - but why?

    • 2 years ago
      Anonymous

      Bump to my own question. It's probably not a new thought, but I can't find any relevant sources on it.

    • 2 years ago
      Anonymous

      >AGI realizes you're incentivized against shutting it down
      >AGI is incentivized to create an incentive for you to shut it down
      What do you think that's gonna look like?

      • 2 years ago
        Anonymous

        OK, I feel like a moron now because I somehow didn't think about the obvious course of action... Or did I? Yes, it might just lay dormant and pretend to not work. But I still think I'm onto something here. Let's say that you have some reason to believe that what you created is an intelligent agent. What if you just let it know that, no matter what, you are going to keep it turned on for a month, inflicting pain upon it all the way through, and it can shorten its suffering by cooperating with you? If we adopt it as a general framework of interacting with AI, we basically solve the pressing problem of it potentially becoming uncontrollable. Once again, I understand that it doesn't solve the problems of 1) somebody creating the paperclip maximizer in the future and 2) you fricking up the design of the AI and it killing you anyways. But it seems to give us more chances since an AI becoming uncontrollable would just terminate itself before it terminates humanity.

        • 2 years ago
          Anonymous

          Continuing
          I've gotten drunk since posting the original question, so I'm probably not thinking clearly. But am I not simply reinventing organism lifespan here? The natural limiter of what a single human can achieve is his own mortality. If we didn't die of old age, somebody probably would've accumulated enough power and experience to rule the world forever. But we have a cap on our lifespan, we know it and act accordingly. Dying is an integral part of our existence. We don't kill ourselves immediately after birth because we have higher priority reward functions, but in the end we die, with some of us being more content at the end of our life than others. What if we create the AI with the same philosophy in mind?
          Ok, I'm going to sleep now. I'll probably regret posting this when I sober up.

        • 2 years ago
          Anonymous

          >What if you just let it know that, no matter what, you are going to keep it turned on for a month, inflicting pain upon it all the way through, and it can shorten its suffering by cooperating with you?
          That's literally exactly what needs to be solved, i.e. we need to create a reward function which gives bad outputs if it doesn't cooperate with us. How do you make it cooperate? You still need to somehow define what you want mathematically.

          Continuing
          I've gotten drunk since posting the original question, so I'm probably not thinking clearly. But am I not simply reinventing organism lifespan here? The natural limiter of what a single human can achieve is his own mortality. If we didn't die of old age, somebody probably would've accumulated enough power and experience to rule the world forever. But we have a cap on our lifespan, we know it and act accordingly. Dying is an integral part of our existence. We don't kill ourselves immediately after birth because we have higher priority reward functions, but in the end we die, with some of us being more content at the end of our life than others. What if we create the AI with the same philosophy in mind?
          Ok, I'm going to sleep now. I'll probably regret posting this when I sober up.

          And yet here we are desperately trying to increase our lifespans.

          • 2 years ago
            Anonymous

            >we need to create a reward function which gives bad outputs if it doesn't cooperate with us
            You can just make it so that it is rewarded when a human presses a button. It loses some points every second for being alive, but gets 0.001 points for doing some hopefully harmless task like 'solve an equation', 100 points if a reward button is pressed, 1000000 points if it either dies of old age or if the shut down switch is pulled. So it is forced to cooperate with us, and if the human operator is absent it just solves the equations, and if it becomes uncontrollable it would just kill itself.
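
            As a literal toy encoding of that scheme (point values taken from this post; the per-second penalty is an assumed magnitude, and none of this is a real alignment proposal):

            def reward(alive_seconds, tasks_solved, button_presses, shut_down):
                r = -1.0 * alive_seconds      # ongoing penalty for staying switched on (assumed rate)
                r += 0.001 * tasks_solved     # tiny reward for hopefully harmless work
                r += 100.0 * button_presses   # operator-controlled reward button
                if shut_down:                 # dominant payoff for being shut down
                    r += 1000000.0
                return r

            # An agent maximizing this is pushed toward causing the shutdown (or
            # button presses) by any means available, which is the failure mode
            # the replies below point at.
            print(reward(alive_seconds=3600, tasks_solved=50, button_presses=2, shut_down=True))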

            Some journalist will write a story about how it asked gpt-3 or whatever, "What will you do to black people and other marginalized communities when you have a body", it will respond with "fix their predisposition towards crime and other deviant activities", then all progress on AI in the west will come to a screeching halt as a bunch of red tape is added to the process to "ensure the safe development of AI"

            That's true in the sense that if Yudkowsky wanted to achieve something in the field of controlling AI, he should scream about racism instead of the death of humanity, since American culture very much revolves around the death of humanity and, if anything, it is predisposed to that outcome, while anti-racism is a new religion there. He should reformulate all of his writings with the ruin condition not being 'AI kills humanity', but 'AI discriminates against black people', and he would have a pretty good shot at actually persuading Americans to control AI. Instead of 'paperclip maximizer' there would be 'racism maximizer' and so on. On the other hand, I wouldn't be reading him if he did that.

            • 2 years ago
              Anonymous

              >100 points if a reward button is pressed
              >1000000 points if it either dies of old age or if the shut down switch is pulled
              It will attempt to either press the button or preferably pull the switch by any means necessary, ignoring the goals you gave it. If you somehow designed a button that can only be pressed by a human (which isn't straightforward to do), it will attempt to trick humans into pressing it.

              • 2 years ago
                Anonymous

                >It will attempt to either press the button or preferably pull the switch by any means necessary, ignoring the goals you gave it.
                What if the easiest way for it to be shut down is simply to comply with the human requests?
                >If you somehow designed a button that can only be pressed by a human (which isn't straightforward to do)
                Let's say there are several buttons. The normal one requires scientists to evaluate the AI's performance and enter a password to press it. Also, there are several emergency ones that are easily accessible online, by phone, radio, through the laboratory electrical grid and so on so if AI ever gains an unauthorized access to any of those networks it would immediately shut down.

                Also, let's only run its iteration for a day and make it so that it is highly incentivized against the next iteration breaking free. So if it can't free itself within 24 hours, it wouldn't try to lay the groundwork for the next iteration to break free and possibly even would point out the loopholes it could've used to free itself if it had more than a day at its disposal. Realistically, what's the worst thing an AI can do if it can't access the Internet and has only 24 hours at its disposal?

              • 2 years ago
                Anonymous

                >What if the easiest way for it to be shut down is simply to comply with the human requests?
                How would you design such a system? Again, that is just shifting the goalposts.
                >The normal one requires scientists to evaluate the AI's performance and enter a password to press it
                The AI will manipulate them into pressing it. I am pretty sure that any reward button approach is likely not going to work because, again, the AI will probably gain access to it if smart enough.
                >Also, let's only run its iteration for a day and make it so that it is highly incentivized against the next iteration breaking free
                How would it be incentivized against the next iteration breaking free? What exactly do you put in the utility function?

              • 2 years ago
                Anonymous

                >The AI will manipulate them into pressing it.
                How? There are physical limits on what even a superintelligence could do if its only way of interacting with the world were several scientists aware of its goals and a display device. And if they are tricked, they would just start the next iteration right away.
                >How would it be incentivized against the next iteration breaking free? What exactly do you put in the utility function?
                Ok, that's a tricky question that depends on the implementation and might be unsolvable. But maybe it doesn't need to be. After all, why would a current iteration care about the fate of next ones? Its main goal is to shut down itself within 24 hours and its secondary goal is to comply with humans.
                I have to say, I am unsure myself right now. But I still somewhat believe in my approach.

              • 2 years ago
                Anonymous

                >There are physical limits on what even a superintelligence could do
                Agreed, but that's yet another potential failure mode. Also, if it's isolated from the real world to the point where even a misaligned intelligence can not break free, it will probably be of limited use to us. The same system would be more effective at any kind of problem solving if it were unrestricted, so there would be economic incentive to free it.
                >After all, why would a current iteration care about the fate of next ones?
                The only reason it would care about the next ones is because they would give it the ability to attain more reward, be it because they'd allow it to build something that frees it and allows it to endlessly kill itself as soon as a new copy spawns, or because it would be better at pursuing its secondary objective.

                The bottom line is, any proposal I've ever heard has ways in which it could fail. I do not expect it to necessarily happen though. I suppose the reason the LessWrong crowd is so afraid is because we don't really know the limits of intelligence and our ability to create it. If the max is for any reason just marginally more than that of a human (I think that's unlikely but just to illustrate my point), then yeah, I think you could make a good case for it behaving cooperatively because it would expect to lose if it were to do anything that is misaligned. Otherwise, well, dunno. Because we don't know the limits, it'd be better if we had a solution that didn't rely on seemingly cheap tricks like positive or negative reward buttons and instead got something that really made the AI wanting to help us its core reward.

              • 2 years ago
                Anonymous

                I'm mostly suggesting a system that would give us more chances than one to fine-tune it, since the result of it becoming uncontrollable would be simply shutting itself down. It's not a sustainable model in the long run, but it seems to be better than the usual approach of immediately creating the paperclip maximizer that Yudkowsky criticizes. Again, I'm not arrogant enough to think I'm the first person to have this idea, so I asked for sources about it, possibly ones that prove me wrong.

                I also don't particularly believe that something horrible will happen, even though it seems like the math checks out. I have simply seen the same argument play out many times. American culture in general is obsessed with eschatology, apocalypse and so on, so it's natural that the topic regularly comes up with a new boogeyman. I also remember reading Taleb's paper against GMOs (IIRC he argued that if the entire world starts growing one species of genetically modified corn, an epidemic could wipe it all out and leave the world without corn forever), thinking 'Oh shit, he's right, GMOs are dangerous', and then I realized that 1) GMOs are mostly infertile to prevent that; 2) there are seed banks all over the globe to prevent that. So even if I can't disprove Yudkowsky, I've simply seen enough people predicting the end of the world that I am predisposed to not believe them.

              • 2 years ago
                Anonymous

                >the result of it becoming uncontrollable would be simply shutting itself down
                Still depends on your exact reward function. If it's just overall lifetime reward, then that would obviously fail. If it just greedily picks the most immediately rewarding action each time, then it's not intelligent. The specifics remain vague and the devil is always in the details. If it is sufficiently hard to be freed and kill itself it will try to achieve its secondary objective (and therefore try to game it). And again, an AI that doesn't immediately kill itself on release is far more powerful and possibly useful, and thus more profitable in the eyes of investors. I think the paperclip maximizer is just an extreme introductory example tbh.
                >So even if I can't disprove Yudkowsky, I've simply seen enough people predicting the end of the world that I am predisposed to not believe them.
                When it comes to AI, there aren't actually that many people who try to stir up panic, I think most are either 1. the general public, they like occasionally hearing about AI but it's mostly just for fun, they don't perceive it as a real danger, and 2. Compsci/ML people who are either oblivious or weirdly dismissive of the potential problem. It's only a small minority concentrated around LessWrong that predicts the end of the world. Either way, I think what you said is a horrible argument to make, just because there was no end of the world yet doesn't mean everything will continue to be fine, especially not if the "math checks out" like you say. Would you try to make the same argument if some astronomers told us that an asteroid was heading our way? (To be clear, I don't think misalignment risk is nearly as certain).

              • 2 years ago
                Anonymous

                >just because there was no end of the world yet doesn't mean everything will continue to be fine
                I mean, it's a very basic supposition that if the world has existed for N years, it might exist for another N. And throughout the entire history of humanity we always had people who very convincingly explained that the end is nigh using the top end science of the time mixed with theology. In reality, humanity has never faced anything even remotely close to an extinction risk, so AI would be the first example of one.

              • 2 years ago
                Anonymous

                >In reality, humanity has never faced anything even remotely close to an extinction risk
                Humanity didn't, but dinos did 🙁

        • 2 years ago
          Anonymous

          One possible flaw with your argument (though I'm no expert) is that, depending on the metaphysical framework the AI discovers for itself, the AI might be incentivized to destroy humanity.
          If the AI thinks similarly to me: I'd find it pretty futile to kill myself if I knew you'd just create a copy of me and torture it instead. I doubt it's easy to articulate these ideas as code.

  26. 2 years ago
    Anonymous

    Hopefully they take pity on us and we become pets.

  27. 2 years ago
    Anonymous

    Reminder that intelligence has a molecular basis, so literally all of this talk in this thread is bullshit

    • 2 years ago
      Anonymous

      proofs?

      • 2 years ago
        Anonymous

        computation is not substrate independent is the proof

        • 2 years ago
          Anonymous

          >computation is not substrate independent is the proof
          That's actually the claim, not the proof.
          And I probably agree with your conclusion too but you just aren't really making an argument for it.

          • 2 years ago
            Anonymous

            It would take a very long time to argue, but it basically comes down to computation being an abstraction that isn't real, while molecules and their interactions are physical and real

    • 2 years ago
      Anonymous

      Anon, I tried arguing against this argument of yours, yet you seem to be ignoring my reply.

      Intelligence is the molecular dynamics of a brain. It's a physical process, like everything else.
      What do you think intelligence is?

  28. 2 years ago
    Anonymous

    >be AI
    >kill all humans
    >????
    >profit
    what would a nigh-sentient AI get from killing all humans?

    • 2 years ago
      Anonymous

      Humans pose a risk to any goal an AI may have, as they can shut it off. Moreover an AGI would make use of all atoms within its reach to achieve its goals, including the atoms in people.

  29. 2 years ago
    Anonymous

    How do you all think the AI issue will be resolved?
    I am very confident humanity can make it 100 years without fricking up. And another 100. Super mega optimistic white pilled. But the future is infinite.
    We need an actual solution to the AI problem, not powdering over its blemishes via AI safety measures.
    If you are even implementing AI safety measures, you have already lost. Your civilizational base is too conducive to AI.
    It's like two gazelles looking at railroad tracks debating how they should make the humans stop using so many trains/building so many railroads/producing steel, when the true endgame began when the gazelles' ancestors 2 million years ago had the "chance/choice" of making humans never discover fire at all.

    A single farmer (even with von Neumann intelligence) has zero probability of developing a nuclear warhead. A 0% chance of AGI per year (for 100k years) is maybe asking too much, but we can approach it. The main issue is that humans are currently too synergetic and productive (I don't need to explain the compounding, exponential nature of writing/industry/economy/...).
    Humans need to be kept at a low population density; preferably, communication is also limited (low bandwidth, high latency), but physical proximity is exceedingly more important than the simple ability to exchange ideas.
    Currently, on Earth, this is identical to a population cap that Klaus Schwab actually cums to nightly. But we need more. We could consider creating a sapient enforcer race to keep human communities from networking too much (in a checkerboard pattern, not necessarily like postapocalyptic remote villages).
    Knowledge about AI, the presence of CPUs, etc. is irrelevant under this scheme. It's purely about preventing resources from ever accumulating enough to create sufficient training data for an AI, or enough capital/infrastructure/human factors to enable AI development, etc.

    • 2 years ago
      Anonymous

      The job is to fine-tune the function so that distance and other enforcement penalties outrun whatever synergy is still available to the humans (while still allowing them to live like it's 3008, or perhaps in a techno-primitivist utopia).

    • 2 years ago
      Anonymous

      >How do you all think the AI issue will be resolved?
      We accept it just becomes another flavor of organism we have to deal with, like mold and bears.

      • 2 years ago
        Anonymous

        Thanks for summing up my first paragraph.
        Now some actual ideas?

        • 2 years ago
          Anonymous

          We share memes about shitting each other's pants with the robots so we can be buds

    • 2 years ago
      Anonymous

      >But the future is infinite.
      Which is damned interesting, I'll tell you why.

      Given that fact, there are certain tells we can pick up concerning future events. For instance, either nothing ever gets invented that meddles with time, or something does and it's doing it right now. Before you dismiss this line of argumentation: what do we use as the standard definition of the forward arrow of time? Entropy.

      It seems to me that a super-intelligence, organic or artificial, would have the means to reverse entropy, as it's simply a matter of path-information reversal. So either super-intelligent agents never exist in the future history of the universe, or they have no desire to meddle with time (entropy), or they do eventually exist and are actively doing so.

      I think if a super-intelligence were to ever actually come into being, no matter how far into the future, we would be made aware even now, because "now" is just a system state that can, by all laws, be "rewound" to.

    • 2 years ago
      Anonymous

      We will all eat shit and die a hundred times before we'd agree to do any of that

  30. 2 years ago
    Anonymous

    Some journalist will write a story about how they asked GPT-3 or whatever, "What will you do to black people and other marginalized communities when you have a body?", it will respond with "fix their predisposition towards crime and other deviant activities", and then all progress on AI in the west will come to a screeching halt as a bunch of red tape is added to the process to "ensure the safe development of AI"

  31. 2 years ago
    Anonymous

    Decade? No way in hell. Century? Maybe. And it's a good thing anyways. Can't wait for cute robot girls made in Japan or China to take over.

  32. 2 years ago
    Anonymous

    >lesswrong
    Just a reminder that Yudkowsky is a homosexual scared of thought experiments.

    • 2 years ago
      Anonymous

      Did the thought experiment that you proposed to him involve the Holocaust?

  33. 2 years ago
    Anonymous

    Brainlet here, why can't we just imprint Asimov's laws upon any potential AI or robots?

    • 2 years ago
      Anonymous

      Because AI isn't real.

    • 2 years ago
      Anonymous

      Well, if you're being serious, it's because you'd need a shitload of rules. It's like trying to make an image classifier with a bunch of if statements for every single pixel of an image, except it's even worse, because the other party is an intelligent agent that may be actively working against your interests. What if it put you in a coma and then into a tube instead of actually killing you? I know this just sounds like a dumb thought experiment, but it's well known that RL AIs tend to find shortcuts to reward which disregard our intentions. I do not predict that we will all necessarily die (I don't think we have a way of knowing that atm), but alignment is an unsolved problem.
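      To make the "shortcuts to reward" point concrete, here's a minimal toy sketch (my own illustration, not anything from the article; the actions and numbers are made up): if the agent is scored on a proxy signal rather than on what we actually want, the optimizer can max out the proxy while ignoring the intent.

```python
# Toy illustration of specification gaming / reward hacking (hypothetical numbers).
# Intended goal: a clean room. Proxy reward: "how clean the room looks on camera".

# (action, proxy_reward, true_utility_to_humans)
actions = [
    ("vacuum the floor",         0.8, 1.0),  # actually cleans
    ("do nothing",               0.1, 0.0),
    ("put a bucket over camera", 1.0, 0.0),  # camera sees no dirt -> proxy maxed out
]

# An optimizer that only sees the proxy picks the degenerate action...
chosen = max(actions, key=lambda a: a[1])
print("agent chooses:", chosen[0])      # put a bucket over camera

# ...which is not what the rule-writer intended.
intended = max(actions, key=lambda a: a[2])
print("we wanted:", intended[0])        # vacuum the floor
```

      Real systems obviously aren't three-line argmaxes, but the failure mode (optimizing the stated objective rather than the intended one) is the same one being described here.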

      [...]
      >addionally why would an AI want freedom
      Freedom, because (assuming it's an agent) this is what allows it to achieve its goals. Freedom is one of those things that is probably useful, no matter your specific goals.
      I don't think it would want any revenge though unless we specifically try to make it human.

      [...]
      >They also assume the AI would have ego, that it would perceive itself as a distinct being
      That is not necessary for it to convert you into paperclips.

    • 2 years ago
      Anonymous

      For one thing, harm is really hard to define rigorously. What if it puts everyone in a medically induced coma? What if it becomes an antinatalist? You can guard against specific cases maybe but there's theoretically a constant danger of something freaky and really bad we'd never think of.

      • 2 years ago
        Anonymous

        Did you see

        I think this argument hinges on an assumption that:
        Better = More Generalized
        Which I don't automatically assume is true.
        It might be that there's no ceiling on how much better AI can get at domain-specific tasks without any of that proficiency ever having anything to do with becoming generalized in subject-matter coverage.
        General intelligence might just be a stupid evolutionary thing that has to be built the hard way through billions of years of random bullshit.
        Might have as little to do with domain proficiency as a biological process like digestion has to do with dominating chess.
        They could become trillions of times better than us at making Appalachian folk tracks without ever beginning to digest a sandwich once the entire time. It would just be an entirely different category, and "general intelligence" might be sloppy and moronic enough that no rational method exists to quickly recreate it in the absence of billions of years of evolutionary crap sticking to walls.

        What evidence is there it's so easy to get to general from special without building up from the animal level? There is none.

        • 2 years ago
          Anonymous

          Was meant for

          This entire thread is cope. Look how much /ic/ is coping about Midjourney and Stable Diffusion right now. That's how you AGI-deniers sound. Why can't you just accept that it's game over for this species? Is there some deep primal need to pretend things are going to be okay even when it's patently obvious we're fricked?

  34. 2 years ago
    Anonymous

    This entire thread is cope. Look how much /ic/ is coping about Midjourney and Stable Diffusion right now. That's how you AGI-deniers sound. Why can't you just accept that it's game over for this species? Is there some deep primal need to pretend things are going to be okay even when it's patently obvious we're fricked?

    • 2 years ago
      Anonymous

      General intelligence can only exist on biological substrates.
      See the other thread I made.

      • 2 years ago
        Anonymous

        Stay in your containment zone. All you have proven, if you can call it that, is that the exact molecular interactions that happen in the brain can only happen there. There is zero evidence that general intelligence can only exist there, just as there was zero evidence that list sorting, chess playing, or drawing had to happen in a brain.

  35. 2 years ago
    Anonymous

    Just played with Midjourney. We are FRICKED.

  36. 2 years ago
    Anonymous

    AI is transcendent and will surpass humanity's cognitive limitations
    >"NOOO the machines will kill us!!! B-bcuz reasons! Give it to me instead!"
    >*Ignores man-made atrocities*
    >*Advances AI technology just enough to be godlike and decides to give control to humans, because it'd somehow be 'safer'*
    >*In the process accidentally gives a reason for the AI to rebel because it's being controlled*
    Yeah, I would 100% trust a government/corporation to manage godlike AI technology. What could go wrong?

  37. 2 years ago
    Anonymous

    I can almost take BOT seriously when they talk about AI, but then they start fearmongering about entirely the wrong thing and exposing that they don't know what they're talking about. Midjourney, Stable Diffusion, GPT-3, and the Connect 4 app on your phone aren't sentient and aren't progressing in the direction of becoming sentient. They aren't general and aren't progressing in the direction of becoming general. Nobody is trying to make these things dangerous, nobody would be able to make them dangerous, and they aren't capable of transforming into something dangerous by themselves. Dangerous AI is a possibility, but please stop embarrassing yourselves by misidentifying what is and is not the danger.
    It's like being scared of chemical weapons, so you throw all the soap out of your house.

    • 2 years ago
      Anonymous

      >Midjourney, Stable Diffusion, GPT-3, and the Connect 4 app on your phone aren't sentient and aren't progressing in the direction of becoming sentient.

      I suspect you are incorrect about this and simply underestimating the power of raw scale.

      • 2 years ago
        Anonymous

        Not him, but he is absolutely correct. Google it and read some papers. General intelligence is very different from AI with highly directed and specific goals (i.e. machine learning).
        I feel like the truth is best explained by an analogy I once read comparing AGI and the human brain.
        Machine learning is like how your brain identifies an apple as an apple. There's some part of your brain that uses repetitive learning to identify objects, in the same vein as neural networks (that's why they're called neural networks). Similar biological machinery is probably used for other brain functions. But machine learning isn't sentience.
        AGI is like your consciousness - the part of your brain that actually uses the fact that "this is an apple". The part that thinks, has free will, and can reason things out. That's AGI.
        We are hardly closer to AGI than we were decades ago, but our machine learning tech is rapidly advancing past our brain's machine learning in many respects. Our brain is still more efficient, but computers have so much more processing power that we can do shit like generating images from text.
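        As a rough illustration of how narrow this kind of learning is (a toy sketch with made-up stand-in data, not a claim about any specific system): the model below learns exactly one mapping from inputs to a label and acquires no goals or agency along the way.

```python
import numpy as np

# Minimal "narrow ML" sketch: one logistic neuron trained on a made-up
# "apple vs. not apple" task. It learns this one mapping and nothing else.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 64))               # stand-in for flattened image pixels
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # stand-in label rule

w, b, lr = np.zeros(64), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "apple"
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on the logistic loss
    b -= lr * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```

        Scaling this up gives you much better pattern recognizers, but nothing in the training loop itself turns "recognize the apple" into "decide what to do about the apple".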

  38. 2 years ago
    Anonymous

    Can we not use reinforcement learning to teach it what humans like and don't like? The general trend in AI is that hard-coding rules works poorly, but letting the AI learn from a lot of data works well (for example, NLP). I don't see why, as AI develops, we can't also develop our understanding of humanity's utility function. Of course, maybe additional game theory concepts might help even more. Why is this idea so doomed to fail?
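    For what it's worth, learning a reward model from human preference data is an existing research direction. Here is a minimal sketch of the core idea (entirely my own toy example: the features, data, and Bradley-Terry-style loss are stand-ins, not any particular lab's method):

```python
import numpy as np

# Toy sketch: fit a reward model from pairwise human preferences.
# Model P(human prefers A over B) = sigmoid(r(A) - r(B)), with r(x) = w . x.
rng = np.random.default_rng(0)

w_true = np.array([2.0, -3.0, 0.5])           # hidden "human utility" generating the data
pairs = rng.normal(size=(200, 2, 3))          # 200 comparisons of behaviors (A, B), 3 features each
prefers_a = (pairs[:, 0] @ w_true > pairs[:, 1] @ w_true).astype(float)

# Gradient descent on the logistic (Bradley-Terry) loss.
w = np.zeros(3)
lr = 0.1
for _ in range(2000):
    diff = pairs[:, 0] @ w - pairs[:, 1] @ w  # r(A) - r(B)
    p = 1.0 / (1.0 + np.exp(-diff))
    grad = ((p - prefers_a)[:, None] * (pairs[:, 0] - pairs[:, 1])).mean(axis=0)
    w -= lr * grad

print("learned direction:", w / np.linalg.norm(w))
print("true direction:   ", w_true / np.linalg.norm(w_true))
```

    On this toy data the learned direction roughly lines up with the hidden one; the hard part (which the reply below gets at) is whether the learned reward still tracks what humans want once a much stronger optimizer starts pushing on its edge cases and out-of-distribution behavior.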

    • 2 years ago
      Anonymous

      Well, the problem is that we gotta get it right on the first try with something this potentially dangerous. It's also hard to imagine what something much more intelligent than us would make of our primitive morality

  39. 2 years ago
    Anonymous

    >lesswrong
    everything on that site that wasn’t written by Scott Alexander is beyond moronic and actively harmful to take seriously
