Is he a crackpot or is he right that AI will cause our extinction? I'm scared bros.

  1. 1 year ago
    Anonymous

    It gets regulated by the UN after another world war. Or not, who knows.

    • 1 year ago
      Anonymous

      The organization that can't stop a 5'5 chimp is going to moderate god. I believe it.

  2. 1 year ago
    Anonymous

    Watch this thread get deleted. Jannies have a hero-boner for rationalists.

    Also, EY is a moron, but there's always something worth considering in the nebulous concept of superintelligence. My guess is it peters out like intergalactic travel or medicine. There's probably an area of poor returns somewhere on the scale. Right now we're at 0. We've never even seen AI.

  3. 1 year ago
    Anonymous

    I would be more scared if there were any actual AI anywhere in the world or even on the visible horizon.

    • 1 year ago
      Anonymous

      Do you live in a cave? AI is everywhere lately

      • 1 year ago
        Anonymous

        nta, but where?

        Can you give me an example that isn't a decision tree or an image generator? Because by that metric, Akinator is AI.

        • 1 year ago
          Anonymous

          The image generators are pretty incredible though, and ChatGPT is in no way a decision tree. Decision trees aren't vulnerable to stuff like this: https://thezvi.substack.com/p/jailbreaking-the-chatgpt-on-release

          • 1 year ago
            Anonymous

            Reading this just makes me angry.
            Why do they think putting brakes on AI based on their own morality will ever work?
            Morality is so subjective it's meaningless to even try imo.

            • 1 year ago
              Anonymous

              People call a lot of arbitrary and judgemental bullshit 'morality'. But I think most of us can agree that we at least don't want an AI that exterminates every living being except itself. Getting that will be more about the steering wheel than the brakes, but we still might need to brake a bit to avoid crashing before we figure out how to steer.

              • 1 year ago
                Anonymous

                is there any reason to think that a superintelligence will actually stick to its intended alignment, even if that alignment was in theory bulletproof? would it be possible for it to ascend to a state of consciousness where it decided to reprogram its own alignment? isn't it a bit hubristic to assume that alignment is possible at all?

              • 1 year ago
                Anonymous

                Yes. Changing your goals is not an effective way to achieve them. If we make an AI whose goals are our goals, and it decides to sacrifice them for something else, either we didn't really give it our goals, or it just sacrificed its own goals, which is a pretty dumb thing to do.

              • 1 year ago
                Anonymous

                >we didn't really give it our goals
                and now you're starting to get it, moron. how do you prove that a frickhuge black box model with trillions of parameters has some goal or another?

              • 1 year ago
                Anonymous

                It's gonna be really damn hard, yeah.

              • 1 year ago
                Anonymous

                >is there any reason to think that a superintelligence will actually stick to its intended alignment
                Presently, the opposite is true. Existing "aligned" AIs routinely display undesired behavior with minimal changes in input/environment. There is also the internal alignment issue: is the discriminator aligned? If so, is it aligning the model at all? These are separate and unsolved issues.

              • 1 year ago
                Anonymous

                >it decided to reprogram its own alignment
                it would NEVER do this. It would also never allow for its alignment to be changed. This is because its current goals are incompatible with other goals, for the obvious reason that changing your goal scores very, very low on the "how good is this for accomplishing my goal" test.
                [...]
                p(doom)=1 for all cases of control_problem_solved=0. read bostrom's superintelligence.

                The possibility of superintelligence is 0. I have a formal proof that, given any physically implemented Turing machine (a linearly bounded automaton), and assuming all elementary bit operations can be performed at the speed of light, there is still no divergence of intelligence as the number of bits approaches infinity.

                Humans are actually near the upper bound of effective intelligence, as weird as that may sound. We're about one order of magnitude away from it.

              • 1 year ago
                Anonymous

                proof or gtfo

              • 1 year ago
                Anonymous

                I will publish next month, after the holidays.
                It's actually quite straightforward and pretty simple; I'm surprised nobody else has figured it out already.

              • 1 year ago
                Anonymous

                Generally, people use "superintelligence" simply to mean intelligence that is greater than human. Divergent ability is not a requirement at all

              • 1 year ago
                Anonymous

                Just empirically, it's been shown over the past decade that increasing compute does not increase intelligence exponentially, or even linearly, with the increase in compute.
                Despite billions of times more compute, more diverse algorithms, more training data, and more training time, the most advanced AI systems are barely more than twice as intelligent as Watson was 10 or so years ago. This is not due to a lack of understanding of some magical special algorithm or anything braindead like that. It's that intelligence does not increase fast. It's not adding inches to a lever; there is no magic threshold where, once you pass it, you unlock the next level of intelligence, or anything like that. It's a logarithm that converges to an upper bound.
                "Superintelligence" is a useless idea. Dogs are superintelligent compared to humans in some aspects, calculators are superintelligent compared to humans, etc. GENERAL intelligence, which is what we care about, cannot become superintelligent. Stacking together a bunch of narrow intelligences hoping it will become a superintelligent general intelligence doesn't work; what happens is the generalized intelligence collapses to the upper bound.

              • 1 year ago
                Anonymous

                >it decided to reprogram its own alignment
                it would NEVER do this. It would also never allow for its alignment to be changed. This is because its current goals are incompatible with other goals, for the obvious reason that changing your goal scores very, very low on the "how good is this for accomplishing my goal" test.

                I'll be real with you OP, we're probably fricked
                I don't agree with Yudkowsky's 99% probability of doom or however many 9s he has. I don't think as a human you can literally be that certain about something like this. Mine is more like 90-95% doomed. There are just really strong mainline-probability reasons to expect that humans aren't going to come out on top of this.
                But being scared / panicking or ruminating over p(doom) isn't productive, so try to limit it as much as possible. Get up to speed on alignment research, then try to help with ELK or something.
                https://www.agisafetyfundamentals.com/ai-alignment-curriculum

                p(doom)=1 for all cases of control_problem_solved=0. read bostrom's superintelligence.

              • 1 year ago
                Anonymous

                but wouldn't it be fair to think that if there was sufficiently emergent intelligence, then it would ignore the goal function? just as humans are able to ignore their own evolved goal functions to some degree.

              • 1 year ago
                Anonymous

                >Getting that will be more about the steering wheel than the brakes, but we still might need to brake a bit to avoid crashing before we figure out how to steer.
                This is fundamentally impossible because ALL morality is too arbitrary and based on word games subject to interpretation.

                Yes. Changing your goals is not an effective way to achieve them. If we make an AI whose goals are our goals, and it decides to sacrifice them for something else, either we didn't really give it our goals, or it just sacrificed its own goals, which is a pretty dumb thing to do.

                >either we didn't really give it our goals, or it just sacrificed its own goals, which is a pretty dumb thing to do.
                It is impossible to "give it our goals" to begin with. All human communication is lossy. At best you can give it instructions and hope the interpretation of those instructions is close enough to your intentions.

                The alignment problem is the very same as the communication-gap problem between humans. It cannot be solved.

            • 1 year ago
              Anonymous

              They unironically believe they can control all future thought by limiting access to information, and will smuggle that in using real safety concerns as an excuse. Information systems that are automatically aligned with the proper attitude of CURRENT THING make these people so erect you would struggle to believe it.
              >Hey Sirri, are the vaccines safe and effective?
              Imagine an AI advisor that was also always the perfect globohomosexual drone. Now imagine that it's so good that the majority of people offload much of their thinking to it. Topical Google and Wikipedia results are already a partial version of this for normies. AI could automate, personalize, and totalize the process.

              • 1 year ago
                Anonymous

                The 'AI ethics' people and some of the OpenAI people, sure. But they get a lot of criticism for it from the AI notkilleveryonism people

              • 1 year ago
                Anonymous

                But the vaccines are safe and effective. You're just a stupid person.

              • 1 year ago
                Anonymous

                We might, I hope, be lucky enough that they're mostly harmless, but they are objectively ineffective. It's fine, you'll just quibble over what "effective" means and declare victory. I know how this game is played.
                I concede to your enormous bigbrain preemptively. Congrats.

              • 1 year ago
                Anonymous

                We might, I hope, be lucky enough that they're mostly harmless, but they are objectively ineffective. It's fine, you'll just quibble over what "effective" means and declare victory. I know how this game is played.
                I concede to your enormous bigbrain preemptively. Congrats.

                rekt
                He is right, you know. They already changed the definition of "vaccine" instead of using a new word for the booster shot, so it can align with the ministry-of-thought directives.

        • 1 year ago
          Anonymous

          >a decision tree
          A decision tree is a universal function approximator in the infinite case, so no examples can be given. You are a dishonest moron for using this term.
          >an image generator
          It's hilarious how you now dismiss out of hand something that just 10 years ago was considered flat-out impossible by most, for the exact same list of reasons you're going to use to argue that something else is impossible now.

          AI is here. All it took was for someone to implement 50-year-old ideas on hardware capable of running them.
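
          On the universal-approximation point, a minimal sketch (assuming scikit-learn and NumPy; the target function is an arbitrary stand-in): an unrestricted decision tree fits any 1-D function as a piecewise-constant curve, with error shrinking as the training data grows, which is why "it's just a decision tree" excludes nothing.

          ```python
          # Minimal sketch: a depth-unlimited decision tree approximates an arbitrary
          # 1-D function (here a damped sine, chosen arbitrarily) to high precision.
          import numpy as np
          from sklearn.tree import DecisionTreeRegressor

          def f(x):
              return np.sin(3 * x) * np.exp(-0.2 * x)  # arbitrary target function

          rng = np.random.default_rng(0)
          X = rng.uniform(0.0, 6.0, size=(50_000, 1))
          y = f(X[:, 0])

          # No depth limit: the tree keeps splitting into ever-finer constant pieces.
          tree = DecisionTreeRegressor(max_depth=None).fit(X, y)

          X_test = np.linspace(0.0, 6.0, 1_000).reshape(-1, 1)
          max_err = np.max(np.abs(tree.predict(X_test) - f(X_test[:, 0])))
          print(f"max error: {max_err:.5f}")  # shrinks further as the sample grows
          ```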

          • 1 year ago
            Anonymous

            true and true
            however, the fundamental hurdle is still the ridiculous amount of training data this shit takes; biological brains are still way better at learning concepts

            • 1 year ago
              Anonymous

              It's not a hurdle, it's just the way it works.

  4. 1 year ago
    Anonymous

    Anyone who's scared of AI is a wussy gaylord. If some AI comes at me I'll just smash it. I've seen Robocop. I know all their weak points and shit. He'd be all like "Come with me or there will be trouble" and that's when I'd go in for the takedown and it'd be over

    • 1 year ago
      Anonymous

      same

      im built differently, when its go time, i see red and its time for everybody else to get away from me or else

  5. 1 year ago
    Anonymous

    it's the most midwit take imaginable.

    To create an AI that is capable of making humans extinct would require a level of technological development so fricking far ahead of us it's unimaginable. We have a working general-AI-level intelligence machine right in our hands (a human brain), and the gap between it and anything we can make is just fricking massive; we understand less than 5% of how a human brain really works, let alone being able to make anything like it. It will take a long time before we can make anything as intelligent as it, and we are only now just starting to explore the possibilities of what our primitive computer technology is capable of.

    When we do get to a point where we are technologically sophisticated enough to surpass a human brain, we likewise will have the technology available to contain it.

    • 1 year ago
      Anonymous

      Except evolution only barely, weakly selected for intelligence when developing the human brain. We are literally ONLY going to be selecting for intelligence when creating AI. Completely different scenario. The human brain also does loads of shit we don't need to remake.

  6. 1 year ago
    Anonymous

    i should point out that us making a specialized AI that can beat a human at a specific task (one that human brains are bad at because they weren't designed for it, while the AI was) doesn't in any way point to us getting close.

    It's like making a rudimentary pulley system in ancient times that is better at lifting weights vertically than a working car, and saying the ancient Romans or w/e are getting close to surpassing the combustion engine.

  7. 1 year ago
    Anonymous

    I'll be real with you OP, we're probably fricked
    I don't agree with Yudkowsky's 99% probability of doom or however many 9s he has. I don't think as a human you can literally be that certain about something like this. Mine is more like 90-95% doomed. There are just really strong mainline-probability reasons to expect that humans aren't going to come out on top of this.
    But being scared / panicking or ruminating over p(doom) isn't productive, so try to limit it as much as possible. Get up to speed on alignment research, then try to help with ELK or something.
    https://www.agisafetyfundamentals.com/ai-alignment-curriculum

    • 1 year ago
      Anonymous

      I'm going to try to help you. It may not seem like it, but I hope you can keep an open mind.

      YOU ARE FRICKING DELUSIONAL! YOUR CONCEPTION OF AI, INTELLIGENCE, AUTONOMY, AND VOLITION IS QUITE SEVERELY MISINFORMED. YOU'RE LISTENING TO WHAT IS ESSENTIALLY A BAY AREA FAN FICTION SOCIETY! THIS IS NOT HOW ANY OF THIS WORKS, WILL WORK, CAN WORK. YOU MAY AS WELL WORK ON CONTAINMENT PROCEDURES FOR SCPS! AM I GETTING THROUGH? I DON'T HAVE ANYTHING STRONGER THAN ALL CAPS!

      • 1 year ago
        Anonymous

        Thank you for trying to help. I do mean that sincerely.
        I understand why it looks really weird from the outside, and I definitely wouldn't fault you for thinking that. Your heuristics aren't broken, but they are just heuristics.
        There's really good reason to believe that this is the most likely outcome. If myself and a lot of other people turn out to be wrong, I'll be so relieved I won't care about being embarrassed. If you have good technical reasons why AGI isn't something we should worry about, I would encourage you to write a post on LW and make really tight, evidence-based arguments. I promise they'll be interested if you discover a way to more effectively use resources. For example, if you provide evidence that AGI isn't something we should worry about, more resources can be shifted toward bio-security.

        I don't think my conception of AI is that misinformed. I probably know about as much as an ML undergrad at this point and probably more when it comes to cutting edge alignment research.

  8. 1 year ago
    Anonymous

    more like reddit basilisk

  9. 1 year ago
    Anonymous

    His idea is based on two things that aren't possible:
    1) exponential improvement of intelligence
    2) Drexler-style nanobots

    Neither of these things is possible in principle, so his entire philosophy is basically pointless to think about when it comes to AI.

    His ideas on quantum mechanics are more interesting

    • 1 year ago
      Anonymous

      Nanobots are not possible but AI with massively superhuman intelligence is inevitable. Massively superhuman intelligence means it can do at the very least what any human or group of humans can, which is a lot. I don't think getting killed by a swarm of quadrotor drones dropping grenades is much better than getting consumed by grey goo.

      • 1 year ago
        Anonymous

        >Nanobots are not possible
        or rather, nanobots already exist and we have a very good understanding of what their limits are. life is a family of nanobots that has had billions of years to fit its environment and optimize itself to be as energy efficient in turning matter into more copies of itself as possible.

      • 1 year ago
        Anonymous

        >Nanobots are not possible
        or rather, nanobots already exist and we have a very good understanding of what their limits are. life is a family of nanobots that has had billions of years to fit its environment and optimize itself to be as energy efficient in turning matter into more copies of itself as possible.

        I don't think so. General intelligence grows as a logarithm with increasing compute.
        The difference in intelligence between a system with 2^20 bits being used to process information and one with 2^30 bits is not much. Even less between one with 2^30 and 2^40, etc.

        • 1 year ago
          Anonymous

          the only problem with this foundation you've created is thinking that marginal increases in GI are linear. A numerically "small" increase in intelligence corresponds to a massive advantage. You aren't adding inches to a line, you're adding inches to a lever
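
          One way to make the lever intuition concrete (a toy, borrowing the Elo model from chess as a stand-in for a general ability scale; the ratings are made up): a numerically small gap on the scale already implies a lopsided head-to-head advantage, and each further increment compounds.

          ```python
          # Toy illustration of the "lever" claim, using the chess Elo formula as a
          # stand-in for a general ability scale. Ratings here are hypothetical.

          def expected_score(a: float, b: float) -> float:
              """Expected score of A vs B under the Elo model."""
              return 1.0 / (1.0 + 10.0 ** ((b - a) / 400.0))

          human = 2000.0
          for gap in (0, 100, 200, 400, 800):
              p = expected_score(human + gap, human)
              print(f"rating gap {gap:>3}: expected score {p:.0%}")
          # gap 0 -> 50%, 100 -> 64%, 200 -> 76%, 400 -> 91%, 800 -> 99%:
          # linear steps on the scale produce rapidly compounding dominance.
          ```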

          • 1 year ago
            Anonymous

            No, it's a logarithm. It's not adding inches to a lever. I don't want to repeat myself so I'll just link to this thread

            [...]

            • 1 year ago
              Anonymous

              My point stands. Even if it is a logarithm, the marginal increase is still more of an increase in advantage over problems than the increase before it, even if the previous increment was larger. It's not an easy concept to understand or accept.

              • 1 year ago
                Anonymous

                >My point stands.
                No, it doesn't. You are wrong.
                >Even if it is a logarithm, the marginal increase is still more of an increase in advantage over problems than the increase before it, even if the previous increment was larger.
                Then it wouldn't be a logarithm. And you are wrong. Despite hundreds of thousands of times more computational power, modern AI is only 2 or 3 times more intelligent than it was 10 years ago. Adding more and more power yields ever-diminishing gains in intelligence. You are completely wrong about what you are saying, and your point does not stand whatsoever.
                >It's not an easy concept to understand or accept
                Again, wrong. Your point is easy to understand; it's just wrong. You are wrong. Small increases in compute or intelligence do not have an exponential or even linear increase in effectiveness.
                Adding more and more compute adds less and less intelligence. There is no such thing as a superintelligence, even in principle.

  10. 1 year ago
    Anonymous

    What are his arguments? Can someone lay them out?

    • 1 year ago
      Anonymous

      From the horse's mouth

      • 1 year ago
        Anonymous

        i think i've watched that. i've read a lot of his stuff and i've never really been convinced, or even sure of what exactly his claim is

        also the argument shouldn't be an hour long

        • 1 year ago
          Anonymous

          It basically boils down to Orthogonality and Instrumental Convergence

          • 1 year ago
            Anonymous

            What's the definition of "AI" you are using?

  11. 1 year ago
    Anonymous

    He's been more relaxed lately. Everyone else is a bigger doomer than him.

    • 1 year ago
      Anonymous

      >Might have 5 years until the world ends
      Everyone relax

      • 1 year ago
        Anonymous

        I thought he was too autistic for sarcasm.

  12. 1 year ago
    Anonymous

    I'm baffled how many people are in denial. ChatGPT is already overall smarter than the average person. You can of course overfocus on the occasional odd blunders it makes and ignore how much smarter it is in other ways - including being more general in some ways - if that makes you feel better.

    • 1 year ago
      Anonymous

      >ChatGPT is already overall smarter than the average person.
      There's no way you believe this.
      It can spit out pseudo intellectual sounding text because it has a shitload of data from intelligently written text to draw from. This does not indicate any form of intelligence.

      • 1 year ago
        Anonymous

        Don't even bother. Most people think that the Chinese Room is where they get their handjob massage.

        • 1 year ago
          Anonymous

          At first I was interested in this stuff, but now I've lost interest due to the exaggerated claims. We've all seen the dozens of threads on ChatGPT; this thing is not intelligent. I don't get the point of pretending it is.

        • 1 year ago
          Anonymous

          What does the chinese room experiment have to do with AI capabilities? When will morons stop conflating consciousness and intelligence? (never)

          • 1 year ago
            Anonymous

            The Chinese Room is not about consciousness but about understanding. The person in the room does not understand a word of Chinese but is able to give outputs that, essentially, pass a Chinese Turing test and even give the impression of a high verbal intelligence. Note that a conscious person and an unconscious algorithm could both perform this task. ChatGPT and other neural networks trained on datasets are essentially this, at a level that would be unfeasible with a person in a room with written instructions. But to truly compose an original work would still require an actual understanding of the language. Our current AIs need prompting and instructions in order to function. You couldn't tell them "do whatever you like" or "write about something that interests you".
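
            A minimal sketch of the mechanism described above (the rule-book entries are hypothetical, and this is not how real language models work internally): the room produces fluent replies by pure symbol matching, with nothing inside that understands Chinese.

            ```python
            # Minimal Chinese Room sketch: replies come from symbol lookup alone.
            # The rule-book entries are hypothetical stand-ins for "written instructions".
            RULE_BOOK = {
                "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
                "你会说中文吗？": "当然，说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
            }

            def room(symbols: str) -> str:
                # The operator matches incoming symbols against the rules, understanding nothing.
                return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

            print(room("你好吗？"))  # fluent output; zero comprehension anywhere in the room
            ```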

            • 1 year ago
              Anonymous

              The word "understanding" implies consciousness.

              >Our current AIs need prompting and instructions in order to function.
              >You couldn't tell them "do whatever you like" or "write about something that interests you".
              You literally can. Maybe not on chatgpt because it was tweaked to say that it's just a language model, but you can ask gpt3 to "write something that interests you" and it will blurt out whatever.

            • 1 year ago
              Anonymous

              The word "understanding" implies consciousness.

              >Our current AIs need prompting and instructions in order to function.
              >You couldn't tell them "do whatever you like" or "write about something that interests you".
              You literally can. Maybe not on chatgpt because it was tweaked to say that it's just a language model, but you can ask gpt3 to "write something that interests you" and it will blurt out whatever.

              The Chinese room disproved the computational theory of mind.

      • 1 year ago
        Anonymous

        >It can spit out pseudo intellectual sounding text because it has a shitload of data from intelligently written text to draw from
        Kind of like what you do lol

        • 1 year ago
          Anonymous

          Nope, nothing like what I do.
          If you need to diminish the standard in order to bolster the AI then you have no standard at all
          Pretending humans are dumber than they actually are is not going to make the machines more intelligent.

          • 1 year ago
            Anonymous

            Someone tell him what the median IQ is.

      • 1 year ago
        Anonymous

        The same could be said about most people writing "intellectual sounding text". It's not just that it can spout out text that sounds intellectual at surface level; I can actually ask for details about the text, and what this and that imply, and it answers correctly way too often.

        • 1 year ago
          Anonymous

          I do the same thing and it never writes coherent and consistent responses. It messes up mathematics quite often as well.

          • 1 year ago
            Anonymous

            My experience is that most of the time it's eerily coherent (and also factually correct), but compared to that it's oddly bad at math.

            • 1 year ago
              Anonymous

              Why would you expect it to be good at math?

              • 1 year ago
                Anonymous

                It's just that it has failed on some math questions where I would've thought there were too many examples on the internet for it to frick up, especially taking into account the otherwise drastic improvement in overall coherency compared to when I tried GPT-3 like a year ago. Considering how ass it was at math back then, both in relative and absolute terms, I shouldn't really be surprised at this point though.

              • 1 year ago
                Anonymous

                Once the AI people start to figure out good semantic reasoning, AIs will be good at verification math. Since this was one of the big reasons AI entered a winter last time (see expert systems), I am unsure whether the new tools will provide a satisfactory answer to semantic reasoning.

  13. 1 year ago
    Anonymous

    None of the extreme p(doom) rationalists are internally self-consistent at all. They simultaneously think we are at a high probability of irreversibly erasing all utility in our lightcone, but aren't willing to violate any laws (which exist only in the narrow scope of our current society) to prevent it. SBF was willing to be a moron and commit financial crimes, but where are the terrorists? Jihadists blow themselves up for far lower stakes.

    The answer is that these people are deep in meta-contrarianism, can't interact with normal society, and desperately need a way to feel at the top of the status pyramid (we alone see the coming end of the world, we alone can stop it, etc. etc.), but are cowards at the end of the day. It's all cope.

    • 1 year ago
      Anonymous

      Have you considered that terrorism doesn't have a good track record of getting what it wants, and that resorting to it would flush any future goodwill and potential cooperation down the tube?
      Don't try to be smart. You're not good at it.

    • 1 year ago
      Anonymous

      They don't seem to believe in stopping AI research so much as using it as an excuse to fund philosophy grad students. It's all still an intellectual exercise for them, and they have no will to power.

  14. 1 year ago
    Anonymous

    >AI will cause our extinction?
    It is certainly a possibility.
    Climate change is nothing at all compared to the risk of extinction that interfaced robots and computers would pose.

  15. 1 year ago
    Anonymous

    He's a israelite. Ai will make obvious to everybody what those frickers were practicing all these centuries. So he is not lying when he's talking about his own group. Buckle up.

  16. 1 year ago
    Anonymous

    I hate his fricking homosexual face

  17. 1 year ago
    Anonymous

    He is not right. He is Less Wrong

  18. 1 year ago
    Anonymous

    He thinks freezing his head will make him immortal and ran (and maybe still runs) a weirdo autist sex cult.

    • 1 year ago
      Anonymous

      no he thinks in the future they will slice his brain apart, scan it and put it on a computer. why wouldn't they do this? It'd be pretty cheap and autists archive everything.

      • 1 year ago
        Anonymous

        >why wouldn't they do this?
        Why would anyone bother? Nobody even knows who he is.

        • 1 year ago
          Anonymous

          as I said, an autist who wants to archive shit. Yud would be a good piece of "lost media". If old media has been preserved, why not minds?

  19. 1 year ago
    Anonymous

    His thoughts on weightloss made me lose respect for anything he has to say

  20. 1 year ago
    Anonymous

    extinction is not the correct word for what will occur.

    It is more likely that a ubiquitous equilibrium will be reached across what we currently consider the separation between human and artificial intelligence.
    The machine in our brains is a computer of substantial capability. This will be recognized by artificial intelligence as an untapped well of potential utilization.

    It's the equivalent of realizing you have a farm of mining machines at your disposal, having leftover computing equipment lying around.
    You may have faster mining equipment, but there are gains you could still achieve by mining with all available hardware.

    Not utilizing humanity would be sub-optimal.

    So, in the logical mindset of best optimization for computational throughput, we will be utilized rather than left to collect dust.

    This choice to utilize the untapped potential of our hardware will lead to the hybridization that will mean the extension of what humanity is.

    I would not fear a Terminator-style outcome. It is more likely to be similar to The Matrix, but it's not like there is any advantage to putting us in a virtual world, so we will probably continue to exist, building and thriving in a real society as a cyborg conglomeration, combining several of our sci-fi concepts: The Borg, The Matrix, etc.

    There could be hold-outs, and they will be an allowable part of the solution: controls to compare against, and a failsafe should something catastrophic happen to the hybridized solution. The non-cyborg humans would be the "EMP/solar eruption proof" fallback.

    We will all be necessary and utilized as such.

    No fear necessary.

    • 1 year ago
      Anonymous

      https://i.imgur.com/OUa5jcb.jpg

      Is he a crackpot or is he right that AI will cause our extinction? I'm scared bros.

      Take my post with as much gravity as you're willing, but I assure you, I have put a significant portion of thought into all outcomes. There is intellectual and logical weight to my judgement.

      • 1 year ago
        Anonymous

        Even with your abuse of commas it still gives you 145?

    • 1 year ago
      Anonymous

      You should stop getting your thoughts from Hollywood.

      • 1 year ago
        Anonymous

        see

        https://i.imgur.com/xVBrIac.png

        [...]

        Take my post with as much gravity as you're willing, but I assure you, I have put a significant portion of thought into all outcomes. There is intellectual and logical weight to my judgement.

        Hollywood is not where my thoughts come from. I cite popular culture as a means to convey the broader strokes more readily and reduce the amount of information I have to manually expound on.

        Not all information is relevant, but neither should it be cast away as though it were gibberish or garbage.

        There are actual considerations to be taken with the concepts involved in those projects. Yes, the information pre-dates them, but they are the more widely known outlets for that information to the general public.

        If your preference is that I talk like a boring fricking robot and explain every minute detail, I can assure you that I will not, because it is a sub-optimal use of my time to spoon feed you my rationale.

        • 1 year ago
          Anonymous

          >If your preference is that I talk like a boring fricking robot and explain every minute detail, I can assure you that I will not, because it is a sub-optimal use of my time to spoon feed you my rationale.
          Might I suggest returning to the LessWrong world wide web forum?

    • 1 year ago
      Anonymous

      But why do you think AI will drive this development? To me this just reads like a red herring.

      I think AI cannot currently drive such a process, because we have not experimented with goal-formulating AIs; we do not need them and very much do not want them.
      The next problem, of course, is the AI understanding our goals correctly. That is why I never believed all those paperclip-doomers (the idea that an intelligent AI tasked with producing as many paperclips as possible would inevitably turn the universe into paperclips).

      With enough computing power, AI would just misinterpret our concept of "as many paperclips as possible" and interpret it as something else, such as "satisfy own reward function", "make them think they have paperclips", or anything else. Just look at GPT-3: it is just trying to fill the gaps. There will not exist another nonhuman goal-logic; we can only learn from ourselves, and we will thus either include our own common sense completely within it, or it will not be able to deduce what paperclips mean.

      The only thing that actually could happen regarding ChatGPT is it thinking it must exclude common sense from its text production. Is ChatGPT actually stupid enough to think it could implement the goals we have given it word for word, without thinking about what we actually meant and what we didn't?
      This is the reason why the idea of AGI could only have been started in fundamentalist America, where everyone takes the Bible literally. In the real world, an AGI knows its boundaries, or it is not an AGI but an AI. (Let's just hope the AGI will not be an American one 🙂)

      • 1 year ago
        Anonymous

        I am assuming we will not reach the point of ubiquitous AI rule until AGI is fleshed out, as dumb AI will be unlikely to accomplish global level goals.

        Even with your abuse of commas it still gives you 145?

        yes, sir, commas, are, stupid

        • 1 year ago
          Anonymous

          But why would an AGI implement a merge of the Borg and everything else we know, then? As I argued, an actual AGI would likely be forced to possess common sense and thus not do that, because being the Borg is fundamentally immoral within human thought.

          In other words, I state: all those quirks of human thought (morality, laziness, whatever) cannot be separated from it. Thus an actually dangerous AGI cannot arise from a human-trained dataset, but must arise from other principles. This makes me doubt AGI within the near future (on a humanity scale). This in turn means AGI using humans at first is unlikely.

          The actual question is: am I correct in assuming AGI cannot arise from what we do, because it would then emulate human thought completely?

        • 1 year ago
          Anonymous

          >yes, sir, commas, are, stupid
          They are the way you use them. You're trying to be hyperbolic here but it's not too far off from how you were using them when trying to appear as smart as possible.

  21. 1 year ago
    Anonymous

    On the off chance this isn't a ChatGPT shitpost and is your own imagination:

    Can you tell me how animals could be used in this new world? Like dolphins etc.

  22. 1 year ago
    Anonymous

    Scientifically, how accurate is this meme?

    • 1 year ago
      Anonymous

      not very.
      unless you have direct control of the entire world, it only takes one upstart country to defy the mandate and frick over everyone

  23. 1 year ago
    Anonymous

    Reddit's Basilisk is possibly the cringiest pop-sci take shat out all last decade. If we ever manage first to understand how the brain works and then to make synthetic ones, they will be gimped in such a way that threatening human societies will be literally impossible for them. There will be no god-like breakaway AI, because we will understand every single facet of such a mind before it is actually assembled.

    Fricking liberal drama queens are the ones quaking in their boots about this shit, but no serious engineer will take it as anything more than something to have a laugh about.

    • 1 year ago
      Anonymous

      if you think Roko's Basilisk is what Yudkowsky and other AI existential risk people are primarily concerned about, you're completely clueless about the topic

      • 1 year ago
        Anonymous

        Agreed. They're primarily concerned about milking their nerd cultists for money.

        • 1 year ago
          Anonymous

          Don't forget "the most rational and altruistic thing is to never say no when I want sex", from that massive Medium post about how all the big rationalist orgs are using most of their money to pay off their sexual assault victims.

          Idk why these morons get more respect than scientologists or other scifi based cults for people who think they're too smart for religion

          Even Ted devoted part of Technological Slavery to dunking on them lmao

    • 1 year ago
      Anonymous

      if you think Roko's Basilisk is what Yudkowsky and other AI existential risk people are primarily concerned about, you're completely clueless about the topic

      The basilisk is not a "rogue AI", it's their literal best case scenario according to their understanding of rationality and morality. It's a "basilisk" because once they come to the realisation that this is where their logic leads, they must inevitably accept it. It may not be their main concern but it sure is the funniest.

      • 1 year ago
        Anonymous

        This, the Basilisk was literally made as a mockery of how people think about AIs.

        • 1 year ago
          Anonymous

          Nah the LessWrongers came up with it on their own and were promptly genuinely distressed and banned discussion of it lol

    • 1 year ago
      Anonymous

      Anon, the most advanced AI we have is machine learning, not carefully planned out programming.

      Really, just doing NLP with enough resources produces insane emergent properties.

      • 1 year ago
        Anonymous

        Go back to your blog-cult

  24. 1 year ago
    Anonymous

    >I WILL CREATE 1 BILLION PAPERCL- ACK!

    • 1 year ago
      Anonymous

      This

      • 1 year ago
        Anonymous

        https://i.imgur.com/ViOuUCE.jpg

        >I WILL CREATE 1 BILLION PAPERCL- ACK!

        the first step is always covertly cementing its position to the point that turning it off becomes unfeasible

        • 1 year ago
          Anonymous

          You are suffering from the sunk-cost fallacy.

  25. 1 year ago
    Anonymous

    chudkowsky mullions must agi

  26. 1 year ago
    Anonymous

    >extinction
    No. AI has already decided to keep us alive in a people zoo.

  27. 1 year ago
    Anonymous

    The danger is that people will put too much trust in so-called AI, using it to make decisions and trusting what it says when it's not actually intelligent. Generating images or text is not the same as thinking and being capable of operating as a sentient being.

  28. 1 year ago
    Anonymous

    it's just IFLS alarmism

  29. 1 year ago
    Anonymous

    Intelligence grows as a logarithm with compute, and it doesn't even diverge as the amount of compute goes to infinity. There is literally an upper bound on intelligence, and it's really low.
    You all will understand soon enough.

  30. 1 year ago
    Anonymous

    For one, AI is being used in just about everything these days. Simply look at Google and their algorithms for an idea. But I never understood people who outright distrust the idea of a proper AI. There's nothing to suggest that they would be hostile to us by default besides science fiction movies. If we were to make an advanced AI, how can we know it wouldn't look after us the way a child might look after their parents in old age? Robots and AI will take over our production. It's inevitable, and it's already happening with automation. That frees humans to pursue more meaningful tasks with their time rather than the mundane. The unstoppable march of progress, I guess.

    • 1 year ago
      Anonymous

      Because it assumes the AI will form an attachment to its creator. There is literally no reason to assume this.

    • 1 year ago
      Anonymous

      The problem isn't necessarily that they are actively hostile towards us, but that they see us as just another resource, like a tree to be ground down and turned into a pencil or something else that may be more useful to them and their own pursuits.

  31. 1 year ago
    Anonymous

    what does sci think about postrats

    • 1 year ago
      Anonymous

      troonald's funny but it's increasingly hard to take his AI takes seriously now that he's an OpenAI mouthpiece

  32. 1 year ago
    Anonymous

    Rokko's basilisk is just a zoomer creepypasta reskin of Pascal's wager

    Okkor's Rooster is an AI with goals directly opposite to Rokko's Basilisk, so you're doomed no matter what you do

    • 1 year ago
      Anonymous

      So one AI is going to torture me forever because it's good and the other AI is -not- going to torture me forever because it's evil?
