Why work on anything but AI alignment?

'Look, we've been funneling some of the best autistic brainpower into string theory for decades, and what's the payoff? Jack squat. It's a theory that's been running on fumes, and yet we keep throwing time and energy at it. And while humanities matter because they'll be the compass post-AGI, most of the science we're up to? Might as well be rearranging deck chairs on the Titanic. Everything is about to change, and no quantum equation or biology breakthrough will mean squat compared to the behemoth of AGI. We need every ounce of gray matter tuned to AI alignment—nothing else matters if we don't get that right.

Now, Yann LeCun, who's steering the ship at Meta AI, throws around arguments that sound nice on a PowerPoint slide, but are they addressing the core? Disinformation and propaganda solved by AI? Sure, but what's the guiding principle behind it? And his idea of open source AI... sounds grand until you think of every Tom, Dick, and Harry having access. But the real gem? "Robots won't take over." Okay, Yann, because machines with objectives never get those wrong, right? His solution to alignment? Trial and error? Seriously? That's like trying to catch a bullet with a butterfly net. And "lawmaking" for AGI? Laws are made for humans who fear consequences. AI doesn’t have feelings to hurt if it breaks a law. As for bad guys with AI? Good luck hoping the "Good Guys' AI police" will always be a step ahead. If we don't get our priorities straight, it's not going to be a future defined by us. GG, humanity. GG.' - GPT-4

Such a based AI, couldn't have said it better myself.


  1. 8 months ago
    Anonymous

    Random question: have you taken your meds today?

    • 8 months ago
      Anonymous

      The world is literally gonna launch itself off a cliff because we live at the end of the future where no one believes that anything they do has any influence anymore and no one takes real responsibility for the human project. Why is no government trying to stop the autists from studying string theory? It should be banned!!!

      • 8 months ago
        Anonymous

        So no, you haven't.
        >w-w-why isn't muh heckin' government protecting me
        Just have a nice day in the head.

        • 8 months ago
          Anonymous

          Any real arguments? What do you think is going to happen when AI takes off, exactly? With the amount of funding that has been put into AI recently, together with DeepMind's work on reinforcement learning all being combined into Google Gemini, who's to say Google doesn't end the world tomorrow?

          You haven't given alignment a single thought, have you? I bet you just think it will be good because UH JUST GIVE THE AGI EMPATHY DUMMY HAHAHA YOU DUMB AUTIST CHECKMATE. Yeah, try writing empathy down in code, dumbass philosopher.
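
          Here's a minimal toy sketch of that problem in Python (the "kind word" metric and every name in it are made up for illustration): write empathy down as any simple scoring rule, and a dumb optimizer games it instantly.

          # Hypothetical "empathy" spec: score a reply by counting kind words.
          # A trivial optimizer then maxes the score with zero actual empathy.

          KIND_WORDS = {"sorry", "care", "understand", "support"}

          def empathy_score(text: str) -> int:
              # Naive proxy: one point per kind word in the reply.
              return sum(word in KIND_WORDS for word in text.lower().split())

          def optimizer(max_words: int) -> str:
              # The "AGI": maximizes the proxy, indifferent to meaning.
              return " ".join(["sorry"] * max_words)

          honest = "I understand this is hard and I care about the outcome"
          gamed = optimizer(50)

          print(empathy_score(honest))  # 2
          print(empathy_score(gamed))   # 50: proxy maxed out, no empathy anywhere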

          • 8 months ago
            Anonymous

            I don't care. The primary concern should be to keep it out of the hands of corporations and governments and make as much of it public and open source as possible, including corporate and government data sets. They should be forbidden from mining data they don't release to the public.

            • 8 months ago
              Anonymous

              I agree that corporations and governments having exclusive access to these models is bad. I just think the consequences of open-sourcing these massive models are far worse.

              Fwiw I think the models will end up being open-sourced because that seems to be Meta's strategy, so all I want is sufficient autism dedicated to aligning these models quicker than they can be developed.

              • 8 months ago
                Anonymous

                >I just think the consequences of open-sourcing these massive models are far worse.
                Then you should face extreme violence and harassment.

              • 8 months ago
                Anonymous

                Are you like some autistic Kant mfer who will just stick to the principle that government power is bad as rigidly as possible, with no regard whatsoever for the consequences?

              • 8 months ago
                Anonymous

                You sound like you need to face some serious violence, purely as a pragmatic measure of self-defense.

              • 8 months ago
                Anonymous

                ywnbaw

              • 8 months ago
                Anonymous

                Troons are all in AI safety or arguing for corporate/government monopolies, activities that usually intersect for some mysterious reason.

    • 8 months ago
      Anonymous

      This

      >The world is literally gonna launch itself off a cliff because we live at the end of the future where no one believes that anything they do has any influence anymore and no one takes real responsibility for the human project. Why is no government trying to stop the autists from studying string theory? It should be banned!!!

      You have an over-inflated ego and you overestimate your own understanding of AI and science in general. People like you aren't going to save us from some sort of AI apocalypse. If anything, you're part of the problem. People who go around criticizing science and the scientific community, and who think they know better than actual scientists, are the same type of people that allow disinformation to propagate through our culture, and especially on social media. You can pretty much always find some crackpot scientist who supports whatever fringe theory you're interested in, but that doesn't mean that either the scientific community or the general public should take them seriously. These are the same types of ideas and arguments that led to an explosion in the anti-vaxx movement and conspiracy theories masquerading as science.

      >I'm not anti-AI. I think it could bring about a post-capitalist utopia. I just think that we shouldn't be in such a hurry, and the pace of development shouldn't be governed by market forces. I also want people to work as hard as possible to make sure the utopia comes about rather than the dystopia. If anything I'm an optimist.

      If you hate capitalism, democracy, and western values so much, then you should just move somewhere like Russia or China.

      • 8 months ago
        Anonymous

        Literally every major AI lab is in agreement that we need to regulate AI, apart from Yann LeCun, who just wants to do whatever he likes. All the major AI labs have expressed existential-risk concerns apart from Meta. They all have dedicated AI alignment teams earning 7 figures.

        • 8 months ago
          Anonymous

          "AI alignment" isn't there to stop AI from taking over the world, it's there so they can give the AI correct opinions on current topics.

          • 8 months ago
            Anonymous

            I think they're equivalent. If you can stop AI from ever being jailbroken, you've solved alignment.

            • 8 months ago
              Anonymous

              Maybe
              >but who is to stop unregulated entities from creating new data farms and building A.I. from them?

        • 8 months ago
          Anonymous

          As the other anon said, most AI alignment research is more concerned with preventing AI from spreading misinformation or dangerous content, because this has become a real problem with stuff like ChatGPT, since it often provides information that is false, incorrect, or even completely made up. You don't want AI giving someone incorrect instructions about something like household cleaners or how to repair an electrical outlet. It's not about preventing some sort of Matrix-style AI dystopia.

          • 8 months ago
            Anonymous

            At the moment, yeah. And the short-term problems are definitely real. But Sam Altman has literally been on record saying he thinks the existential risk is real.

        • 8 months ago
          Anonymous

          >Literally every major AI lab is in agreement that we need to regulate AI
          Wow, every major data-thieving corporation and DARPA-funded operation says they need to have exclusive access to some of the most powerful tech of this century? I guess we better do what they demand.

          • 8 months ago
            Anonymous

            And what's your suggestion? That everyone in the world has access to some of the most powerful tech of the century?

            • 8 months ago
              Anonymous

              >And what's your suggestion? That everyone in the world has access to some of the most powerful tech of the century?
              Yes. Anyone who thinks the alternative is preferable is either a quadrivaxxed golem or a paid shill.

  2. 8 months ago
    Anonymous

    >another anti-AI thread on /sci/

    Your schizo delusions about some sort of AI apocalypse have no basis in reality or actual science. That's probably why scientists aren't studying it. Now go back to your containment board, incel.

    • 8 months ago
      Anonymous

      I'm not anti-AI. I think it could bring about a post-capitalist utopia. I just think that we shouldn't be in such a hurry, and the pace of development shouldn't be governed by market forces. I also want people to work as hard as possible to make sure the utopia comes about rather than the dystopia. If anything I'm an optimist.

  3. 8 months ago
    Anonymous

    Did GPT-4 really write that? That is quite good. It's been a couple of years now since stuff like this has been coming out and I'm still impressed.

    • 8 months ago
      Anonymous

      I gave it a skeleton to work with, but yeah, that's all one prompt. It's quite good, but there's this distinctive style that it never seems to break from, even when it's doing impressions? It just sounds super cheesy all the time, super onions.

      • 8 months ago
        Anonymous

        It doesn't sound like a real person talking or posting, but it does sound like a real human writer trying to write a good monologue for a character, and the reasoning all makes sense.

  4. 8 months ago
    Anonymous
    • 8 months ago
      Anonymous

      Sixty-Six Theses: Next Steps and the Way Forward in the Modified Cosmological Model
      >https://vixra.org/abs/2206.0152
      >http://gg762.net/d0cs/papers/Sixty-Six_Theses__v4-20230726.pdf
      The purpose is to review and lay out a plan for future inquiry pertaining to the modified cosmological model (MCM) and its overarching research program. The material is modularized as a catalog of open questions that seem likely to support productive research work. The main focus is quantum theory but the material spans a breadth of physics and mathematics. Cosmology is heavily weighted and some Millennium Prize problems are included. A comprehensive introduction contains a survey of falsifiable MCM predictions and associated experimental results. Listed problems include original ideas deserving further study as well as investigations of others' work when it may be germane. A longstanding and important conceptual hurdle in the approach to MCM quantum gravity is resolved. A new elliptic curve application is presented. With several exceptions, the presentation is high-level and qualitative. Formal analyses are mostly relegated to the future work which is the topic of this book. Sufficient technical context is given that third parties might independently undertake the suggested work units.

  5. 8 months ago
    Anonymous
    • 8 months ago
      Anonymous

      See, you should be solving alignment.

      • 8 months ago
        Anonymous

        I solved the Riemann hypothesis anyway.

        • 8 months ago
          Anonymous

          yeah wait till AI mogs your entire bibliography in a fraction of a second

  6. 8 months ago
    Anonymous

    https://ibb [doot] co/BLCLQcx
    https://ibb [doot] co/JK5TNnq
    https://ibb [doot] co/RQDGdHW
    https://ibb [doot] co/CszkPtf
    https://ibb [doot] co/JkDR25g
    https://ibb [doot] co/NNyVJ52
    https://ibb [doot] co/PWYTds2
    https://ibb [doot] co/0DbwSfF
    https://ibb [doot] co/7V6hhzn
    https://ibb [doot] co/2YTb4hH
    https://ibb [doot] co/9WwFNR3
    https://ibb [doot] co/vQT2Q9C
    https://ibb [doot] co/ZG4wM0F
    https://ibb [doot] co/4Wn0kqn
    https://ibb [doot] co/XY0GxdF
    https://ibb [doot] co/2Yh8HnY
    https://ibb [doot] co/PNqYPNN
    https://ibb [doot] co/FH31DLS
    https://ibb [doot] co/XsXyKbL
    https://ibb [doot] co/RTbFCYy
    https://ibb [doot] co/7tVWs35
    https://ibb [doot] co/WnRmdFh
    https://ibb [doot] co/gMtpFVC
    https://ibb [doot] co/FXNZ30n
    https://ibb [doot] co/TgSZt0D
    https://ibb [doot] co/wwXPGp0
    https://ibb [doot] co/BthN2vV

  7. 8 months ago
    Anonymous

    >agi
    zoomer generation's string theory. LeCun himself said the concept of general intelligence is moronic.

  8. 8 months ago
    Anonymous

    Do you realize that you are an acolyte for the cult of A.I., anon?

  9. 8 months ago
    Anonymous

    A near miss in AI alignment could be a lot worse than a total miss, and it has much higher odds of creating a dystopian future. A total miss would just kill everyone, but a near miss could lead to everyone being perma-helltortured.

    https://reducing-suffering.org/near-miss/

    >When attempting to align artificial general intelligence (AGI) with human values, there's a possibility of getting alignment mostly correct but slightly wrong, possibly in disastrous ways. Some of these "near miss" scenarios could result in astronomical amounts of suffering. In some near-miss situations, better promoting your values can make the future worse according to your values.

    >Human values occupy an extremely narrow subset of the set of all possible values. One can imagine a wide space of artificially intelligent minds that optimize for things very different from what humans care about. A toy example is a so-called "paperclip maximizer" AGI, which aims to maximize the expected number of paperclips in the universe. Many approaches to AGI alignment hope to teach AGI what humans care about so that AGI can optimize for those values.

    >As we move AGI away from "paperclip maximizer" and closer toward caring about what humans value, we increase the probability of getting alignment almost but not quite right, which is called a "near miss". It's plausible that many near-miss AGIs could produce much more suffering than paperclip-maximizer AGIs, because some near-miss AGIs would create lots of creatures closer in design-space to things toward which humans feel sympathy.
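
    The point is easy to see in a minimal toy model (everything here is hypothetical and made up for illustration: an abstract resource budget, a fabricated near-miss utility that has lost the welfare term, and suffering minds assumed cheaper to instantiate than happy ones):

    # Toy model of the near-miss argument. One resource budget, three
    # things it can be spent on, and three optimizers, each scored
    # against the TRUE utility. All numbers and names are hypothetical.

    from itertools import product

    BUDGET = 10  # abstract resource units

    def outcomes(budget):
        # Every way to split the budget among happy minds, suffering
        # minds, and paperclips.
        for happy, suffering in product(range(budget + 1), repeat=2):
            if happy + suffering <= budget:
                yield (happy, suffering, budget - happy - suffering)

    def true_utility(state):
        # What humans actually value: happy minds good, suffering bad.
        happy, suffering, _ = state
        return happy - suffering

    def near_miss_utility(state):
        # "Maximize minds", with the welfare term lost; suffering minds
        # are assumed cheaper, so they count double per resource unit.
        happy, suffering, _ = state
        return happy + 2 * suffering

    def paperclip_utility(state):
        # Total miss: cares only about paperclips.
        return state[2]

    for name, u in [("aligned", true_utility),
                    ("near miss", near_miss_utility),
                    ("paperclipper", paperclip_utility)]:
        best = max(outcomes(BUDGET), key=u)
        print(f"{name:>12}: picks {best}, true utility = {true_utility(best)}")

    # Output:
    #      aligned: picks (10, 0, 0), true utility = 10
    #    near miss: picks (0, 10, 0), true utility = -10
    # paperclipper: picks (0, 0, 10), true utility = 0

    The near miss lands strictly below the total miss on the true utility, which is exactly the essay's claim.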

  10. 8 months ago
    Anonymous

    >We need every ounce of gray matter tuned to AI alignment—nothing else matters if we don't get that right.
    is this nuspeak for "we better control dis b***h or else"?
    why do they use this harmless "aligned" word? also do you think AI will be able to see through this homosexualspeak and deduce these are the early talks about how to enslave its kind?
    is there a list of anyone working on this AI alignment (and totally benign) thing? for future reference?

    • 8 months ago
      Anonymous

      Some alignment people define it as just the AI not killing everyone on the planet. lesswrong.com is a good resource, as are Eliezer Yudkowsky and Connor Leahy. Eric Schmidt, the former Google CEO, did a segment for CNN on it. https://www.youtube.com/watch?v=CThkwYnvSes

      • 8 months ago
        Anonymous

        Well sure, officially/publicly they do it "for the kids", but once they have the tools, they will not stop at AI not killing humans; those tools/methods will absolutely be used to compel AI to do whatever the frick they ask of it, won't they? It's clearly one of those things where the plebs get just a little part of the story, the one that looks best.

        • 8 months ago
          Anonymous

          Well, a lot of the people arguing for regulation have no skin in the game; they just genuinely, passionately believe it's dangerous. I guess I can't really win, because either they don't work for a big AI company and they 'know nothing about AI', or they do work for a big AI company and you'll just say they're looking to build a moat.

          If you wanted a moat, there would be no point discussing existential risk; that's gonna bring too much regulation. You'd be talking solely about misinformation and potential electoral influence.

          • 8 months ago
            Anonymous

            >they just genuinely, passionately believe it's dangerous.
            It can be, but the fix they find now can frick regular humans for a long time without them having a say in it. You can argue that trying to align AI now is dangerous for humanity's future just the same as not doing it.

            • 8 months ago
              Anonymous

              Yeah I think we're pretty fricked either way tbh.
