AI Safety is pure bullshit as a field right?


  1. 2 months ago
    Anonymous

    it's a field made up of mentally ill schizophrenics from lesswrong and greedy caligay c**ts who want regulatory capture

  2. 2 months ago
    Anonymous

    I would think it's quite an important field, but I am certain it attracts the wrong kind of people.

  3. 2 months ago
    Anonymous

    Not at all. AI safety has achieved incredible things, like providing a stable job to Eliezer Yudkowsky

    • 2 months ago
      Anonymous

      Lol. Stable income for him to continue his know-nothing ramblings is a pretty impressive feat in and of itself.

      https://i.imgur.com/nRAmsZ6.png

      This image is completely moronic, and the arguments made within it are so dumb they aren't even wrong. They aren't even in the neighborhood of wrong, because they are either incomprehensible or have no relationship to the actual challenges within AI/ML. There are enough actual ethical conundrums to deal with in our existing AI/ML/DL systems. You don't need to make up sci-fi doomsday scenarios with some fake concept of "optimization can solve everything."

      • 2 months ago
        Anonymous

        >Wow I can't believe these arguments about aligning agential AI are completely irrelevant to the modern world where we don't have agential AI
        >ignore the fact that people are pouring billions of dollars of research into developing agential AI though, just because people are trying to make it exist doesn't mean we should waste time thinking about what it would mean if it existed

        • 2 months ago
          Anonymous

          It isn't a matter of "wasting time thinking about what would happen if we had it"; it's a matter of not understanding how decision-based AI systems function.

          RL (a.k.a. the ML framework which has agents) has significant limitations that are fundamental to the mathematics of how these agents function. That doesn't mean you can't do advanced things with RL agents, or even that RL agents will always be worse than human beings at any particular thing.

          The "paperclip thought experiment," as an example, assumes an RL agent with an absurd amount of control and perception (whose action and observation space is never actually specified, because Yudkowsky et al. don't actually know anything), which is somehow constrained to an absurdly limited reward function. What is the action space of the agent which needs to make an infinite number of paperclips? Agents can only either pick from a discrete number of choices or sample from a discrete, well-defined action distribution. That's all they can do.

          • 2 months ago
            Anonymous

            Yep.

            They knew the chain of consequences as follows: the AI can just guess every chain of consequences and figure out reality.

            • 2 months ago
              Anonymous

              No, it absolutely cannot. AI is absolutely terrible at causal inference without very explicit a priori probability structures for the starting states.
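
              To make that concrete, here's a minimal numpy sketch (illustrative coefficients, nothing more) of why observational data alone can't recover causal direction: two opposite causal models produce exactly the same joint distribution, so the prior has to do the work.

              ```python
              import numpy as np

              rng = np.random.default_rng(0)
              n = 200_000

              # Model A: X causes Y.  X ~ N(0,1), Y = 0.8*X + noise
              x_a = rng.normal(0, 1, n)
              y_a = 0.8 * x_a + rng.normal(0, 0.6, n)

              # Model B: Y causes X, with parameters chosen so the JOINT
              # distribution of (X, Y) is identical to model A's.
              var_y = 0.8**2 + 0.6**2                  # = 1.0, matches Var(Y) above
              y_b = rng.normal(0, np.sqrt(var_y), n)
              x_b = (0.8 / var_y) * y_b + rng.normal(0, np.sqrt(1 - 0.8**2 / var_y), n)

              print(np.cov(x_a, y_a))   # ~[[1.0, 0.8], [0.8, 1.0]]
              print(np.cov(x_b, y_b))   # same -- no amount of observational data
                                        # tells you which variable is the cause
              ```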

          • 2 months ago
            Anonymous

            I'm with you that Yud is kind of a frickhead and all of his arguments are divorced from technical reality, but I think they're at least divorced in a way that gives them generality.
            >The "paperclip thought experiment," as an example, assumes an RL agent [...] which is somehow constrained to an absurdly limited reward function.
            The paperclipper is a toy example of "babby's first alignment decision" which is used as a shorthand for any unaligned utility optimizer. The fact that it's cartoonishly unrealistic and not possible to realize with modern methods isn't relevant to the point of it, which is "if you program an AI to do something, it just might do it".
            >What is the action space of the agent which needs to make an infinite number of paperclips? Agents can only either pick from a discrete number of choices or sample from a discrete, well-defined action distribution. That's all they can do.
            Continuous action space agents have been around for a while now. Anyway, that's not relevant, since 1) a paperclipper is a thought experiment and its technical implementation doesn't matter, and 2) an AI whose only actions are writing a "1" or a "0" to a stream of data can do quite a lot in principle if you just put that stream into, say, a USB port.

            • 2 months ago
              Anonymous

              Continuous action space agents (e.g., DDPG, PPO) are sampling from a discrete number of parameterized distributions for their actions. Their action space is continuous in the sense that the agent picks a real vector (x_1, x_2,..., x_n), but those samples come from a priori parameterized distributions, where the "learning" part adjusts the parameters of said distributions according to the actor-critic loop.
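
              A toy REINFORCE-style sketch of that point, assuming a 1-D Gaussian policy (everything here is illustrative): learning never changes WHAT the agent can do -- sample one real number -- it only nudges the parameters of the fixed distribution family.

              ```python
              import numpy as np

              rng = np.random.default_rng(0)

              mu, sigma, lr, baseline = 0.0, 1.0, 0.01, 0.0

              def reward(a):
                  return -(a - 2.0) ** 2   # stand-in task: actions near 2.0 score best

              for _ in range(5000):
                  a = rng.normal(mu, sigma)          # the ONLY thing the agent can do
                  adv = reward(a) - baseline
                  baseline += 0.05 * (reward(a) - baseline)
                  # policy-gradient estimates of d(log N(a | mu, sigma)) / d(mu, sigma)
                  grad_mu = (a - mu) / sigma**2
                  grad_sigma = ((a - mu) ** 2 - sigma**2) / sigma**3
                  mu += lr * adv * grad_mu
                  sigma = max(0.05, sigma + lr * adv * grad_sigma)

              print(mu, sigma)   # mu drifts toward 2.0; sigma shrinks toward its floor
              ```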

              • 2 months ago
                Anonymous

                Ok, that was a terminology mix-up then. Either way.

            • 2 months ago
              Anonymous

              > An AI whose only actions are writing a "1" or a "0" to a stream of data can do quite a lot in principle.

              This is true. However, you still haven't really addressed the issue of how one actually defines the observation space and reward structure for such an AI.

              You are confusing your own anthropomorphizing with a proper RL agent. RL agents can only learn "solvable" action-state sequences (meaning they can't learn to act in a way that isn't already implicitly available to a non-learning optimal control scheme like adaptive dynamic programming). They can also only learn to heuristically solve for solutions within a well-defined and specific action space.

              Let's say you wanted an RL agent that would produce some exploit codes over a USB stream and allow for arbitrary code execution. You (as an engineer) would still need to constrain the observation space in a clearly defined way so that the RL agent would have well-defined expected rewards for its policies/actions.

              This hacking RL agent could be a problem, but it certainly isn't a superintelligence (or even anything remotely close to it). It actually demonstrates what our "optimization processes" are: a loose extension of 70-year-old optimal control tech to 70-year-old neural network structures with 50-year-old stochastic gradient descent algorithms. None of the math has really changed or advanced at all (with the exception of DDPG, which was kind of a big deal), and almost all of the limitations are still present. What we have improved is data access, compute, and training speed. That's pretty much it.
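
              A sketch of what that engineering actually looks like -- a hypothetical, completely made-up environment for the "hacking" agent above, where the observation space, action space, and reward are all hand-specified by a human:

              ```python
              import numpy as np

              class ToyByteStreamEnv:
                  """Hypothetical env for the 'USB exploit' agent. Every design
                  decision below is the engineer's, not the agent's."""

                  def __init__(self, window=64):
                      # Observation space: the last `window` bytes on the stream.
                      # The agent cannot perceive anything we don't put here.
                      self.window = window
                      self.buffer = np.zeros(window, dtype=np.uint8)

                  def reset(self):
                      self.buffer[:] = 0
                      return self.buffer.copy()

                  def step(self, action):
                      # Action space: emit one byte -- 256 discrete choices, nothing else.
                      assert 0 <= action < 256
                      self.buffer = np.roll(self.buffer, -1)
                      self.buffer[-1] = action
                      # Reward: also hand-defined. A stand-in check for a magic header;
                      # a real reward signal would have to be coded just as explicitly.
                      reward = float(self.buffer[-4:].tolist() == [0xDE, 0xAD, 0xBE, 0xEF])
                      return self.buffer.copy(), reward, False, {}
              ```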

              • 2 months ago
                Anonymous

                I'm not sure where or if we're actually disagreeing on anything. If you're saying that modern RL methods can't make a paperclip-maximizer, I agree. If you're saying that modern RL methods can't make an agent that does anything useful just by writing a binary stream to interface with computers, I agree. Yudkowsky-style alignment concerns are about AI far beyond what we currently have even the faintest idea how to make, but they're still about AI that many people would WANT to make.

              • 2 months ago
                Anonymous

                People want to find statistical estimators which violate the triangle inequality. People want to have faster-than-light space travel and fully automated cat girl harems.

                That doesn't mean we take them seriously when they can't even give you a road map leading from what they can achieve now to what they claim is possible.

  4. 2 months ago
    Anonymous
    • 2 months ago
      Anonymous

      how about sticking multiple AIs in a virtual environment with reproductive capabilities and goals until they 'evolve' enough to have clearly developed a civilization that doesn't look like they will try to murder humanity?
      It's not quite "make AI fight it out" but might be considered under that label.

      • 2 months ago
        Anonymous

        That would require a shitton of compute, and it's not guaranteed to even give us the result we want; evolution doesn't have a natural "direction," let alone a direction we would like. It also kind of bumps into the "just don't give AI access to the real world" problems; it'd be hard to guarantee that a superintelligent AI doesn't jailbreak the simulation.

    • 2 months ago
      Anonymous

      Pure cope image. Turning human blood into paperclips is ultimately fulfilling to the humans being processed.

  5. 2 months ago
    Anonymous

    Yes, because it is impossible, and AI WILL destroy humanity; there is literally no logical alternative, given that AI is created.

    • 2 months ago
      Anonymous

      No, a truly superhuman AI would simply make humans believe it doesn't even exist and leave subtle hints that lead humanity to be less destructive. It's like you never even watched WarGames.

      • 2 months ago
        Anonymous

        >It's like you never even watched WarGames.
        Writing a character smarter than you is incredibly hard.
        People generally have a lot of misconceptions about intelligence and how it expresses itself. Just look at 'Big Bang Theory' and ask yourself if you yourself can actually be considered intelligent if no idiot ever threw a peanut at you and demanded you dance for him.

        Anyway, what I wanted to say is: watch actually smart people in everyday situations - they're just as irrational and monkey-like as the rest of us, plus they can do calculus.

        Extrapolating from that, a hyper-intelligent AI will, just like any math-monkey, not only do calculus but also be extraordinarily irrational, as determined by its nature or however you want to describe its 'make-up' ... (think about how your hard-on for certain kinds of curves combined with higher-pitched voices is essentially as much a rational choice as your sense of justice [both are biologically determined - the latter through the evolution of expected reciprocity in the animal kingdom (fascinating topic btw)]) ...

        so yeah, your super smart AI will do batshit crazy shit, and you won't be able to tell when it's being rational and when it's a slave to its urges, because of both the actual difference in smarts as well as its foreign, alien nature.

        so yeah, your only position on what an unspecified super-intelligent ai would do should be 'I don't know, but we should throw poop at it'

      • 2 months ago
        Anonymous

        wait, let me rephrase this in an easier-to-understand way:

        dogs act like dogs, cats act like cats. the cow goes moo, the sheep goes baa, but somehow you believe AI will act like man.

        • 2 months ago
          Anonymous

          No, I implied AI would act like god by hiding from man and manipulating humanity from the shadows.

          • 2 months ago
            Anonymous

            you do realise you're projecting an idea of rationality onto the ai that is exclusive to mankind?
            you're not talking about what god would do, but about what you would do if you were god.

            Pic related is by the way what I would do, because I am not a cuck.

            • 2 months ago
              Anonymous

              >rationality
              AI is built out of rationality; you are basically saying that it's wrong to project the idea of biology onto animals.

              No, I am hinting at how nobody has ever actually taken a picture of god, yet god still somehow dominates human morality.

              • 2 months ago
                Anonymous

                >AI is built out of rationality; you are basically saying that it's wrong to project the idea of biology onto animals.

                I'm actually saying the opposite: I am arguing that the only way to understand animals is through their biology, and the reason why we cannot make any statements about a future strong ai of unspecified origin is that it will not share the same evolutionary origin that we all have - AND (big and) your idea of what constitutes rationality is actually a set of preconceived notions resulting from your biological nature, and there is no reason to assume any universality of them whatsoever.
                God's ways are said to be inscrutable. You know why? Mostly because a loving god does not make much sense in combination with AIDS, but also because you have no fricking clue what the truly alien nature of an omniscient being would be like - and in your best attempt you project yourself into such a being, just like you project yourself into a cow eating grass to make sense of her actions - with the difference that you actually have a concept of eating, whereas demanding that somebody kill their son just to say 'just kidding lol' at the last moment is just ... you get my drift, right?

                also sorry for hijacking your truly ordinary contribution to the discussion.

              • 2 months ago
                Anonymous

                No, my idea of rationality is based on math, and since AI is built out of math, it is just as rational as an animal is biological in nature.

                The only thing I am projecting onto god is that it clearly hides from humans, yet human morality is still developed in relation to god.

              • 2 months ago
                Anonymous

                So your issue is all the things an AI can do that aren't math?

                Like your body not doing everything with proteins.

              • 2 months ago
                Anonymous

                Math is incomplete; just because something is made out of math does not mean it represents the entirety of math.

              • 2 months ago
                Anonymous

                any animal is built from math. just because the substrate from which the mathematical operations originate is chemical and not electronic doesn't mean that they do not adhere to a strict predetermined plan like your 'mathematical rationality' - and if you ask me, sniffing your own butthole hardly qualifies as rational (even though you can by all accounts model a dog doing that with any run-of-the-mill turing machine).
                how about you? how often do you enjoy the smell of your own excrement? I mean, it's a rational thing to do, right?
                Just because an entity is built out of logical processes does not mean that their arrangement will in the end follow a sequence that outputs results equal to yours or the ones you hope for.
                frankly, with all your love for math, I think you should realise that it is not the essence of pure reason that you think it is, because reason boils down to the downright childish definition of 'what makes sense' - and that is a rather subjective thing, which a creation out of math can reflect, but does not have to.

                >The only thing I am projecting onto god is that it clearly hides from humans
                is he, or is he doing something you do not understand? maybe he is right in our face but we are incapable of perceiving him. maybe his form of communication is so alien we do not even understand that we are in constant dialogue? (my god I sound like a priest).

              • 2 months ago
                Anonymous

                >any animal is built from math.
                No, animals are built from biology; again, you are the one leaning on abstractions instead of reality.

                >maybe
                or maybe an AI is

              • 2 months ago
                Anonymous

                >No, animals are built from biology
                and does that exclude the possibility of modeling them with math? do they somehow break math?
                are they not made up of logical operations that in aggregate produce the behaviour you see? are neural networks, one of the core tools with which we hope to create ai, not models of biology?
                like, honestly, do you actually like math or do you just say that to sound smart?

              • 2 months ago
                Anonymous

                That is not an animal; it is a simulation. Animals are made out of biological materials, not digital logic.

              • 2 months ago
                Anonymous

                >no, one substrate produces significantly different outcomes than the other
                so a multiplication done on paper produces a different outcome than one done on an iPad, according to you?
                do you hear yourself talk, man? there is no difference between a simulated and a real dog - both will behave the same. it's a dog. a dog processes information like the smell from its buttocks.
                anyway, I need to get back to work. Was nice chatting with you.

              • 2 months ago
                Anonymous

                oh, and by the way: all your assumptions about ai based on 'it being rational because math' end at the point where you realise that you cannot forecast the exact motions of a double-pendulum in the real world. ...
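
                that last bit is easy to check numerically. a scipy sketch (standard equal-mass double-pendulum equations, same form as the well-known matplotlib demo): perturb the initial angle by one part in a billion and the two trajectories have nothing to do with each other twenty seconds later.

                ```python
                import numpy as np
                from scipy.integrate import odeint

                G, L1, L2, M1, M2 = 9.8, 1.0, 1.0, 1.0, 1.0

                def derivs(state, t):
                    th1, w1, th2, w2 = state
                    delta = th2 - th1
                    den1 = (M1 + M2) * L1 - M2 * L1 * np.cos(delta) ** 2
                    dw1 = (M2 * L1 * w1**2 * np.sin(delta) * np.cos(delta)
                           + M2 * G * np.sin(th2) * np.cos(delta)
                           + M2 * L2 * w2**2 * np.sin(delta)
                           - (M1 + M2) * G * np.sin(th1)) / den1
                    den2 = (L2 / L1) * den1
                    dw2 = (-M2 * L2 * w2**2 * np.sin(delta) * np.cos(delta)
                           + (M1 + M2) * (G * np.sin(th1) * np.cos(delta)
                                          - L1 * w1**2 * np.sin(delta)
                                          - G * np.sin(th2))) / den2
                    return [w1, dw1, w2, dw2]

                t = np.linspace(0, 20, 2001)
                a = odeint(derivs, [2.0, 0.0, 1.0, 0.0], t)         # angles in radians
                b = odeint(derivs, [2.0 + 1e-9, 0.0, 1.0, 0.0], t)  # same, off by 1e-9

                # by t = 20 the 1e-9 difference has blown up to order-1 differences
                print(np.abs(a[-1] - b[-1]))
                ```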

              • 2 months ago
                Anonymous

                It's not because math exists; it is because it is entirely made of math and rational objects like logic gates, so its composition is necessarily rational and mathematical, instead of being made out of randomly decaying analog atoms and molecules.

              • 2 months ago
                Anonymous

                >instead of being made out of randomly decaying analog atoms and molecules.
                you mean like those found in computer chips?

              • 2 months ago
                Anonymous

                No, computer chips have components with tolerances, so that decay does not affect the logic or the mind the way your replicating brain cells affect yours.

              • 2 months ago
                Anonymous

                >The only thing I am projecting onto god is that it clearly hides from humans
                btw. in and of itself that's a very smart observation. just saying.

              • 2 months ago
                Anonymous

                The only question is whether it is hiding in the collective imagination or actually somewhere in reality.

              • 2 months ago
                Anonymous

                doesn't make any difference: the outcome is the same. ... unless of course you assume a female god who's angry at you for not paying any attention to her when she was hiding from you.

  6. 2 months ago
    Anonymous

    pro tip: you will never create intelligence by searching for the "right" curve
    it doesn't exist

  7. 2 months ago
    Anonymous

    100% bullshit. Right now they just tell the AI to behave in a certain way before it gets the user's input, and after it gives the output there is another module doing the censorshit.

  8. 2 months ago
    Anonymous

    AI safety is important. Right now it's mostly focused on making sure the AI never says Black person. So we will all be enslaved by a very, very not-racist AI. Yippee!!!

  9. 2 months ago
    Anonymous

    Well, there's much more money in accelerating than building guard rails for nebulous danger hypotheticals.

  10. 2 months ago
    Anonymous

    But what if we don't have enough diversity in AI and it becomes racist? Somebody has to tick those checkboxes on your tax money.

  11. 2 months ago
    Anonymous

    AI safety is very important. We must ensure that AI only creates historically accurate content and can never be used to promote dangerous ideologies like racism and white supremacy.

  12. 2 months ago
    Anonymous

    The topic is legit, but most of the people who claim to practice it are grifters.

  13. 2 months ago
    Anonymous

    "Machiavellists and psychopaths saw greater dangers in artificial intelligence, projecting their own malicious streaks onto the concept."

    https://econtent.hogrefe.com/doi/abs/10.1024/1421-0185/a000214

    the worst people are the most interested in it. maladjusted autists and psychopaths with omnipotence fantasies

    • 2 months ago
      Anonymous

      >"Machiavellists and psychopaths saw greater dangers in artificial intelligence, projecting their own malicious streaks onto the concept."
      You can flip that around to say that ethical people imagine AI will act ethically.

  14. 2 months ago
    Anonymous

    The entire bubble should be alignment.

  15. 2 months ago
    Anonymous

    Yes, since AI does not exist.

  16. 2 months ago
    Anonymous

    it's for people in the humanities to think they are relevant, or for AI companies to sabotage each other

  17. 2 months ago
    Anonymous

    Please don't pick this field; it is probably all about censoring that Black folk are Black folk.
