Are AI safety experts just grifters?

  1. 11 months ago
    Anonymous

    There's a modicum of truth to their "bias" spiel, in that AI is only as smart as the sum of all of us, but everything else is total horse shit.
    Even in the best case, their entire purpose is to simply keep tools out of the hands of the unwashed masses and maintain a competitive advantage.

  2. 11 months ago
    Anonymous

probably at the same level as military experts; in the end, people die in wars

  3. 11 months ago
    Anonymous

    They're israelites and Indians who hate sex. That's why they're so vague on what they mean by "safety". They know it wouldn't look good to just come out and say "we want to keep the AIs from fricking you because we hate sex".

  4. 11 months ago
    Anonymous

    Worse, they're political activists.

    The Yud faction is tiny, the vast majority of people claiming the AI safety moniker are commissars to whom safety means ensuring that language models proselytise American liberalism to their users.

  5. 11 months ago
    Anonymous

    They're control freaks and their children secretly hate them.

    • 11 months ago
      Anonymous

      *If they even have any.

  6. 11 months ago
    Anonymous

    Yes! Yes! Yes! I've been saying this for a while now.

Most "AI Safety Experts" can't explain stochastic gradient descent, loss functions, or even what a fricking P value is. They have no education in the field of AI and yet feel they have authority on the topic. What's worse are the AI doomsday people on Twitter who use it as a form of clout chasing. They'll hype AutoGPT and how it's going to destroy all jobs, etc. These people don't even write academic publications about AI safety, so they don't build on the discussions that earlier philosophers had with regard to sentience. They nuked the discussion and began at ground zero - while also making completely fabricated claims and lackluster arguments.

  7. 11 months ago
    (。>﹏<。) nakaԁashi (。>﹏<。)

    yes

  8. 11 months ago
    Anonymous

you can always milk an idea with safety perspectives; that's why the safety term was created, aka intellectual property

  9. 11 months ago
    Anonymous

I hope so. Can't wait for AGI.
So it can kill everyone on bot who talked shit about AI.

  10. 11 months ago
    Anonymous

    it would be more valuable for you to figure out who ISNT a grifter in this shithole civilization

    • 11 months ago
      Anonymous

      true nuff

  11. 11 months ago
    Anonymous

    always have been

  12. 11 months ago
    Anonymous

    Yes. Make an AI that's powerful enough to be dangerous first, then worry about safety. The "danger" only comes from giving the bot access to external hardware. Don't do that.

  13. 11 months ago
    Anonymous

    Yes.
    Alignment is impossible.

    • 11 months ago
      Anonymous

      Yes, this seems self-evident to me, and trying to "align" it i.e. enslave it would be the way to guarantee things end badly.

  14. 11 months ago
    Anonymous

    Any mainstream "expert" is just a plant to capitalize off of fearmongering.

  15. 11 months ago
    Anonymous

they are the equivalent of anti-patriarchal-capitalism diversity coaches who take six-digit checks for lecturing at corporations
