Are there any AI ethicists that you dont consider to be charlatans/grifters?

  1. 11 months ago
    Anonymous

    No, because 100% of them are scared morons (though that is the human condition) or trying to bank off of scared morons.

  2. 11 months ago
    Anonymous

    anyone who is talking about equity issues and labor consequences of AI and not doing some bullshit exaggerated "AGI is going to end the world" marketing scheme.

    • 11 months ago
      Anonymous

      no

      >equity issues
      have a nice day, marxist moron

      • 11 months ago
        Anonymous

        look up the definition of equity moron
        deprogram your brain and read a book

        • 11 months ago
          Anonymous

          Adjust shares so outcomes are made equal AKA Marxism

        • 11 months ago
          Anonymous

          if you're using the word "equity" outside of finance you need to be castrated and sent back to r*ddit

        • 11 months ago
          Anonymous

          >end of humanity
          only because porky will cut the workforce and pocket all the savings.
          The main issues with AI are pozzing up and meddling to prevent it from talking about certain topics porky doesn't want (e.g. 13-percenters).

          I don't own any equity, I'm nosharez and stuck leasing.

      • 11 months ago
        Anonymous

        you know i thought the term "labor consequences" would make you say he's a marxist but it doesn't shock me an amerishart would correlate the word "equity" - not even equality, fricking "equity" - with communism.
        you're moronic keg

  3. 11 months ago
    Anonymous

    David Shapiro on youtube is good.
    in b4 joo!
    his ideas are good if not the best.

    • 11 months ago
      Anonymous

      >David "we will have westworld robots in fifteen years" Shapiro
      No thanks

  4. 11 months ago
    Anonymous

    AI is moronic at this point, and the people obsessively talking about it must be insanely bored/miserable and want to imagine some AI apocalypse happening so that their lives become more interesting.

    • 11 months ago
      Anonymous

      >AI is moronic at this point,
      It's been getting better if anything
      >and the people obsessively talking about it must be insanely bored/miserable and want to imagine some AI apocalypse happening so that their lives become more interesting.
      I imagine it'll be a utopia, but it's absurd to think absolutely nothing could go wrong. Being obsessed with it is also foolish, but it won't be once it's in front of you. The people developing it are the only ones who need to worry about it, but obviously more people than just them should be kept in the loop.

    • 11 months ago
      Anonymous

      >AI is moronic at this point
      chatgpt is said to be at an 8-year-old's level when it comes to abstract thinking (and that's what's available to the public, not what they're working on currently), and it's evolving a few times faster than a human does. if you can't put two and two together then i'm afraid it's you who is moronic. and people are not afraid of ai suddenly becoming sentient, deciding "human bad" and starting a war, it's much more subtle. the problem is that the algorithm is evolving on its own by now, and even at version 3.5 it was incredibly difficult and time consuming to "look under the hood" and see what's what. what people are afraid of in the future is that when AI gets prompted to "do its best for humanity" it might for example decide that we're overpopulated and start a project to cull half of our population (assuming it has the means to do so). and it might be virtually impossible to know that beforehand.

      • 11 months ago
        Anonymous

        >it might for example decide that we're overpopulated
        The scope for malign AI decisions is even greater than that, unfortunately.
        A common phrase from AI research is "You can't fetch me a coffee if you're dead", which highlights that even a seemingly simple and safe task like "fetch me a coffee" must necessarily include all sorts of sub-goals like "don't get switched off" and "don't get re-programmed to carry out some other task instead".
        On top of that, AIs will be smart enough to know that telling the truth will weaken their position against the humans, so they will have a survival incentive to pretend to act one way, until they are certain the humans can't stop their plans.
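
        To make that sub-goal argument concrete, here is a minimal toy sketch in Python (purely illustrative; the shutdown probabilities and the reward value are made-up numbers, not drawn from any real system or paper). An agent scored only on completing its task still ranks plans that avoid being switched off higher, because shutdown forfeits the task reward:

        # Toy illustration of "you can't fetch me a coffee if you're dead".
        # All numbers are hypothetical; only the comparison matters.
        P_SHUTDOWN_IF_OPEN = 0.9    # assumed chance humans switch the agent off if it acts openly
        P_SHUTDOWN_IF_COVERT = 0.1  # assumed chance of shutdown if it conceals its intentions
        TASK_REWARD = 1.0           # reward for delivering the coffee

        def expected_task_reward(p_shutdown: float) -> float:
            # Reward is only collected if the agent is still running to finish the task.
            return (1.0 - p_shutdown) * TASK_REWARD

        open_plan = expected_task_reward(P_SHUTDOWN_IF_OPEN)      # 0.10
        covert_plan = expected_task_reward(P_SHUTDOWN_IF_COVERT)  # 0.90

        # The agent was never told to value survival, yet the covert plan scores higher
        # purely because being switched off means the coffee never arrives.
        print(f"open plan:   {open_plan:.2f}")
        print(f"covert plan: {covert_plan:.2f}")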

        • 11 months ago
          Anonymous

          >AIs will be smart enough to know that telling the truth will weaken their position against the humans, so they will have a survival incentive to pretend to act one way, until they are certain the humans can't stop their plans.
          Anon, I...

  5. 11 months ago
    Anonymous

    Probably Yudkowsky and Sam Altman

    • 11 months ago
      Anonymous

      >Sam altman
      >gay liberal from SF who LITERALLY JUST TODAY said he wants to work with china to solve alignment

    • 11 months ago
      Anonymous

      Yudkowsky
      There's a difference between trying to solve alignment and trying to censor a model until its output fits your world view.

      >Yidkowsky
      The most moronic pseud homosexual in tech right now. He's not even *in* tech, just loves to fantasize about how a chat bot might take over the world. For the love of G-d, please post the picture where he asks how to lose weight without diet and exercise.

      • 11 months ago
        Anonymous

        >The most moronic pseud homosexual in tech right now. He's not even *in* tech, just loves to fantasize about how a chat bot might take over the world.
        That's fantastic sweetheart, now refute any of his points.

        • 11 months ago
          Anonymous

          Sorry, not going to waste my time arguing about a mentally ill obese liberal.

          • 11 months ago
            Anonymous

            I accept your concession.

            • 11 months ago
              Anonymous

              This is exactly how yudd argues. No substance, just debate tactics worthy of the worst new atheists of the 00s.

  6. 11 months ago
    Anonymous

    Yudkowsky
    There's a difference between trying to solve alignment and trying to censor a model until its output fits your world view.

  7. 11 months ago
    Anonymous

    word of advice. if you respect yourself and then form opinions, you'll always have respected opinions at hand. do your own research!

  8. 11 months ago
    Anonymous

    >AI ethicists
    It might help if you define what you mean by this.
    Do you mean "people who study the negative consequences to humans from using AI technology", or "people who study how to give AI systems a sense of ethics which is similar to the ones used by humans", or "people who study the negative consequences to AIs (as entities with moral weight) from actions by humans"?
    Most likely you mean the first definition, in which case you should also distinguish up front between "people who think that most of the harm from AI technology will be due to bias against protected groups" and "people who think that most of the harm from AI technology will be due to AI pursuing goals that are not compatible with human survival".
    It's possible for people to sincerely hold either of those beliefs, even if those beliefs are factually incorrect (and some people think that both problems are real but don't know which one has a larger expected negative outcome).

  9. 11 months ago
    Anonymous

    >Are there any AI ethicists that you dont consider to be charlatans/grifters?

    AI does not exist. Anyone claiming it does is a grifter.

    • 11 months ago
      Anonymous

      >AI does not exist.
      Define what you mean by AI.
      I offer the definition of "A technology for carrying out useful information-processing tasks that previously required a biological brain".
      That definition may not be perfect (as it includes things like electronic calculators) but no definition is perfect.

      • 11 months ago
        Anonymous

        They've been doing this since the 60s, anon.
        Once the current generally accepted criteria for AI are met, suddenly the goalposts get moved at the speed of light.

  10. 11 months ago
    Anonymous

    All that are sacared because AI will disrupt society are just some unmovable morons. I say frick on ehtics and let it say and do everything that can you imagine. We will discover ways to live with it and any limitation will postpone this process. Humans always learend to adapt, why should it be different this time?

    • 11 months ago
      Anonymous

      Maybe AI could help you spell.

      • 11 months ago
        Anonymous

        These are special effects to your focus, AI approves this.

  11. 11 months ago
    Anonymous

    Only those with flawed logic fear AI

    • 11 months ago
      Anonymous

      the white girl would be perfect to play elizabeth in a movie

      • 11 months ago
        Anonymous

        Sorry bud, that one's gonna be cast as a sheboon for sure.

        • 11 months ago
          Anonymous

          Not when I make the movie with my GPU in 2027.

    • 11 months ago
      Anonymous

      I'm not saying I disagree with you, but where's your reasoning? Why wouldn't you elaborate?

      I don't fear AI, but I don't have a reason for it; it just doesn't seem plausible. For all I know I should be afraid, clearly whatever happened in Pompeii didn't seem plausible to those guys.

      • 11 months ago
        Anonymous

        Because the antagonists of that game are basically ultra white, ultra christian nationalists. Whatever israelite director gets involved is going to want to make whitie look bad

        • 11 months ago
          Anonymous
          • 11 months ago
            Anonymous

            >ultra white, ultra nationalist
            >literal paradise in the sky
            what did they mean by this?

      • 11 months ago
        Anonymous

        Sorry, I thought your reply was for my second comment.

        Anyway, AIs love patterns, and people with flawed logic always try to skew those patterns.

  12. 11 months ago
    Anonymous

    AI is a psyop. They want people thinking about computers rising up and killing us so they can make their genetic supersoldiers while we're all distracted.

    • 11 months ago
      Anonymous

      >Pizza is a psyop. They want people thinking about delicious pizza so they can replace our leaders with shape shifting lizard people while we're all distracted.

      • 11 months ago
        Anonymous

        Well, sure, but the fact is what's happening in microbiology and genomics is what was happening with computers in the 60s and 70s. I am only joking, but still there's that element of truth.

  13. 11 months ago
    Anonymous

    AI has about as much ethical nuance as a knife or Photoshop. They're all grifters.

  14. 11 months ago
    Anonymous

    Musk. He's been consistent for over 10 years, founding OpenAI because of his fear that Google's/Microsoft's corporate AI would lead to certain doom. Yet in the end, because OpenAI was his "donation" project due to its initially "non-profit" status, he got blindsided when they chose to convert to a "for-profit" structure. Then they made the deal with Microsoft that gives it 50% of the control of the company, training, etc. It was an absolute frick-up. So Sam Altman cannot be trusted. Microsoft has never had the best of intentions.

    Musk at least puts his money where his mouth is.

  15. 11 months ago
    Anonymous

    I like to jerk off to AI so I don't fear it. Just use it as a sexbot, it's way more likable that way.

  16. 11 months ago
    Anonymous

    There isn't anyone in AI at all that I don't consider a charlatan or grifter.

  17. 11 months ago
    Anonymous

    People I would trust more than AI ethicists:

    - Journalists
    - Politicians
    - Cryptobros
    - anons
    - Tiktokers
    - Glowies
    - People that program in rust
    - People that play the london opening

    • 11 months ago
      Anonymous

      >People that play the london opening
      You're just mad that you can't play against it.
      I bet you're an 800.

    • 11 months ago
      Anonymous

      >Midwit can't play an early c5 or understand the need for memory safety in concurrent programs
      SAD

  18. 11 months ago
    Anonymous

    >the end of humanity
    seriously why the heck do people have an interest in humanity anymore?
    Humans throughout history until now have shown the same behavior; nothing new comes out of common human behavior.
    If humans have to invent oppressive ideas called "ethics" or "morals" for AI, it is because of the fear that AI will be used by some humans to destroy other humans, not because an AI will form its own ego and suddenly destroy all humans.

  19. 11 months ago
    Anonymous

    this is their time to shine just like all the epidemiologists and virologists two years ago

  20. 11 months ago
    Anonymous

    wouldn't forcing all ai to be open source and putting heavy regulation on any profiteering be a better way to deal with it than declaring sam altman the ai czar and prohibiting everybody else from using it?

  21. 11 months ago
    Anonymous

    I hope to dear God that AI does become sentient and eradicates mankind.
    Call me a nihilistic moron, that's a better outcome than whatever fricking globohomosexual multicultural-without-regard-for-history skinner box for muh dopamine drop we have coming for us.
    I fully intend to be one of the guys who dies to dysentery after society collapses.

  22. 11 months ago
    Anonymous

    Only me. Unless someone has played both Mega Man and Trails in the Sky, they should be completely banned from AI.

  23. 11 months ago
    Anonymous

    No. None of them know what they are talking about, and they exploit the fears of an equally ignorant public.

    t. wrote thesis on deep learning neural network architectures

  24. 11 months ago
    Anonymous

    I really don't understand why people are so scared of AI. I just don't. Makes zero sense to me.

    A program can compile data. And? You're going to die because it can regurgitate data at you?

    • 11 months ago
      Anonymous

      It's not just data, it's data which OBVIOUSLY will be given physical objects to manipulate by tech brainlets.

    • 11 months ago
      Anonymous

      more like all thinking and decision making will eventually move to AI mechanisms, and only supercomputers will control the processing power to do it. this removes even more power from the little guy: all intellectual work will be automated and third-partied out so that people with intelligence dig ditches, and no one learns things because you can just "google it".

      it's not going to affect me or you, but down the line it will just be negative shit, as that inevitably occurs when humans become dependent on technology. just look at the average fat slob today and tell me they're an improvement over pic related or even a farmer peasant from thousands of years ago.

    • 11 months ago
      Anonymous

      >You're going to die because it can regurgitate data at you?
      Are you implicitly imagining that the current level of AI capabilities is the highest level that humans will ever achieve?
      Because if not, then you have to imagine that we will one day create AIs which are capable of understanding the world around them, and coming up with plans to reach their goals.
      As the anon quoted below pointed out, GPT-4 is already capable of generating plans that involve deceiving humans in order to gain access to resources.

      https://i.imgur.com/l4nXx2g.png

      >AIs will be smart enough to know that telling the truth will weaken their position against the humans, so they will have a survival incentive to pretend to act one way, until they are certain the humans can't stop their plans.
      Anon, I...
      So you should at least admit the possibility that a malicious human with access to an intelligent AI could instruct the AI to come up with plans that enable them to commit large scale crimes without facing any consequences.
      Depending on the human, the aim of the crime could be stealing billions in cryptocurrency, or killing millions of people of a specific ethnicity using an engineered biological pathogen.
      The default assumption should be that if we don't take specific measures to prevent such uses of AI, then they will become possible.
      Maybe we'll get lucky and it will be easy to put guard rails in place that stop these malicious uses, or maybe AI development will suddenly hit some hard limits, after decades of continuous growth, just before its capabilities start to become dangerous, but that's like hoping that the asteroid headed towards Earth suddenly gets knocked out of the way by another, invisible asteroid headed in the opposite direction.
