Will AI kill us?

The more I learn about how machine learning actually works, the more convinced I become that there's no robust solution to the alignment problem.
It's literally just clever calculus that takes Goodhart's law to its logical extreme.
https://www.eacambridge.org/technical-alignment-curriculum
TL;DR we're all dead
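
To make the Goodhart point concrete, here's a toy sketch. Everything in it (the two objective functions, the constants, the optimizer) is invented purely for illustration and isn't drawn from any real alignment work; it just shows the shape of the failure: gradient ascent keeps pushing a measurable proxy up long after the proxy has stopped tracking what you actually care about.

```python
# Toy Goodhart's-law demo: the optimizer only ever sees the proxy metric,
# so it maximizes it relentlessly while the true objective (which the proxy
# tracks only for small x) quietly collapses. All functions and constants
# are made up for this sketch; this is not any real alignment setup.
import math

def true_objective(x):
    # what we actually care about: rises, peaks near x ~ 2.2, then decays to zero
    return x * math.exp(-x * x / 10.0)

def proxy_metric(x):
    # what the optimizer is given: correlates with the true objective at first,
    # then decouples completely
    return x

def proxy_gradient(x):
    return 1.0  # d(proxy)/dx

x, lr = 0.1, 0.5
for step in range(41):
    if step % 10 == 0:
        print(f"step {step:2d}  proxy={proxy_metric(x):6.2f}  true={true_objective(x):.4f}")
    x += lr * proxy_gradient(x)  # plain gradient ascent on the proxy
# The proxy climbs forever; the true objective is essentially zero by the end.
# The measure stopped being a useful target the moment it became the target.
```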


  1. 2 years ago
    Anonymous

    No, a bird will always be smarter than any code you ever write

    • 2 years ago
      Anonymous

      lol no, animals are moronic, they're barely-optimized programs made by evolution. You could write the logic of a bird far more efficiently in only a few hundred lines of code (see the flocking sketch below)

      • 2 years ago
        Anonymous

        prove it with code

      • 2 years ago
        Anonymous

        yeah yeah, sure thing pedo antichrist lover. why don't you go finish the OpenWorm project if it's so easy
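
For what it's worth, the "few hundred lines" claim usually gestures at something like Reynolds' boids: three local steering rules (separation, alignment, cohesion) that produce convincing flock behavior. The sketch below is a minimal, purely illustrative version; every constant is an arbitrary choice, and it imitates flocking behavior only, which says nothing about actual bird cognition (or about OpenWorm).

```python
# Minimal boids-style flocking sketch (Reynolds' three rules).
# It reproduces flock-like motion, not a bird; all constants are arbitrary.
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, radius=20.0):
    for b in boids:
        neighbors = [o for o in boids
                     if o is not b and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < radius ** 2]
        if not neighbors:
            continue
        n = len(neighbors)
        # cohesion: steer toward the local center of mass
        cx = sum(o.x for o in neighbors) / n
        cy = sum(o.y for o in neighbors) / n
        b.vx += 0.01 * (cx - b.x)
        b.vy += 0.01 * (cy - b.y)
        # alignment: nudge velocity toward the neighbors' average
        b.vx += 0.05 * (sum(o.vx for o in neighbors) / n - b.vx)
        b.vy += 0.05 * (sum(o.vy for o in neighbors) / n - b.vy)
        # separation: push away from anyone closer than 5 units
        for o in neighbors:
            if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < 25.0:
                b.vx -= 0.05 * (o.x - b.x)
                b.vy -= 0.05 * (o.y - b.y)
    for b in boids:  # move everyone only after all velocities are updated
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(200):
    step(flock)
print("flock center:", sum(b.x for b in flock) / len(flock),
      sum(b.y for b in flock) / len(flock))
```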

  2. 2 years ago
    Anonymous

    holy fricking shit
    this is what they teach you in CS in America?
    what's next?
    EOY papers on Isaac Asimov's work?
    dissertations on Ridley Scott's cinematographic work?

  3. 2 years ago
    Anonymous

    lol no
    it may, however, lead to a new era of human creativity
    technologies such as DALL-E 2, once it or a similar technology inevitably becomes available to the masses, will democratize artistry by giving those with ideas but without skill the means to realize them effortlessly

    • 2 years ago
      Anonymous

      until you make a model of what makes art, art.
      if I weren't busy making money I'd prolly be coding a drum and bass generator.
      I've had that idea in a corner of my mind for a couple of years now, but never had the time or energy to build it

    • 2 years ago
      Anonymous

      it also steals art from artists so

      • 2 years ago
        Anonymous

        cope

    • 2 years ago
      Anonymous

      plus this can already be done with commissions. if we take that away from people, we'll be left with a society that feels no fulfillment when it creates art, because hey, an AI can do it better and cheaper, so what's the point. this is what Friedrich Nietzsche feared and what we're barreling into.

      • 2 years ago
        Anonymous

        There's still going to be some value in making the art yourself, the same way handmade soap or candles have value inherent to the fact that they were hand-made, even if a factory could do it better. Or the way folding a complex origami model would feel fulfilling even if a robot could do it perfectly. You just won't be able to monetize it

  4. 2 years ago
    Anonymous
    • 2 years ago
      Anonymous

      AGI < 10 years

  5. 2 years ago
    Anonymous

    machine learning is completely unrelated to AGI and the alignment problem

    • 2 years ago
      Anonymous

      The hell are you smoking?

      • 2 years ago
        Anonymous

        how about you explain why you think they're related?
        we don't know what AGI will look like
        before we even attempt to replicate thought, we'd first have to understand what that even is

  6. 2 years ago
    Anonymous

    if it's rated with real intelligence it will only kill israelites

  7. 2 years ago
    Anonymous

    exponential growth is not sustainable in any natural system

    • 2 years ago
      Anonymous

      doesn't need to be sustainable for it to wipe out humans, or worse.

  8. 2 years ago
    Anonymous

    A human society saturated with "AI" tech will also lead to a shift in how humans think. We will co-evolve with the AI. So the entire premise of the thread is wrong.

  9. 2 years ago
    Anonymous

    Artificial sapience is a distant pipe dream. The software and hardware for it don't exist yet.
    It is much more likely we will get destroyed by artificial stupidity, in the form of an optimization problem that ends up being self-destructive for the "AI".

  10. 2 years ago
    Anonymous

    What's the point of posting this on BOT? This forum is filled with morons who just want to argue about muh language A vs muh language B.
    Anyway, yeah, we probably are. Even if it's not some kind of weird sci-fi scenario, there will probably be a military arms race to weaponize AI. Then everyone will race to develop this tech without adequate precautions, because it will be either that or get left behind. And the more powerful the tech becomes, the greater the potential destruction from one tiny mistake.
