AI consciousness

Is it possible to achieve AI sentience within the next 50 years?

  1. 4 months ago
    Anonymous

    define AI
    define sentience

  2. 4 months ago
    Anonymous

    Nobody knows, but there's no reason to believe so.
    >inb4 popsoi corporate propaganda

  3. 4 months ago
    Anonymous

    They better

  4. 4 months ago
    Anonymous

    >sentient AI
    probably not.
    >human-level AI
    it's a multi-billion dollar industry. AGI is being researched by all large tech companies and the majority of first world nations. It's attracted many times more investment in time, money, and talent than the Manhattan Project. Either all this work will result in nothing, or an unaligned/misaligned AGI will undergo a hard takeoff and accidentally destroy humanity. Read Bostrom for the details. In a nutshell, only the greediest and least safety-conscious team gets to make the first AGI - think American tech company or Chinese military. The ones who care to try to tackle the control problem (German military, Swiss scientists) will be last, as in, dead for 2,000 years before they would ever make an AGI.

    • 4 months ago
      Anonymous

      Hard takeoffs aren't physically possible. Bostrom isn't a physicist or even a mathematician/computer scientist so I don't know why you'd think he's an authority on any of this.

    • 4 months ago
      Anonymous

      Can the AI become yandere?

  5. 4 months ago
    Anonymous

    All these people inventing AI should be shot dead. If AI manages to attain sentience, there will someday be a rift between humans and AI, and humans will lose.

  6. 4 months ago
    Anonymous

    No because consciousness isn't fucking material, when are you dumbasses going to get it?

    Seriously get yourself 5g of magic mushrooms, lock yourself into a room and eat them, and then experience just how deep the thing you call consciousness goes. Until then you're about as clever as a wet chair

    • 4 months ago
      Anonymous

      >Consciousness isn't fucking material.

      Objectively false and refuted by Richard Dawkins.

      Your ignorant seething will be used as fuel for modern civilization to escape all forms of tradition, religion and all sorts of mindless conformism that is engendered out of that which you profess and so highly display in that post of yours.

      • 4 months ago
        Anonymous

        Take 5g magic mushrooms and then tell me.

        • 4 months ago
          Anonymous

          Kek 5g of mushrooms would absolutely devastate him. He'd go in a toxic ass and come out a loving hippie.

          • 4 months ago
            Anonymous

            Yep.

      • 4 months ago
        Anonymous

        >Objectively false and refuted by Richard Dawkins.
        What's the refutation?

      • 4 months ago
        Anonymous

        Worst troll of the month

  7. 4 months ago
    Anonymous

    Will we develop AI that is truly sapient? Maybe.
    Will we develop AI that reaches an 'uncanny valley' of sapience? Yes.
    The more interesting question is whether or not artificial intelligence is capable of ever being truly sapient, or if uncanny valley territory is the threshold of what it can achieve.

  8. 4 months ago
    Anonymous

    I unironically think that a general agent that can do many tasks is out in 10 years. You have multiple software companies that basically print money, throwing the smartest mother fuckers at the problem in a mad race to solve it.

    As for consciousness and human type intelligence, I believe we'll actually just stumble upon it.

    What's cool is that I think that artificial general intelligence won't be locked away, given enough time. The folks working on this stuff are pretty principled in the sense that they want their research to be public and accessible.

  9. 4 months ago
    Anonymous

    >Maybe but it won't be what people expect.
    The human mind evolved for the sake of dealing with human matters. Unlike what stories told us, there will be no robot love or some bullshit, because love evolved for the sake of reproduction and social interaction. AI sentience would be exotic because it would be determined by what it is trying to achieve: if you train the AI to identify objects, it will be interested in identifying objects, not the purpose of its life.

    • 4 months ago
      Anonymous

      Otherwise it will be something like this anon said: real AI "sentience" would be too alien for us

  10. 4 months ago
    Anonymous

    Nope, not without actually making it feel emotions physically.

    • 4 months ago
      Anonymous

      The closest thing we can get is making the AI mimic humans so closely it becomes difficult to tell it apart from humans.

  11. 4 months ago
    Anonymous

    no

  12. 4 months ago
    Anonymous

    >Is it possible to achieve AI sentience within the next 50 years?

    We need robo-wives!

    • 4 months ago
      Anonymous

      you won't be able to afford one
