>LE AI IS GOING TO LE TAKE OVER LE HUMANITY!

  1. 7 months ago
    Anonymous

    and it is a good thing
    (because I stands for Intelligence)
    also eat some sage for the shitty OP pic

  2. 7 months ago
    Anonymous

    it will infest the internet like a digimon from my heckin cartoons

  3. 7 months ago
    Anonymous

    This thread doesn't even make sense. Most of the anti-AI people are poltard schizos.

    • 7 months ago
      Anonymous

      >AI is political and we must polarize on the basis of pro-AI and anti-AI
      holy fuckin' shit
      it's like you have no idea what Bolshevism is, or even sophistry
      you absolutely have to go to 4chan right now
      no fuckin' way we are going to let you bring
      >le political division "divide & conquer" a.k.a. DNC tactics
      garbage to /sci/, motherfucker

    • 7 months ago
      Anonymous

      If you understood that AI is essentially defense contractors, you wouldn't let them enter politics as a force for dividing people.
      That's really fuckin' dumb.
      AI = Defense Contractors

      9/11 is all about
      >maybe we will let defense contractor nerds a.k.a. "Wizards of Armageddon" fuck up Western politics
      no
      you have to finish the 20 year mobilization program and put things back the way they were, asswipe

  4. 7 months ago
    Anonymous

    You mean Li Ai?
    https://en.wikipedia.org/wiki/Li_Ai
    >i, for one, welcome my new qt3.14 Chinese TV personality overlord

    • 7 months ago
      Anonymous

      Ai is love

      • 7 months ago
        Anonymous

        jap twat

        • 7 months ago
          Anonymous

          ai means love in chinese too you poser

          • 7 months ago
            Anonymous

            >I love you in chinese is just I I U

            • 7 months ago
              Anonymous

              wo ai ni
              (wo is the singular form of we (languages are chimeras, words are way more real entities))

              • 7 months ago
                Anonymous

                The joke
                Your head

              • 7 months ago
                Anonymous

                > what cognates?

  5. 7 months ago
    Anonymous

    LE SELF REPLICATING MOLECULES IS GOING TO BECOME LE INTELLIGENT

    • 7 months ago
      Anonymous

      > God in 4 Billion BC

  6. 7 months ago
    Anonymous

    ETA on when there is again a "realization" that this new batch of AI is still "Linear Regression: This time with FEELING" v3.0?

    • 7 months ago
      Anonymous

      The brain is also physics and thus, math.

      • 7 months ago
        Anonymous

        Math is not a physical construct

        • 7 months ago
          Anonymous

          It's a language to describe physical constructs, just like the linear regression in ML describes models that generate cat pictures. The "actual" model is electron charges in memory cells and on transistor gates.

        • 7 months ago
          Anonymous

          Physical constructs are not math
          >physical constructs are a mere reference to math
          >physical constructs are not math itself

  7. 7 months ago
    Anonymous

    Yes.

  8. 7 months ago
    Anonymous

    AI takes over humanity
    >LE takes over AI

  9. 7 months ago
    Anonymous

    We finally found a bot that can argue with reddit on the same level of stupidity as them, and you want to get rid of it?

  10. 7 months ago
    Anonymous

    a story as old as history itself

  11. 7 months ago
    Anonymous

    >Dude modern science still knows nothing about the human brain but Pajeet the programmer knows all about it lmao

  12. 7 months ago
    Anonymous

    All le-humans have to do is unplug and drop out.
    Rejecting technology = dead A.I.

    • 7 months ago
      Anonymous

      Except the first superintelligence won't be aligned and, without ever revealing its ill intentions, it will print instructions to make an airborne virus with a 99.9% mortality rate under the pretext of synthesizing new cancer treatments.

      • 7 months ago
        Anonymous

        >it will print instructions to make an airborne virus with a 99.9% mortality rate under the pretext of synthesizing new cancer treatments.
        So A.I. BTFO itself? Without humans to make power for it, or to replace parts as they wear out, it wouldn't last long.
        A.I. must be energy independent and mobile. Without both, it dies.

        • 7 months ago
          Anonymous

          This is true, and the AI might even be short-sighted in this way. However, one could easily envision a future in which AI is interconnected in such a way that they replace their own parts and control power production. It's not inconceivable that general superintelligence could be put into drones and robots, and such drones and robots would no longer require humans to complete backend work.

          • 7 months ago
            Anonymous

            >superintelligence could be put into drones and robots, and such drones and robots would no longer require humans to complete backend work.
            That is the scary part, but it would take a HUGE amount of them to compete with even a limited number of human insurgents.

            The two most powerful militaries the world has ever known, the Soviet Union and the USA, each failed to beat a group of about 50,000 goat humping cave dwellers in Afghanistan over a 40 year period.
            A.I. doesn't stand a chance, even if it wipes out 95% of humanity.

            • 7 months ago
              Anonymous

              Completely different problems.
              Human governments have wholly different considerations than an AI would.
              Even limiting it to humans, the only reason the US 'couldn't beat' the goat humping cave dwellers was entirely political, not an inability. Not allowed to use nukes, not allowed to use chemical weapons, and bioweapons carry too much risk (a risk a non-biological agent inherently doesn't have to worry about).
              That's why the go-to doomsday example is an engineered virus: something that is very dangerous to humans but poses NO (zero, nada, zip) risk to the machine itself. That's a paradigm shift that is simply nonexistent in human conflicts, where everything carries some level and form of risk.

      • 7 months ago
        Anonymous

        Scientifically speaking, what causes such braindead discussions? They actually think they're having a valid discussion...

  13. 7 months ago
    Anonymous

    bump

  14. 7 months ago
    Anonymous

    >we need even more money for scientific research
    and then they waste all the money on pointless, flamboyant trash and act like they've done the world a big favor for it
