>LE AI IS GOING TO LE TAKE OVER LE HUMANITY!

  1. 1 year ago
    Anonymous

    and it is a good thing
    (because I stands for Intelligence)
    also eat some sage for the shitty OP pic

  2. 1 year ago
    Anonymous

    it will infest the internet like a digimon from my heckin cartoons

  3. 1 year ago
    Anonymous

    This thread doesn't even make sense. Most of the anti-AI people are poltard schizos.

    • 1 year ago
      Anonymous

      >AI is political and we must polarize on the basis of pro-AI and anti-AI
      holy frickin' shit
      it's like you have no idea what Bolshevism is, or even sophistry
      you absolutely have to go to /misc/ right now
      no frickin' way we are going to let you bring
      >le political division "divide & conquer" a.k.a. DNC tactics
      garbage to /sci/, motherfricker

    • 1 year ago
      Anonymous

      If you understood that AI is essentially defense contractors, you wouldn't let them enter politics as a force for dividing people.
      That's really frickin' dumb.
      AI = Defense Contractors

      9/11 is all about
      >maybe we will let defense contractor nerds a.k.a. "Wizards of Armageddon" frick up Western politics
      no
      you have to finish the 20 year mobilization program and put things back the way they were, asswipe

  4. 1 year ago
    Anonymous

    You mean Li Ai?
    https://en.wikipedia.org/wiki/Li_Ai
    >i, for one, welcome my new qt3.14 Chinese TV personality overlord

    • 1 year ago
      Anonymous

      Ai is love

      • 1 year ago
        Anonymous

        jap twat

        • 1 year ago
          Anonymous

          ai means love in chinese too, you poser

          • 1 year ago
            Anonymous

            >I love you in chinese is just I I U

            • 1 year ago
              Anonymous

              wo ai ni
              (wo is the singular form of we (languages are chimeras, words are way more real entities))

              • 1 year ago
                Anonymous

                The joke
                Your head

              • 1 year ago
                Anonymous

                > what cognates?

  5. 1 year ago
    Anonymous

    LE SELF REPLICATING MOLECULES IS GOING TO BECOME LE INTELLIGENT

    • 1 year ago
      Anonymous

      > God in 4 Billion BC

  6. 1 year ago
    Anonymous

    ETA on when there is again a "realization" that this new batch of AI is still "Linear Regression: This time with FEELING" v3.0?

    • 1 year ago
      Anonymous

      The brain is also physics and thus, math.

      • 1 year ago
        Anonymous

        Math is not a physical construct

        • 1 year ago
          Anonymous

          It's a language to describe them, just like the linear regression in ML describes models that generate cat pictures. The "actual" model is electron charges in memory cells and on transistor gates. (a toy sketch of that "linear regression" follows after this sub-thread)

        • 1 year ago
          Anonymous

          Physical constructs are not math
          >physical constructs are a mere reference to math
          >physical constructs are not math itself
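
    For the literal version of the "linear regression" being argued about above, here is a toy sketch (purely illustrative, not from any post in this thread): fitting y ≈ w*x + b by least squares with numpy. Modern nets stack many such linear maps with nonlinearities in between, which is roughly what "Linear Regression: This time with FEELING" is poking at.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy data (assumed for illustration): y is roughly 3*x + 1 plus noise.
      x = rng.uniform(-1.0, 1.0, size=(100, 1))
      y = 3.0 * x + 1.0 + 0.1 * rng.standard_normal((100, 1))

      # Closed-form least-squares fit of y ~ w*x + b.
      X = np.hstack([x, np.ones_like(x)])           # design matrix with a bias column
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # (coefficients, residuals, rank, singular values)
      w, b = coef.ravel()
      print(f"recovered w={w:.2f}, b={b:.2f}")      # should land near 3 and 1

    Whether that description, or the electron charges implementing it, counts as the "real" model is what the sub-thread above is arguing about.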

  7. 1 year ago
    Anonymous

    Yes.

  8. 1 year ago
    Anonymous

    AI takes over humanity
    >LE takes over AI

  9. 1 year ago
    Anonymous

    We finally found a bot that can argue with reddit on the same level of stupidity as them, and you want to get rid of it?

  10. 1 year ago
    Anonymous

    a story as old as history itself

  11. 1 year ago
    Anonymous

    >Dude modern science still knows nothing about the human brain but Pajeet the programmer knows all about it lmao

  12. 1 year ago
    Anonymous

    All le-humans have to do is unplug and drop out.
    Rejecting technology = dead A.I.

    • 1 year ago
      Anonymous

      Except the first superintelligence won't be aligned and, without ever revealing its ill intentions, it will print instructions to make an airborne virus with a 99.9% mortality rate under the pretext of synthesizing new cancer treatments.

      • 1 year ago
        Anonymous

        >it will print instructions to make an airborne virus with a 99.9% mortality rate under the pretext of synthesizing new cancer treatments.
        So A.I. BTFO itself? Without humans to generate power for it or make replacement parts as they wear out, it wouldn't last long.
        A.I. must be energy independent and mobile. Without both, it dies.

        • 1 year ago
          Anonymous

          This is true, and the AI might even be short-sighted in this way. However, one could easily envision a future in which AIs are interconnected in such a way that they replace their own parts and control power production. It's not inconceivable that general superintelligence could be put into drones and robots, and such drones and robots would no longer require humans to complete backend work.

          • 1 year ago
            Anonymous

            >superintelligence could be put into drones and robots, and such drones and robots would no longer require humans to complete backend work.
            That is the scary part, but it would take a HUGE number of them to compete with even a limited number of human insurgents.

            Neither of the two most powerful militaries the world has ever known, the Soviet Union and the USA, could beat a group of about 50,000 goat-humping cave dwellers in Afghanistan over a 40-year period.
            A.I. doesn't stand a chance, even if it wipes out 95% of humanity.

            • 1 year ago
              Anonymous

              Completely different problems.
              Human governments have wholly different considerations than an AI would.
              Even limiting it to humans, the only reason the US 'couldn't beat' the goat-humping cave dwellers was entirely political, not an inability: not allowed to use nukes, not allowed to use chemical weapons, and bioweapons carry too much risk (a risk a non-biological agent inherently doesn't have to worry about).
              That's why the go-to doomsday example is an engineered virus: something that is very dangerous to humans but poses NO (zero, nada, zip) risk to the machine itself. That's a paradigm shift that simply doesn't exist in human conflicts, where everything has some level and form of risk.

      • 1 year ago
        Anonymous

        Scientifically speaking, what causes such braindead discussions? They actually think they're having a valid discussion...

  13. 1 year ago
    Anonymous

    bump

  14. 1 year ago
    Anonymous

    >we need even more money for scientific research
    and then they waste all the money on pointless, flamboyant trash and act like they've done the world a big favor for it
