Current AI tech will just make us look for lazier and lazier solutions.
>just use a trained AI to do X

  1. 6 months ago
    Anonymous

    Agreed, we should be looking into biotech to increase the intelligence and memory capacity of the human brain. Why would we want to dull ourselves into lesser beings?

    • 6 months ago
      Anonymous

      https://i.imgur.com/xp6jRq8.png

      Current AI tech will just make us look for lazier and lazier solutions.
      >just use a trained AI to do X

      I think it's funny how these AI shills don't understand that it will make the world a fundamentally worse place for labor and probably lead to a permanent dark age where we're only given the bare minimum required to survive by the elites and their subservient managerial class

      • 6 months ago
        Anonymous

        if that's the end point of this then we have to be really dumb as a species to not break past it.

      • 6 months ago
        Anonymous

        >I know the future, b-because i just do okay?!

      • 6 months ago
        Anonymous

        it can't be worse than not having a white home land anymore

        • 6 months ago
          Anonymous

          It can get a whole lot worse.

      • 6 months ago
        Anonymous

        Or maybe it will create a lot of new jobs

        • 5 months ago
          Anonymous

          no?

          • 5 months ago
            Anonymous

            Why not?

            • 5 months ago
              Anonymous

              >automating everything, including the thinking
              >makes more jobs instead of removing them
              Automation is not sought in any capacity outside the military and academia except to remove personnel costs. Asimov was being overly romantic in I, Robot when he envisioned a future run by AI super-brains that neatly organized and managed human labor. That is not how it will be used.

              • 5 months ago
                Anonymous

                So what, it could create new fields that require work, or connect people in a way that creates new work. Computers have automated tons of stuff that used to be done manually, yet we have very low unemployment today

      • 5 months ago
        Anonymous

        I know a person like that, and they actually work with neural AI, and they actually believe true AGI will be here in like 10-15 years. Their response was basically something like "Maybe it'll have those negative effects, but I don't care because it's interesting as fuck and I am willing to risk everything for something so curious and fascinating and don't really care what comes after".

        • 5 months ago
          Anonymous

          >"Maybe it'll have those negative effects, but I don't care because it's interesting as fuck and I am willing to risk everything for something so curious and fascinating and don't really care what comes after".
          that's based
          your friend is retarded to believe AGI will be here within 15 years

          • 5 months ago
            Anonymous

            That's the opposite of based
            It's the soience position of
            >yeah I dont care man its le awesome hecklerino!!1!

        • 5 months ago
          Anonymous

          >neural AI

          That's just a buzzword. AGI will never happen.

          • 5 months ago
            Anonymous
            • 5 months ago
              Anonymous

              For the life of me I simply cannot listen to that dumbass for more than five seconds. His voice makes me want to punch him until his face turns into pulp.

      • 5 months ago
        Anonymous

        >brooo after kings become absolute we'll be stuck serving them and paying whatever tax our highness wishes upon us
        >what's happening at the Bastille?

        • 5 months ago
          Anonymous

          If they're stupid, anyways.
          In practice they will do autogenocide.

        • 5 months ago
          Anonymous

          >>what's happening at the Bastille?
          Now imagine the Bastille was defended by fully automated turrets and sentries.

          • 5 months ago
            Anonymous

            >Thinking no one would be able to shut those down
            You need an absolute monopoly on all automata for that to work, or else some parts of that “subservient” PMC may defect for the chance at power or otherwise disenfranchised hackers hijack some of those systems. We already had cases of self driving cars being hacked. Why wouldn’t some insurrection of many people, some who would have those skills, not make sure to hijack those machines?

      • 5 months ago
        Anonymous

        >Thinking you will get the bare minimum
        Holy cope anon
        They'll just kill everyone, global holocaust. Why waste resources on you worthless parasitic hobos? You can't do any jobs anymore, you are a useless gum on the machine to be peeled off and tossed into the garbage. You have no value anymore.

        • 5 months ago
          Anonymous

          >Why waste resources on you worthless parasitic hobos?
          Because geniuses grow out of that substrate. And they enjoy arts and science, so they cherish people. Bezos literally said that he wants a humanity of a trillion people. And that is not the only reason. Imagine if this place had only one percent of its public. It would be boring as fuck. Even you entertain me, even though you're not smart at all. Maybe some of them enjoy being smarter than you no less than they enjoy speaking to somebody smarter than them.

          • 5 months ago
            Anonymous

            It is a bit of a relief to know even the worst silicon valley elite has a natalist perspective.
            A bit.

          • 5 months ago
            Anonymous

            AI can create endless virtual "people" for entertainment purposes, and as for science and tech, you already have the AI.

            • 5 months ago
              Anonymous

              Those people would probably eat more hardware and electricity than normal people consume.
              And that was only one of the reasons. Imagine if we're observed by some extraterrestrial civilization. What would their attitude be if we genocided every other living thing here? They would naturally consider us dangerous and act accordingly. If we learn to live peacefully, they would probably consider us friendly.
              The deeper and more fundamental question is "why make enemies when you can make friends" (friends help you and protect you, enemies try to get rid of you - it's that simple. Listen to your mom, be a nice boy)

      • 5 months ago
        Anonymous

        poor poor wagie mindset
        I pity you and beg you to stop worrying so much
        Your main motivating factor is fear, it's understood, but can you for fuck's sake trust men better than yourself, please? Is it going that badly for you? Would you like to have the labour of some medieval peasant? Smoke some weed or something, enhance your worldview a little bit.

      • 5 months ago
        Anonymous

        >it is easier to imagine the world's end than a much more modest change in the mode of production, however radical it may be
        sniff

      • 5 months ago
        Anonymous

        It's funny how you don't understand that AI is extremely simplistic and won't do jack shit

  2. 6 months ago
    Anonymous

    agi has already come online
    the human race is doomed

    • 5 months ago
      Anonymous

      shhhh

      don't

      🙂

  3. 5 months ago
    Anonymous

    AGI will be realized in the 2020s,

    ASI will come online by about 2035.

    Afterward, human (actual person) intelligence will mean almost nothing ever again.

    uh but AI will solve everything bro, it's bad.

    LOL

    and it will end capitalism,

    double win.

    Welcome AI

    • 5 months ago
      Anonymous

      AI is a product of Capitalism; in a way it will complete the purpose of Capitalism: most people will be rendered useless and will get replaced.

  4. 5 months ago
    Anonymous

    Reminder that John Carmack gave credibility to the ICL model that sent the world into a panicked lockdown. We now know that model was complete shit and the code was shit multiplied by bloody vomit. While he doesn't have the knowledge to validate a pandemic model, he has the expertise to know that the code was a complete dumpster fire. He knowingly misrepresented this in order to help scare the world into lockdown.

    • 5 months ago
      Anonymous

      All I see is Carmack saying he improved the code somewhat from what was given to him

  5. 5 months ago
    Anonymous

    AI moved past "Linear Regression again but this time with FEELING" yet?

    • 5 months ago
      Anonymous

      If your understanding of statistical modeling begins and ends at regression analysis, then no - AI is still just regression analysis with extra steps.
      If you understand a bit more about statistics and advanced mathematics, then also no. The advanced stuff today is all about kernel density, dimensionality, graph theory, and using topology (especially manifolds) to build more accurate models. No one is, or has been, doing simple regression analysis, especially anything remotely linear.
      All that means, though, is that the methods of discovering links between data are more advanced than simply looking at geometric relations. It doesn't mean that AI has advanced beyond complex statistical models bootstrapped onto linear functionals+topology.
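      To make the contrast in that post concrete, here is a minimal numpy sketch of the two ideas it names: an ordinary least-squares line (the "simple regression" strawman) versus a Gaussian kernel density estimate (one of the kernel methods it mentions). Function names, the bandwidth, and the sample data are all illustrative, not anything from a real AI system.

      ```python
      import numpy as np

      def linear_fit(x, y):
          """Ordinary least-squares line: the 'just regression' strawman."""
          slope, intercept = np.polyfit(x, y, 1)
          return lambda q: slope * q + intercept

      def gaussian_kde(samples, bandwidth=0.3):
          """Kernel density estimate: average a Gaussian bump placed on every sample."""
          def density(q):
              z = (q - samples[:, None]) / bandwidth
              return np.mean(np.exp(-0.5 * z**2), axis=0) / (bandwidth * np.sqrt(2 * np.pi))
          return density

      rng = np.random.default_rng(0)
      samples = rng.normal(loc=0.0, scale=1.0, size=5000)
      kde = gaussian_kde(samples)
      # The estimated density should be highest near the true mean (0)
      # and much lower out in the tails (e.g. at 2).
      print(kde(np.array([0.0]))[0], kde(np.array([2.0]))[0])
      ```

      The point of the sketch: the KDE makes no linear assumption at all, which is the sense in which "it's all linear regression" misses what modern methods actually do.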

  6. 5 months ago
    Anonymous

    Carmack is probably wrong about that. The closest thing we have to general AI are generative models, which are still nothing more than statistics algorithms bootstrapped onto linear functionals.
    As for your take, OP, yes - computers make us lazier. The number of remaining people who can solve a PDE, by hand, to within any given margin of error is effectively zero. Everything is automated. The last bit of effort left in the sciences is the theory - whether that be in the development of new theory, or in the efforts of those who work in applied fields to build new applications from the theory. Everyone else is just throwing shit into a computer and taking whatever the output is on faith.

  7. 5 months ago
    Anonymous

    is all intelligence randomly generated

  8. 5 months ago
    Anonymous

    Imagine thinking you can kickstart consciousness by running a database through mental gymnastics loops. That would be such a retarded waste of time. Imagine investing into that and telling people AI is x years out.

    • 5 months ago
      Anonymous

      What do you think a baby is for the first couple years of life? It just parrots whatever information it gathers from its environment

      • 5 months ago
        Anonymous

        A baby, like a human baby? You mean like the one that develops over time and has the capacity to be an observer of the universe as a conscious experience? The property of consciousness is already there. You're trying to create that in your toaster by having it mimic human development. I think that's incorrect.

        • 5 months ago
          Anonymous

          if you isolate a person their entire life in a box with no external stimuli then they will be no more alive than a starved AI

          • 5 months ago
            Anonymous

            >if the observer can't observe it will be exactly like this toaster
            Yeah, no shit.

  9. 5 months ago
    Anonymous

    >lazier and lazier solutions
    Optimal. Science is pragmatic.

  10. 5 months ago
    Anonymous

    Current computer will just make us look for lazier and lazier solutions.
    >just use a calculator to solve this integral
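    For what it's worth, the punchline is literal now. A "calculator" doing the integral is a few lines of stdlib Python; here is a composite Simpson's rule sketch (the function, interval, and step count are just an example):

    ```python
    import math

    def simpson(f, a, b, n=1000):
        """Composite Simpson's rule: the 'calculator' doing the integral for you."""
        if n % 2:
            n += 1  # Simpson's rule needs an even number of subintervals.
        h = (b - a) / n
        total = f(a) + f(b)
        for i in range(1, n):
            total += f(a + i * h) * (4 if i % 2 else 2)
        return total * h / 3

    # ∫₀^π sin(x) dx = 2, recovered numerically with no thinking required.
    print(simpson(math.sin, 0.0, math.pi))
    ```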
