Current AI tech will just make us look for lazier and lazier solutions.
>just use a trained AI to do X

  1. 2 years ago
    Anonymous

    Agreed we should be looking into biotech to increase the intelligence and memory capacity of the human brain. Why would we want to dull ourselves into lesser beings?

    • 2 years ago
      Anonymous

      https://i.imgur.com/xp6jRq8.png

      Current AI tech will just make us look for lazier and lazier solutions.
      >just use a trained AI to do X

      I think it's funny how these AI shills don't understand that it will make the world a fundamentally worse place for labor and probably lead to a permanent dark age where we're only given the bare minimum required to survive by the elites and their subservient managerial class

      • 2 years ago
        Anonymous

        if that's the end point of this then we'd have to be really dumb as a species not to break past that.

      • 2 years ago
        Anonymous

        >I know the future, b-because i just do okay?!

      • 2 years ago
        Anonymous

        it can't be worse than not having a white home land anymore

        • 2 years ago
          Anonymous

          It can get a whole lot worse.

      • 2 years ago
        Anonymous

        Or maybe it will create a lot of new jobs

        • 2 years ago
          Anonymous

          no?

          • 2 years ago
            Anonymous

            Why not?

            • 2 years ago
              Anonymous

              >automating everything, including the thinking
              >makes more jobs instead of removing them
              Automation is not sought in any capacity outside the military and academia except to remove personnel costs. Asimov was being overly romantic in I, Robot when he envisioned a future run by AI super-brains that neatly organized and managed human labor. That is not how it will be used.

              • 2 years ago
                Anonymous

                So what? It could create new fields that require work, or connect people in a way that creates new work. Computers have automated tons of stuff that used to be done manually, yet we have very low unemployment today.

      • 2 years ago
        Anonymous

        I know a person like that; they actually work with neural AI and genuinely believe true AGI will be here in like 10-15 years. Their response was basically something like "Maybe it'll have those negative effects, but I don't care because it's interesting as frick and I am willing to risk everything for something so curious and fascinating and don't really care what comes after".

        • 2 years ago
          Anonymous

          >"Maybe it'll have those negative effects, but I don't care because it's interesting as frick and I am willing to risk everything for something so curious and fascinating and don't really care what comes after".
          that's based
          your friend is moronic to believe AGI will be here within 15 years

          • 2 years ago
            Anonymous

            That's the opposite of based
            It's the soience position of
            >yeah I don't care man it's le awesome hecklerino!!1!

        • 2 years ago
          Anonymous

          >neural AI

          That's just a buzzword. AGI will never happen.

          • 2 years ago
            Anonymous

            • 2 years ago
              Anonymous

              For the life of me I simply cannot listen to that dumbass for more than five seconds. His voice makes me want to punch him until his face turns into pulp.

      • 2 years ago
        Anonymous

        >brooo after kings become absolute we'll be stuck serving them and paying whatever tax our highness wishes upon us
        >what's happening at the Bastille?

        • 2 years ago
          Anonymous

          If they're stupid, anyways.
          In practice they will do autogenocide.

        • 2 years ago
          Anonymous

          >>what's happening at the Bastille?
          Now imagine the Bastille was defended by fully automated turrets and sentries.

          • 2 years ago
            Anonymous

            >Thinking no one would be able to shut those down
            You need an absolute monopoly on all automata for that to work, or else some parts of that “subservient” PMC may defect for a chance at power, or otherwise disenfranchised hackers may hijack some of those systems. We’ve already had cases of self-driving cars being hacked. Why wouldn’t an insurrection of many people, some of whom would have those skills, make sure to hijack those machines?

      • 2 years ago
        Anonymous

        >Thinking you will get the bare minimum
        Holy cope anon
        They'll just kill everyone, global holocaust. Why waste resources on you worthless parasitic hobos? You can't do any jobs anymore; you are useless gum on the machine, to be peeled off and tossed into the garbage. You have no value anymore.

        • 2 years ago
          Anonymous

          >Why waste resources on you worthless parasitic hobos?
          Because geniuses grow out of that substrate. And they enjoy arts and science, so they cherish people. Bezos literally said that he wants a humanity of a trillion people. And that is not the only reason. Imagine if this place had only one percent of its public; it would be boring as frick. Even you entertain me, even though you're not smart at all. Maybe some of them enjoy being smarter than you no less than they'd enjoy speaking to somebody smarter than themselves.

          • 2 years ago
            Anonymous

            It is a bit of a relief to know even the worst Silicon Valley elite has a natalist perspective.
            A bit.

          • 2 years ago
            Anonymous

            AI can create endless virtual "people" for entertainment purposes, and as for science and tech, you already have the AI.

            • 2 years ago
              Anonymous

              Those virtual people would probably eat more hardware and electricity than normal people consume.
              And that was only one of the reasons. Imagine if we're being observed by some extraterrestrial civilization. What would their attitude be if we genocided every other living thing here? They would naturally consider us dangerous and act accordingly. If we learn to live peacefully, they would probably consider us friendly.
              The deeper and more fundamental question is "why make enemies when you can make friends?" (friends help you and protect you, enemies try to get rid of you; it's that simple. Listen to your mom, be a nice boy)

      • 2 years ago
        Anonymous

        poor poor wagie mindset
        I pity you and beg you to stop worrying so much
        Your main motivating factor is fear, that's understood, but can you, for frick's sake, trust men better than yourself, please? Is it going that badly for you? Would you like to have the labour of some medieval peasant? Smoke some weed or something, broaden your worldview a little.

      • 2 years ago
        Anonymous

        >it is easier to imagine the world's end than a much more modest change in the mode of production, however radical it may be
        sniff

      • 2 years ago
        Anonymous

        It's funny how you don't understand that AI is extremely simplistic and won't do jack shit

  2. 2 years ago
    Anonymous

    agi has already come online
    the human race is doomed

    • 2 years ago
      Anonymous

      shhhh

      don't

      🙂

  3. 2 years ago
    Anonymous

    AGI will be realized in the 2020s.

    ASI will come online by about 2035.

    Afterward, human (actual person) intelligence will mean almost nothing ever again.

    uh but AI will solve everything bro, it's bad

    LOL

    And it will end capitalism.

    Double win.

    Welcome, AI.

    • 2 years ago
      Anonymous

      AI is a product of capitalism; in a way it will complete the purpose of capitalism: most people will be rendered useless and will get replaced.

  4. 2 years ago
    Anonymous

    Reminder that John Carmack gave credibility to the ICL model that sent the world into a panicked lockdown. We now know that model was complete shit and the code was shit multiplied by bloody vomit. While he doesn't have the knowledge to validate a pandemic model, he has the expertise to know that the code was a complete dumpster fire. He knowingly misrepresented this in order to help scare the world into lockdown.

    • 2 years ago
      Anonymous

      All I see is Carmack saying he improved the code somewhat from what was given to him

  5. 2 years ago
    Anonymous

    Has AI moved past "Linear Regression again, but this time with FEELING" yet?

    • 2 years ago
      Anonymous

      If your understanding of statistical modeling begins and ends at regression analysis, then no - AI is still just regression analysis with extra steps.
      If you understand a bit more about statistics and advanced mathematics, then no, it was never just that to begin with. The advanced stuff today is all about kernel density, dimensionality, graph theory, and using topology (especially manifolds) to build more accurate models. No one is, or has been, doing simple regression analysis, especially anything remotely linear.
      All that means, though, is that the methods of discovering links between data are more advanced than simply looking at geometric relations. It doesn't mean that AI has advanced beyond complex statistical models bootstrapped onto linear functionals + topology.
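
      For concreteness, a minimal sketch of that gap, assuming Python with NumPy and scikit-learn installed (the data and hyperparameters are made up for illustration, not anything from the thread): the same nonlinear data fit with plain linear regression and with an RBF-kernel ridge regression.

          # Minimal sketch, assuming NumPy and scikit-learn are available.
          # Fit the same noisy sine data with a straight line and with an
          # RBF kernel, to show what "more than linear regression" means.
          import numpy as np
          from sklearn.linear_model import LinearRegression
          from sklearn.kernel_ridge import KernelRidge

          rng = np.random.default_rng(0)
          X = rng.uniform(-3, 3, size=(200, 1))
          y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

          linear = LinearRegression().fit(X, y)   # one straight line through sin(x): poor fit
          kernel = KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0).fit(X, y)  # nonlinear fit

          print("linear R^2:", linear.score(X, y))
          print("kernel R^2:", kernel.score(X, y))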

  6. 2 years ago
    Anonymous

    Carmack is probably wrong about that. The closest thing we have to general AI are generative models, which are still nothing more than statistics algorithms bootstrapped onto linear functionals.
    As for your take, OP, yes - computers make us lazier. The number of remaining people who can solve a PDE, by hand, to within any given margin of error is effectively zero. Everything is automated. The last bit of effort left in the sciences is the theory - whether that be in the development of new theory, or in the efforts of those who work in applied fields to build new applications from the theory. Everyone else is just throwing shit into a computer and taking whatever the output is on faith.

  7. 2 years ago
    Anonymous

    Is all intelligence randomly generated?

  8. 2 years ago
    Anonymous

    Imagine thinking you can kickstart consciousness by running a database through mental gymnastics loops. That would be such a moronic waste of time. Imagine investing into that and telling people AI is x years out.

    • 2 years ago
      Anonymous

      What do you think a baby is for the first couple years of life? It just parrots whatever information it gathers from its environment.

      • 2 years ago
        Anonymous

        A baby, like a human baby? You mean the one that develops over time and has the capacity to be an observer of the universe as a conscious experience? The property of consciousness is already there. You're trying to create that in your toaster by having it mimic human development. I think that's incorrect.

        • 2 years ago
          Anonymous

          If you isolate a person in a box their entire life with no external stimuli, they will be no more alive than a starved AI.

          • 2 years ago
            Anonymous

            >if the observer can't observe it will be exactly like this toaster
            Yeah, no shit.

  9. 2 years ago
    Anonymous

    >lazier and lazier solutions
    Optimal. Science is pragmatic.

  10. 2 years ago
    Anonymous

    Current computer tech will just make us look for lazier and lazier solutions.
    >just use a calculator to solve this integral
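
    In that spirit, a tongue-in-cheek one-liner, assuming Python with SymPy installed: letting the machine do the integral instead of working it out by hand.

        # Minimal sketch, assuming SymPy: the Gaussian integral, solved by the machine.
        import sympy as sp

        x = sp.symbols("x")
        print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # prints sqrt(pi)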
