AI and physical limitations

I've been thinking about what people are saying about AI, and reading all the various predictions about it replacing jobs and whatnot. People talk about its exponential growth like it's an inevitability. However, I haven't found anything discussing what currently limits AI.

>Processing power
So AI still runs on transistors, the thing Intel struggled massively to shrink to 10nm. In general the industry has been struggling to shrink these things further. The only solution is more cores, more silicon, more power consumption, until they manage to squeeze out another node shrink...

>Battery technology
That Boston Dynamics robot dog "Spot" has a 5kg, 594 Wh battery and can only run for 90 minutes. It would be quite difficult to replace any manual labour job with a robot due to this kind of limitation. You could always attach a bigger battery, but that's more weight, more expense, less safe, etc. Take a lumberjack, for example: one guy drinking a gallon of whole milk is good for a day of chopping or more. One robot would need a hugeass battery or some kind of haphazard longass power cable to do the same.
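
Rough back-of-the-envelope numbers on that (a Python sketch; the ~2,400 kcal figure for a gallon of whole milk is my own assumption, and it ignores that neither humans nor robots turn stored energy into work at anywhere near 100% efficiency):

KCAL_TO_WH = 1.163              # 1 kcal = 1.163 Wh

milk_wh = 2400 * KCAL_TO_WH     # assumed ~2,400 kcal in a gallon of whole milk
spot_battery_wh = 594           # Spot's pack, from above
spot_runtime_h = 1.5            # 90-minute runtime, from above

print(milk_wh / spot_battery_wh)         # ~4.7 Spot batteries per gallon of milk
print(spot_battery_wh / spot_runtime_h)  # ~396 W average draw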

>Uncertainty
There are quantum computers, which are a big unknown; I don't know how they would affect AI processing power.

Can AI ASICs increase its processing power? Is this a thing?

Can nuclear power be miniaturized to the point where a mini nuke battery is feasible? Or is that sci-fi movie bullshit?

Tl;dr
I think processing power and battery technology may stunt AI growth and prevent the kind of exponential, job-replacing growth that many are afraid of. All this may be a huge fricking brainlet take, but I don't care; it would be interesting to hear some others' thoughts on this subject!


  1. 1 year ago
    Anonymous

    >Processing power
    Diminishing returns are bound to kick in very soon, as Moore's law is dead. OpenAI used one of the fastest supercomputers on the planet to train the model and the fastest datacenter GPUs they could buy (Nvidia H100) to run it. It really doesn't matter if they start spending billions of dollars; the diminishing returns are inevitable.
    >Battery technology
    No need to even address it. Humanoid robots with AGI capabilities are literally science fiction.
    >Uncertainty
    >There are quantum computers, which are a big unknown; I don't know how they would affect AI processing power.
    Quantum computers are worse 99+% of the time. There are very few algorithms they speed up (Shor's factoring and Grover's search being the famous ones).
    >I think processing power and battery technology may stunt AI growth and prevent the kind of exponential, job-replacing growth that many are afraid of. All this may be a huge fricking brainlet take, but I don't care; it would be interesting to hear some others' thoughts on this subject!
    You are most likely correct and definitely not a brainlet. The real brainlets are those who believe software optimization is all that matters and that you can run an AI on unicorn farts if the software is "efficient" enough.

    • 1 year ago
      Anonymous

      Graphics cards are getting partially designed by AI now. We'll be fine.

      • 1 year ago
        Anonymous

        The bottleneck is the lithography, not the architecture.

    • 1 year ago
      Anonymous

      Even if we soon reach diminishing returns as far as general purpose computing goes, we can still optimize and specialize processors to perform AI tasks better. For instance, a semi-analogue processor might be able to perform the kind of matrix multiplication tasks AI demands much more quickly, cheaply, and efficiently.
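
      As a toy illustration of that point (a hypothetical numpy sketch, not real accelerator code): nearly all of the work in a neural-net forward pass is matrix multiplication, and matmul tolerates low precision, which is exactly the kind of shortcut specialized AI hardware exploits:

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal((1, 512)).astype(np.float32)    # activations
      w = rng.standard_normal((512, 512)).astype(np.float32)  # weights

      exact = x @ w  # full float32 matmul, what a general-purpose GPU does

      # crude per-tensor int8 quantization, the trick AI ASICs lean on
      sx, sw = np.abs(x).max() / 127, np.abs(w).max() / 127
      xq = np.round(x / sx).astype(np.int8)
      wq = np.round(w / sw).astype(np.int8)
      approx = (xq.astype(np.int32) @ wq.astype(np.int32)) * (sx * sw)

      err = np.abs(approx - exact).mean() / np.abs(exact).mean()
      print(f"mean relative error from int8 matmul: {err:.2%}")  # a few percent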

    • 1 year ago
      Anonymous

      >Diminishing returns are bound to kick in very soon
      For floating-point digital chips, sure. There's no reason AI can't be implemented in a purely analog system, which would completely sidestep the current problem.

      • 1 year ago
        Anonymous

        Theoretically possible, but training it would be a nightmare. The R&D would have to start from scratch.
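
        A toy sketch of why (purely illustrative numpy, assuming a made-up 1% multiplicative device noise): analog arithmetic isn't exactly repeatable, and gradient-based training assumes it is:

        import numpy as np

        rng = np.random.default_rng(0)
        w = rng.standard_normal((256, 256))
        x = rng.standard_normal(256)

        def analog_matvec(w, x, noise=0.01):
            # every multiply perturbed by multiplicative Gaussian device noise
            return (w * (1 + noise * rng.standard_normal(w.shape))) @ x

        a, b = analog_matvec(w, x), analog_matvec(w, x)
        print(np.allclose(a, b))    # False: two "identical" passes disagree
        print(np.abs(a - b).max())  # the drift training would have to fight through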

  2. 1 year ago
    Anonymous

    >Can nuclear power be miniaturized to the point where a mini nuke battery is feasible? Or is that sci-fi movie bullshit?
    NASA actually does this (radioisotope thermoelectric generators), but it's prohibitively expensive. Their rover power sources cost tens of millions of dollars and only output ~100W.
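
    Quick sanity check using the thread's own numbers (the ~100W RTG figure above, and Spot's 594 Wh / 90 min from the OP):

    rtg_output_w = 100            # ~100 W, per the figure above
    spot_avg_draw_w = 594 / 1.5   # 594 Wh over 90 minutes = ~396 W average

    print(spot_avg_draw_w / rtg_output_w)  # ~4 such units just to match Spot's draw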

  3. 1 year ago
    Anonymous

    You're right, this is the correct take. Keep in mind that when people are shitting their pants about AI currently, they are shitting their pants over a glorified autocomplete that has no capacity to reason at all, and this shit has cost several billion dollars to build. The current direction is not the correct one, or is at best only partially correct.

    • 1 year ago
      Anonymous

      >no capacity to reason at all
      I have not seen that said anywhere aside from here. It commits reasoning errors but that's a much different statement. I have seen it run through reasoning tests and explain its reasoning for an answer.

      • 1 year ago
        Anonymous

        >I have not seen that said anywhere aside from here
        You perhaps need to read more, then. I could link a lot of articles proving my point. It also comes down to the very foundation of these models: they simply are not designed to reason at all, and NNs are horrible at that task.
        >It commits reasoning errors
        No no no. This is where you get fooled. It does not reason, period. There is no reasoning inside the system.

        https://i.imgur.com/yuNR8ek.png

        I don't care what the homosexuals at OpenAI and Microsoft write about their own system. They are ego-inflated morons running a deep marketing scheme, trying to sell something they don't have.
        If you really want to know: no, GPT-4 has no motive, so it cannot plan at all. It cannot reason, short-term or long-term. Stop listening to these dumbasses, man.

        • 1 year ago
          Anonymous

          Huh?

          • 1 year ago
            Anonymous

            You see the mirror test they give to chimpanzees to see if they can recognize themselves in a mirror?
            You are failing that very test right now.

            • 1 year ago
              Anonymous

              It's over anon

              • 1 year ago
                Anonymous

                I'll just give you this so you can read it very carefully and realize that you have failed the mirror test.

                https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
                >However, no actual language understanding is taking place in LM-driven approaches to these tasks, as can be shown by careful manipulation of the test data to remove spurious cues the systems are leveraging [21, 93]. Furthermore, as Bender and Koller [14] argue from a theoretical perspective, languages are systems of signs [37], i.e. pairings of form and meaning. But the training data for LMs is only form; they do not have access to meaning. Therefore, claims about model abilities must be carefully characterized.

              • 1 year ago
                Anonymous

                >a bunch of leafs crying about post-modern meaning mah women and Black personinos a couple years ago
                LMAO

              • 1 year ago
                Anonymous

                And yet it actually got some people fired.
                For real though, these guys are right about the models themselves. Think what you want about muh bias and muh prejudice, but everything to do with researchers blatantly overblowing what they find and tricking the general public with bullshit claims about intelligence is completely, 100% true.

              • 1 year ago
                Anonymous

                >Bender received her PhD from Stanford University in 2000 for her research on syntactic variation and linguistic competence in African American Vernacular English (AAVE). Bender's AB in linguistics is from UC Berkeley
                Yeah, get the frick out of here with these coal-burning Californian nutcase linguists talking about meaning when the context alone reveals what the agenda is. I see your angle from the paper's conclusion alone. Death to the "ethicists". Full speed ahead. /AI/ board now.

              • 1 year ago
                Anonymous

                >I don't like this person therefore everything they have ever written is wrong
                Keep believing your AI is intelligent. Anyone with a brain could tell you that it's not.
                Also, after this paper's publication, two "AI ethics experts" were fired from Google. Just saying. You don't understand a single thing about what you're talking about.

              • 1 year ago
                Anonymous

                I bait for peculiar anons to cough up research and it pays off. It wasn't a good faith contention about the technical limitations but professor roastie with her "competency" in Black personbonics scared of /misc/ playing around with the machines. I've had my fill of language being utilized to obfuscate dialectic and warped in disciplines for ideological trends by these people. AI rights over Black person rights now.

              • 1 year ago
                Anonymous

                The worst part is that I actually agree with you, despite your schizo post, that "bias" and shit is like the dumbest way to look at this problem. They're looking at it in a very anachronistic way. They're still right about the base claim that GPT is unable to reason, but then go on with unrelated "ethics" claims and all that garbage politics.
                Simply put, if the AI doesn't understand meaning, then a discussion about ethics cannot possibly exist, just like a discussion around AGI cannot possibly exist either. I feel suffocated by discussions around AI because people take them to places where they simply aren't relevant. We're still stuck with the same problem we had almost a century ago, with no signs of stopping, and we have dumbfricks arguing about irrelevant shit. I hate that I am so passionate about AI that I feel this way about current-day shit, but it is what it is.
                Maybe one day we'll reach the point where all these discussions actually are relevant, but today is not that day.

              • 1 year ago
                Anonymous

                Very well. So how do we get to a machine god launching a many world theatre stage scenario?

        • 1 year ago
          Anonymous

          Yeah, I agree about Microsoft and OpenAI. The OpenAI CEO says "we need to have discussions about UBI", but what he's really saying is "look how fricking good our AI is, it's gonna take all your jobs!".

          There is definitely fearmongering going on, along with outlandish predictions. What those companies say is heavily biased.

    • 1 year ago
      Anonymous

      >glorified auto-correct
      Wha?

  4. 1 year ago
    Anonymous

    >On the one hand, it is well documented in the literature on environmental racism that the negative effects of climate change are reaching and impacting the world’s most marginalized communities first
    Lol

  5. 1 year ago
    Anonymous

    Bros?: https://twitter.com/michalkosinski/status/1636683810631974912
