Can anyone realistically challenge Nvidia's monopoly on data science/AI hardware at this point?

Can anyone realistically challenge Nvidia's monopoly on data science/AI hardware at this point? They're so far ahead of everyone else it's not even funny

  1. 11 months ago
    Anonymous

    no for training. yes for edge/inference. ryzen 7040, for example

  2. 11 months ago
    Anonymous

    >using GRAPHICS CARDS for AI
    Is there like any ACTUAL AI hardware on the horizon?

    • 11 months ago
      Anonymous

      yes, Apple M2 neural engine

      • 11 months ago
        Anonymous

        >yes, Apple M2 neural engine
        That's like saying the NPU built into Rockchip SOCs is a viable, competitive AI product.
        If it doesn't scale, and it's not available outside of a consumer product, it doesn't count.

        • 11 months ago
          Anonymous

          This. Apple could challenge NVIDIA, but they are busy with their own business model.

    • 11 months ago
      Anonymous

      Yea, they already lost to the H100 though.

    • 11 months ago
      Anonymous

      It's coming soon. The idea of how AIs work has mostly been solved. There's not much difference between models at this point; some are better than others, but they're all based on the same paper, more or less.

      Now that we know it definitely has tangible use coupled with high demand, we'll get dedicated hardware for it soon. Microsoft in particular really wants this for next-gen PCs.

      When it happens, it will result in GPU prices falling. I can imagine that PC motherboards one day will have a dedicated socket for these things. Much, much later down the road it might be integrated into CPU silicon.
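
      (The "same paper" is presumably the Transformer one, "Attention Is All You Need"; its core op is just a couple of matmuls plus a softmax, which is exactly the kind of thing dedicated hardware can chew through. Rough, untested sketch of that op in PyTorch:)

          import torch
          import torch.nn.functional as F

          def attention(q, k, v):
              # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
              scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
              return F.softmax(scores, dim=-1) @ v

          # (batch, heads, tokens, head_dim) - made-up shapes, just for illustration
          q = k = v = torch.randn(1, 8, 128, 64)
          out = attention(q, k, v)
          print(out.shape)  # torch.Size([1, 8, 128, 64])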

      • 11 months ago
        Anonymous

        The answer is FPGAs. AI is too broad a field for a one size fits all solution. FPGAs let you have application-specific hardware for every application.

        • 11 months ago
          Anonymous

          >The answer is FPGAs
          Wow, what a naive take. Have you priced FPGAs with enough logic cells to be useful for AI-type computations? They'll make Nvidia cards look cheap, and good luck finding open-source tools to program them, or even finding them in stock.

      • 11 months ago
        Anonymous

        >I can imagine that PC motherboards one day will have a dedicated socket for these things

        I think this will be the case as well. Vidya games with AI generative content on-the-fly will probably push a huge market for something like this.

      • 11 months ago
        Anonymous

        Eventually the CPU, GPU and AI units will all come in a single package just like the upcoming Meteor Lake chips but with bigger GPU and AI chiplets.

      • 11 months ago
        Anonymous

        >I can imagine that PC motherboards one day will have a dedicated socket for these things. Much, much later down the road it might be integrated into CPU silicon.
        Surely PCIe slots would be enough..

      • 11 months ago
        Anonymous

        Current AI tech is limited by the Von Neumann Bottleneck.

        • 11 months ago
          Anonymous

          Hilariously techlet moronpost.

          • 11 months ago
            Anonymous

            It's kind of moronic to design neural networks with centralised processing. The processing belongs finely distributed in memory, but inertia keeps things moronic for now.

            • 11 months ago
              Anonymous

              Stop being clinically moronic anytime.

      • 11 months ago
        Anonymous

        What paper?

    • 11 months ago
      Anonymous

      NOVIDEO has had dedicated AI hardware in their GPUs for a while now.
      NOVIDEO Hopper is not a GPU, it has no graphics hardware - just compute.

      >yes, Apple M2 neural engine
      Not enough.

      >It's coming soon. The idea of how AIs work has mostly been solved at this point. There's not much difference at this point, some are better than others but they're based on the same paper, more or less.
      >Now that we know it definitely has tangible use coupled with high demand, we'll get dedicated hardware for it soon. Microsoft in particular really wants this for next-gen PCs.
      >When it happens, it will result in GPU prices falling. I can imagine that PC motherboards one day will have a dedicated socket for these things. Much, much later down the road it might be integrated into CPU silicon.
      GPU prices will not fall, because you can't even use consumer GPUs in production for CUDA - the EULA forbids it. It's fine for development and fricking around, but you will never get the VRAM that the professional versions get.

      >The answer is FPGAs. AI is too broad a field for a one size fits all solution. FPGAs let you have application-specific hardware for every application.
      FPGAs are too expensive, too inefficient, and too hard to use and program for.
      NOVIDEO won because of their software and the availability of compute hardware in their consumer GPUs; the competition can't compete.

      • 11 months ago
        Anonymous

        >GPU prices will not fall because you can't even use consumer GPUs in production for CUDA - the EULA forbids it.
        Nvidia will not last.

        Trying to keep serverside is the way of the dodo.

        • 11 months ago
          Anonymous

          They had a slump in revenue but are recovering nicely.
          They will sell every single supercomputer they can make. They are totally independent in terms of hardware now (CPU, SoC, GPU, networking are all in-house). There isn't a single other vendor like that, bar maybe Intel, but they can't compete yet. Ponte Vecchio is a disaster and its flagship deployment, the Aurora supercomputer, is still not ready.

          • 11 months ago
            Anonymous

            >the important problems require expensive servers
            I'll take that bet and say no, actually.

            A great example is global warming. Simulation finished. Yes, you are fat, quit eating so much. That will be $999 billion.

            • 11 months ago
              Anonymous

              Literally every business disagrees, but you go you.
              The beauty of NOVIDEO's solution is that it's not really bound to AI/ML but to general HPC, so even if the fad fades away the hardware can still be used for whatever you want.

              • 11 months ago
                Anonymous

                >Literally every business disagrees, but you go you.
                Literally none of them have any ai successes :^)

              • 11 months ago
                Anonymous

                Yeah, clearly all those hundreds of millions of dollars earned on this fad are not a "success".

              • 11 months ago
                Anonymous

                yeeeessssssss? let's hear a success.

      • 11 months ago
        Anonymous

        There's a reason AMD is betting on FPGAs for AI despite being the only other player in the CUDA game. Intel is going the FPGA route as well.

      • 11 months ago
        Anonymous

        >FPGAs are too expensive, too inefficient, too hard to use and program for.
        Clash (the Haskell-based HDL) exists, so "too hard to program for" is no longer an excuse.

    • 11 months ago
      Anonymous

      computations are just math
      all math is math

      • 11 months ago
        Anonymous

        You don't need exact math for simulating brains.

    • 11 months ago
      Anonymous

      a100 and h100s?

    • 11 months ago
      Anonymous

      Congress will outlaw it because it's full of wokies and social c**tservatives who don't want people to have sex with language models. And because SAm alTmAN will convince them to let him build a monopoly.

    • 11 months ago
      Anonymous

      But you're probably not thinking of that. You probably want hardware that you can purchase and put in your computer. For that, you're stuck with GPUs for the foreseeable future.

    • 11 months ago
      Anonymous

      analog processing like
      https://mythic.ai/products/m1076-analog-matrix-processor/
      or at least digital in memory processing

  3. 11 months ago
    Anonymous

    >space age
    >heavy as an elephant

  4. 11 months ago
    Anonymous
    • 11 months ago
      Anonymous

      > Time travels into past to give his gay "warning" and not doing the dirty work himself
      What a Black person. Team openai all the way.

      • 11 months ago
        Anonymous

        You can't just send yourself into the past, stupid. He utilized a small wormhole that was only stable enough to send a brief message before it collapsed.

        • 11 months ago
          Anonymous

          Might you be a Time traveler?

  5. 11 months ago
    Anonymous

    AI/ML is a meme. It's more or less peaked

  6. 11 months ago
    Anonymous

    The real moat for nvidia isn't their gpu specs, it's their CUDA LIBRARIES that are integrated into every open source AI framework. If AMD put in a little software development effort to make a drop-in replacement for CUDA that works with their GPUs, then they would have a fighting chance.
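
    To be fair, ROCm/HIP is AMD's attempt at exactly that: on the ROCm build of PyTorch the usual "cuda" device string is backed by HIP, so a script like this (untested sketch, no claims about performance) runs unchanged on either vendor:

        import torch

        # On an NVIDIA build this is CUDA; on a ROCm build of PyTorch the same
        # "cuda" device string is backed by HIP, so the script itself is unchanged.
        device = "cuda" if torch.cuda.is_available() else "cpu"

        model = torch.nn.Linear(1024, 1024).to(device)
        x = torch.randn(64, 1024, device=device)
        y = model(x)
        print(y.device)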

    • 11 months ago
      Anonymous

      Or Intel could do it, or even crApple

  7. 11 months ago
    Anonymous

    AMD has already shown for a long time now that MI300 will be the same thing as Grace but with Ryzen cores instead of ARM. The problem is they get fricked by everyone else when it comes to software.
    Look for example at pytorch:
    >CUDA 12.1 released April 18th
    Already had a nightly build for a few weeks now.
    >ROCm 5.5 released May 2nd
    Torch said ROCm 5.5 support was coming soon about a month ago, but they are still on 5.4.
    Arch took a 1-week break from package building due to some move to GitHub. It's been a month and there's still no ROCm 5.5 on Arch. So yes, the fact that AMD has only 10% of the userbase fricks it, simply because the people who have the power to merge shit don't care about it and reply with "wtf is rocm?" or "Sorry, this works only on cuda".
    But to get to your original question, MI300 is the same thing that Nvidia showed at Computex but with Ryzen cores instead of ARM.
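
    If you're not sure which backend your torch wheel was actually built against, a quick check (rough sketch):

        import torch

        print(torch.__version__)          # e.g. "2.0.1+rocm5.4.2" or "2.1.0+cu121"
        print(torch.version.cuda)         # CUDA version string, or None on a ROCm build
        print(torch.version.hip)          # HIP/ROCm version string, or None on a CUDA build
        print(torch.cuda.is_available())  # True on both backends if a usable GPU is found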

    • 11 months ago
      Anonymous

      >AMD has already shown for a long time now that MI300 will be the same thing as grace but with ryzen cores instead of ARM.
      The difference is that AMD has nothing like NVLINK, nor do they have 400/800 Gbit/s networking. NV has everything ready now: you truck in a load of dollars and get a complete working system, without having to wait for any system integrators.

      >There's a reason AMD is betting on FPGAs for AI despite being the only other player in the CUDA game. Intel is going the FPGA route as well.

      >There's a reason AMD is betting on FPGAs for AI
      It was announced over a year ago and nothing more is known yet.
      Where's the tooling? Where are the libraries? Where's the software ecosystem? Without those it's just another piece of proprietary DOA crap.
      >despite being the only other player in the CUDA game.
      That's not even true; if anything, Intel is the other player, because oneAPI is more advanced than anything AMD has. AMD's tooling is also atrocious in comparison, and always has been.
      >Intel is going the FPGA route as well.
      At least here the software is already available, and actually working. AMD has literally nothing.

      • 11 months ago
        Anonymous

        >The difference is that AMD has nothing like NVLINK
        They do, here's the user manual: https://www.amd.com/system/files/TechDocs/56978.pdf
        >nor do they have 400/800 gbit/s networking
        Their interconnect is fast enough for the fastest supercomputer in the world, that's good enough for me.

  8. 11 months ago
    Anonymous

    >Can anyone realistically challenge Nvidia's monopoly on data science/AI hardware at this point?
    Tenstorrent, or Google if it ever sells TPU clusters.
    Every AI ASIC so far has lost because it was optimized for narrow architectures like CNNs or lacked flexibility.
    Tenstorrent does custom RISC-V CPUs plus AI ASICs.
    Google can optimize deep learning architectures for its TPU systems.

  9. 11 months ago
    Anonymous

    Does Nvidia really have a monopoly though? Large segments of the AI training and inference market are already dominated by proprietary solutions. For example Google and Amazon have their own AI hardware that uses their own custom chip designs. Eventually a lot of AI work will be handled by ASICs and FPGAs just like what happened with crypto.

  10. 11 months ago
    Anonymous

    They're not even that far ahead. The problem is CUDA. All software depends heavily on it, including being extremely optimized for it and for Nvidia hardware. If someone came up with optimized OpenCL kernels for AMD GPUs or something, you'd get the same performance and thus the same results.

    Except you can't just change frameworks overnight. Their popularity lends them credence (you "know" the results they output are correct) in science, and if not, you can track what's wrong where and when, unlike on a no-name framework. You also can't easily add the code to shit like pytorch, because it means rewriting fricktons of shit just to provide full support, even though a lot of it is not necessary.

    Only when the software is there could people possibly consider AMD, and not this time around but the next time they refresh their hardware. And since Nvidia is the safe choice, they'll go with it by default unless you prove AMD is superior in some palpable way (e.g. you can train the same model in half the time, or you can train 10 such models on the same GPU at once without losing speed, or whatever).

    • 11 months ago
      Anonymous

      This is the same as Proton vs. native Linux games. There is no need to bother writing shitty inexperienced code specifically for AMD, as rocm already handles cuda translation for AMD cards. Just write the software for cuda and it will work on AMD.

      • 11 months ago
        Anonymous

        rocm is slow, buggy, and doesn't support everything.
        Hardware does not work the same under the hood. You ALWAYS need hardware-specific optimizations at these levels.

    • 11 months ago
      Anonymous

      I predict that open standards like OpenCL will take over, since heterogeneity in the space will be valued, and as the market gets bigger, taking even a small slice of it will generate enough revenue to justify GPU R&D, so open-source GPUs will be a thing eventually. If nvidia gives people any reason to ditch them, that timeline will accelerate.
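
      For what it's worth, the portable path already exists today: a minimal vector add through pyopencl (untested sketch) runs on any GPU whose vendor ships an OpenCL driver.

          import numpy as np
          import pyopencl as cl

          a = np.random.rand(1 << 20).astype(np.float32)
          b = np.random.rand(1 << 20).astype(np.float32)

          ctx = cl.create_some_context()   # picks whatever OpenCL device the driver exposes
          queue = cl.CommandQueue(ctx)

          mf = cl.mem_flags
          a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
          b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
          out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

          prg = cl.Program(ctx, """
          __kernel void add(__global const float *a,
                            __global const float *b,
                            __global float *out) {
              int gid = get_global_id(0);
              out[gid] = a[gid] + b[gid];
          }
          """).build()

          prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

          out = np.empty_like(a)
          cl.enqueue_copy(queue, out, out_buf)
          assert np.allclose(out, a + b)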

      • 11 months ago
        Anonymous

        Why would it be valued?
        Why would opencl, which has already been deprecated, take over?
        Are you moronic?

        • 11 months ago
          Anonymous

          Portability, for one. Code to run on consumer hardware, write once run anywhere, flexibility in vendors, etc. Nvidia having a monopoly would let them charge higher prices, and that creates motivation for competition. If Moore's law dies on us anytime soon, computer hardware will become commodified real fast.

          >Why would opencl, which has already been deprecated, take over?

          I think Vulkan's the new one, but I did a quick Wikipedia check while I was writing that post, and Vulkan looked to be mostly about graphics, while OpenCL was updated 6 months ago.

          • 11 months ago
            Anonymous

            That is the dumbest post I've ever seen anywhere on BOT, and I've been here since 2006. Holy hell what even is happening. Are you even a living being? Even a dog is smarter than that. What. The. FRICK.

            • 11 months ago
              Anonymous

              >AD HOMINEM
              Black person
              If you're not contributing to the discussion with actual arguments consider necking yourself

        • 11 months ago
          Anonymous

          He means Vulkan compute, he's just behind the times. Also wrong, because no one will catch CUDA unless a state-level actor interferes.

          So I guess that means oneAPI has a chance, but it's not actually open source, so meh.

          • 11 months ago
            Anonymous

            Geopolitics is already driving countries to have in-house chip fabs for security reasons. Since AI is the future of everything, everyone's gonna want their own fab so a diplomatic SNAFU or war won't cut them off. If for no other reason, this will drive open standards and heterogeneity, since those cards will need to be usable by the widest possible slice of use cases to make back the investment.

            Also, open hardware is a benefit: locked-down/backdoored/etc. hardware can turn on governments and corporations just as easily as the end user, and with a diverse global market it's easy to just buy the card that doesn't suck ass.

          • 11 months ago
            Anonymous

            Isn't geohot's tinygrad able to change that? I heard he's developing a framework called tinygrad to improve AMD cards for ML.

  11. 11 months ago
    Anonymous

    What the frick does AI even do except produce a bunch of worthless content to entertain morons?

    • 11 months ago
      Anonymous

      >produce a bunch of worthless content to entertain morons?
      but that's the whole tech sector

    • 11 months ago
      Anonymous

      Deep learning is in everything you do. Your camera super resolution, your bank automatically reading your cheque numbers when you cash one in, modern bioinformatics tools, etc.
      Just because you're tech illiterate and only become aware of things once it becomes entertainment doesn't mean "AI" was born yesterday.

      • 11 months ago
        Anonymous

        >t. moron

        • 11 months ago
          Anonymous

          >waaah mommy he said facts waaaaah

  12. 11 months ago
    Anonymous

    Tesla has their Dojo architecture.
    Google has some TPUs.
    Tenstorrent is working on something, but they're a smaller company.

    Honestly, if I were a betting man, I'd go with Tesla as a dark horse, since they have the machines necessary to scale up in consumer hardware (their robots and their cars allow them to sell millions of their chips each year, potentially 10s of millions in a few more years), and Google as having the potential to catch up. But Google may only be able to do one or two exascale supercomputers.

  13. 11 months ago
    Anonymous

    No, they've won and nothing will topple them until Intel comes up with something entirely out of left field that revolutionizes training and so on

  14. 11 months ago
    Anonymous

    How much does NVIDIA's AI revenue depend on Microsoft, I wonder? ChatGPT, which started this recent AI craze, runs on Azure using Nvidia hardware, but it seems all the other big potential AI customers like Google, Amazon and Tesla mainly use their own proprietary hardware. The need for AI hardware will certainly grow, but will Nvidia have customers if all the big players are doing their own thing? TSMC should do well either way, barring an invasion, since almost everyone depends on their chips.

    • 11 months ago
      Anonymous

      Nvidia's play is to capture the gaming market. Nvidia sells ~25 million GPUs each year. Nvidia's annual revenue is only $27B, with net income of ~$9.75B at a ~37% profit margin. Tesla's annual revenue was $81B with ~40-50% YoY growth, and their net income was $12.58B, a 15.5% profit margin. This year, Tesla's will be ~$113B revenue and ~$17.5B net income.

      The market is hoping other companies drop what they're doing and pick up Nvidia GPUs because of LLMs, but it forgets that everyone else is also in the game for a reason. Google is doing LLMs of their own on their own TPUs. Microsoft will surely do their own thing. Apple will probably spin up their own LLM chip infrastructure. Tesla has been doing large NN chips for 1-2 years now, scaling/optimizing as they go along, probably the most mature of all the NN competitors, but it's not out there for traditional computers, just in their cars/robots.

      • 11 months ago
        Anonymous

        >Google is doing LLM of their own with their own TPUs
        Wrong. It's a GCP pet project.

        • 11 months ago
          Anonymous

          They've been forced into front stage due to OpenAI

          • 11 months ago
            Anonymous

            Google AI? Yes, TPU no. No one cares about it, not even google.

            • 11 months ago
              Anonymous

              >Google is doing LLM of their own with their own TPUs
              >Wrong. It's a GCP pet project.

              These. I'm tired of techlets screaming about whatever the msm is shilling at the moment. Why is nuBOT like this anyway?

  15. 11 months ago
    Anonymous

    Huge money will be spent on AI. The problem for Nvidia is that the heaviest users of AI like Google will want to keep this money to themselves and build their own systems rather than give the cash to Nvidia.

  16. 11 months ago
    Anonymous

    TPUv4 is superior by far on any model with high activation sparsity (i.e. ReLU). Investors are throwing money at AI though, and a lot of that money flows to NVIDIA.

    Realistically, technological superiority doesn't really matter here.
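
    For anyone wondering, "activation sparsity" here just means the fraction of post-ReLU activations that are exactly zero; for a roughly zero-mean layer that's about half of them. Quick illustration (sketch):

        import torch

        x = torch.randn(4096, 4096)
        w = torch.randn(4096, 4096) / 4096 ** 0.5
        h = torch.relu(x @ w)                       # post-ReLU activations

        sparsity = (h == 0).float().mean().item()
        print(f"{sparsity:.0%} of activations are exactly zero")  # ~50% for zero-mean pre-activations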

  17. 11 months ago
    Anonymous

    i thought most enterprise customers and supercomputers and shit run on linux, which nvidia has shit drivers for. am i wrong?

    • 11 months ago
      Anonymous

      yes
      nvidia's drivers are good for cuda and other server shit
      less good for actual desktop, but still fine unless you're trying bleeding edge wayland or expect VRR to work

  18. 11 months ago
    Anonymous

    You can simply protest by recording yourself throwing balloons full of seamen at nvidia gpu.
    Don't tell the viewers it's cum to avoid getting deplatformed on YouTube

  19. 11 months ago
    Anonymous

    THE more you BUY THE more you save THE MORE YOU BUY THE MORE YOU SAVEEEE
    THE more you BUY THE more you save THE MORE YOU BUY THE MORE YOU SAVEEEE

  20. 11 months ago
    Anonymous

    Why is rocm so shit? I can't believe that AMD is so israeli and can't fix their own shit fast if they wanted.

    Spend a few million on a dev team (with kernel devs involved) and rocm could be fixed in a few months.

    • 11 months ago
      Anonymous

      this
      if AMD made ROCm not shit, then it'd be solid for some things

  21. 11 months ago
    Anonymous

    Sam Altman will bring the ruin of the free world.

  22. 11 months ago
    Anonymous

    Lisa Su is not a visionary, she just got lucky with Ryzen.

  23. 11 months ago
    Anonymous

    Of course not. I mean have you seen Jensen's jackets? They are all made of leather.

  24. 11 months ago
    Anonymous

    >1 exaflops
    >144 TB VRAM
    wow
    although this means nothing if they're going to use moronic algos with quadratic complexity

  25. 11 months ago
    Anonymous

    Imagine the e-girl chatbots

    • 11 months ago
      Anonymous

      *slaps U2 server* this baby can hold so many horny little anime girls

  26. 11 months ago
    Anonymous

    AMD's MI300 is significantly faster than anything Nvidia has on the market at the moment; the problem is that AMD doesn't have the software built out yet, so the only entities interested are going to be hyperscalers.

    • 11 months ago
      Anonymous

      The MI300 is a multi-chip APU and it's nowhere near out you dumb msm-lapping consooomer

    • 11 months ago
      Anonymous

      The Grace super chip literally does the same thing but better and you can actually backorder it now. You just have to wait 6 months because of the order backlog
      >M-MUH APU
      literally of no significance. AMD's only customers are a handful of supercomputers that generate an amortized revenue of like $50M a year

  27. 11 months ago
    Anonymous

    Tenstorrent
