Why doesn't Nvidia release a card with an insane amount of VRAM? They're supposed to be the AI company

  1. 2 months ago
    Anonymous

    because you can link cards together, numb nuts.
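
    e.g. naive model parallelism in PyTorch, just a rough sketch (assumes two CUDA cards are visible and the model splits cleanly; the layer sizes are made up):

    import torch
    import torch.nn as nn

    class TwoGPUModel(nn.Module):
        def __init__(self):
            super().__init__()
            # first half of the layers lives on GPU 0, second half on GPU 1,
            # so the weights only have to fit in the combined VRAM
            self.part1 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:0")
            self.part2 = nn.Linear(4096, 4096).to("cuda:1")

        def forward(self, x):
            x = self.part1(x.to("cuda:0"))
            # activations hop across PCIe/NVLink between the halves
            return self.part2(x.to("cuda:1"))

    out = TwoGPUModel()(torch.randn(8, 4096))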

    • 2 months ago
      Anonymous

      how does that not crash your home grid?

      • 2 months ago
        Anonymous

        By living in a country with a proper grid.

        • 2 months ago
          Anonymous

          Your home grid. Not the country's grid, you moron.

          • 2 months ago
            Anonymous

            In 230/240 V countries, houses can easily get connections of up to 20 kW.
            That photo shows only one PSU with an IEC C20 inlet powering all those cards, so that's 16 A at 230 V = 3.7 kW.
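
            Back-of-envelope if anyone wants to check it (the per-card draw is a guess, not read off the photo):

            mains_volts = 230
            inlet_amps = 16                        # IEC C20 inlet rating
            psu_watts = mains_volts * inlet_amps   # 3680 W, call it 3.7 kW

            watts_per_card = 300                   # assumed load per card
            print(psu_watts // watts_per_card, "cards per PSU, roughly")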

      • 2 months ago
        Anonymous

        Probably living in a 220 V country, so he can pull 3000 W out of a wall plug.

    • 2 months ago
      Anonymous

      Each card still has a ridiculously low amount of VRAM. If you stacked ten cards with 128 GB each, you'd end up with something far more powerful than stacking ten shitty cards.

    • 2 months ago
      Anonymous

      >all that for 80GB VRAM
      There are other more efficient and painless solutions to achieve the same thing

  2. 2 months ago
    Anonymous

    That's for workstation cards.

  3. 2 months ago
    Anonymous

    You are confusing the consumer market with the business market. They have enterprise GPUs with over 48 GB of VRAM, but those cost something like $40k and are only sold to businesses. Why would they sell a cheap consumer card with that much VRAM and undercut their own business cards?

  4. 2 months ago
    Anonymous

    Money

  5. 2 months ago
    Anonymous

    They want academics and corporations to pay for expensive Teslas, not cheap GeForces.

  6. 2 months ago
    Anonymous

    I feel like all this just screams that desktop computer architecture needs to change entirely. It should be possible to have a single pool of modular RAM shared by separate processing units without compromising on latency or modularity. The current model is just outdated at this point.

    • 2 months ago
      Anonymous

      People are already talking about combining the CPU and GPU into one chip for desktops

      • 2 months ago
        Anonymous

        Like we've been doing since 2010, when Intel launched their first iGPU on the CPU die?

    • 2 months ago
      Anonymous

      having separate processing units in personal devices at all is outdated; it would be much more efficient to have a workload-agnostic shared backend that handles both scalar and vector operations
      e.g. https://libre-soc.org/3d_gpu/architecture/
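
      very loose software analogy of the idea, nothing more (numpy just stands in for the shared unit here):

      import numpy as np

      def fma(a, b, c):
          # one code path serves both scalar and vector operands,
          # instead of routing them to separate units
          return a * b + c

      print(fma(2.0, 3.0, 1.0))                    # scalar op: 7.0
      print(fma(np.arange(4), np.arange(4), 1.0))  # vector op: [1. 2. 5. 10.]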

      • 2 months ago
        Anonymous

        Dude what if we used SSDs as ram!?!?

        • 2 months ago
          Anonymous

          yes, i do in fact approve of research into techniques that would drop the distinction between storage and memory, but what i posted could be done today, so i don't see the relevance

        • 2 months ago
          Anonymous

          >he doesn't know how useful optane was
          it only got discontinued because morons couldn't use it; AWS has tons of its replacement in their infrastructure

  7. 2 months ago
    Anonymous

    They already sell the H100 with 80GB

    • 2 months ago
      Anonymous

      And that's still less memory than a Mac I can get at Best Buy

      • 2 months ago
        Anonymous

        H100 can actually scale well, unlike the Mac

  8. 2 months ago
    Anonymous

    go buy an A100

    • 2 months ago
      Anonymous

      A100 tops out at 80 GB.
      The H200 ships with 144 GB of HBM3e (141 GB usable).

  9. 2 months ago
    Anonymous

    Because the limiting factor for AI, outside of poor consumers, is compute, not memory capacity. 40 GB A100s are actually more popular for inference than 80 GB A100s: they're cheaper, and to get decent speeds you have to use so many of them that you end up with enough memory to run 1T+ models in full FP16 anyway.
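
    Quick sanity check on that last bit (weights only, ignoring activations and KV cache):

    params = 1e12          # 1T-parameter model
    bytes_per_param = 2    # FP16
    weights_gb = params * bytes_per_param / 1e9   # ~2000 GB of weights alone

    print(weights_gb / 40)   # ~50x A100 40GB just to hold the weights
    print(weights_gb / 80)   # ~25x A100 80GB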

  10. 2 months ago
    Anonymous

    >Why doesn't Nvidia
    Artificial scarcity is in their business model.

  11. 2 months ago
    Anonymous

    They have cards with well over 100 GB of VRAM, but those are comically expensive.
