Why doesn't Nvidia release a card with an insane amount of VRAM? They're supposed to be the AI company

  1. 1 month ago
    Anonymous

    because you can link cards together, numbnuts.
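
    A rough sketch of what "linking cards together" looks like in software, assuming PyTorch plus Hugging Face transformers/accelerate; the checkpoint id is a made-up placeholder:

        # assumes: pip install torch transformers accelerate, and >1 CUDA GPU visible
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "some-org/some-70b-model"  # hypothetical checkpoint id

        # device_map="auto" shards the layers across every available GPU,
        # so what matters is total VRAM across cards, not VRAM per card
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            device_map="auto",
            torch_dtype=torch.float16,
        )
        tokenizer = AutoTokenizer.from_pretrained(model_id)

        inputs = tokenizer("hello", return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=20)
        print(tokenizer.decode(out[0]))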

    • 1 month ago
      Anonymous

      how does that not crash your home grid?

      • 1 month ago
        Anonymous

        By living in a country with a proper grid.

        • 1 month ago
          Anonymous

          Your home grid. Not the country's grid, you moron.

          • 1 month ago
            Anonymous

            In 230/240 V countries, houses can easily get connections of up to 20 kW.
            That photo shows only one PSU with an IEC C20 plug powering all those cards, so that's 16 A at 230 V = 3.7 kW.
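
            Back-of-envelope check in Python (the per-card wattage is an assumed figure for illustration):

                volts, amps = 230, 16            # IEC C20 on a 16 A breaker
                budget_w = volts * amps          # 3680 W, i.e. ~3.7 kW

                card_w = 350                     # assumed draw per GPU; varies by model
                print(budget_w, "W budget ->", budget_w // card_w, "cards on one circuit")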

      • 1 month ago
        Anonymous

        Probably living in a 220 V country, so he can pull 3000 W out of a wall plug.

    • 1 month ago
      Anonymous

      Each card still has a ridiculously low amount of VRAM. If you stacked ten cards with 128 GB of VRAM each, you'd end up with something far more powerful than stacking ten shitty cards.

    • 1 month ago
      Anonymous

      >all that for 80GB VRAM
      There are other, more efficient and less painful ways to achieve the same thing

  2. 1 month ago
    Anonymous

    That's for workstation cards.

  3. 1 month ago
    Anonymous

    You're confusing the consumer market with the business market. They have enterprise GPUs with over 48 GB of VRAM, but those cost something like $40k and are only sold to businesses. Why would they sell a cheap consumer card with that much VRAM and undercut their own business cards?

  4. 1 month ago
    Anonymous

    Money

  5. 1 month ago
    Anonymous

    They want academics and corporations to pay for expensive Teslas, not cheap GeForces.

  6. 1 month ago
    Anonymous

    I feel like all this just screams that desktop computer architecture needs to change entirely. It should be possible to have a single pool of modular RAM that separate processing units can share without compromising on latency or modularity. The current model is just outdated at this point.

    • 1 month ago
      Anonymous

      People are already talking about combining the CPU and GPU into one chip for desktops

      • 1 month ago
        Anonymous

        Like we've been doing since 2010, when Intel launched their first iGPU on the CPU die?

    • 1 month ago
      Anonymous

      having separate processing units in personal devices at all is outdated; it would be much more efficient to have a workload-agnostic shared backend that can handle both scalar and vector operations
      e.g. https://libre-soc.org/3d_gpu/architecture/

      • 1 month ago
        Anonymous

        Dude what if we used SSDs as ram!?!?

        • 1 month ago
          Anonymous

          yes, i do in fact approve of research into techniques that would drop the distinction between storage and memory, but what i posted could be done today, so i don't see the relevance

        • 1 month ago
          Anonymous

          >he doesn't know how useful optane was
          it only got discontinued because morons couldn't use it; AWS has tons of its replacement in their own infrastructure

  7. 1 month ago
    Anonymous

    They already sell the H100 with 80 GB

    • 1 month ago
      Anonymous

      And that's still less memory than a Mac I can get at Best Buy

      • 1 month ago
        Anonymous

        The H100 can actually scale well, unlike the Mac

  8. 1 month ago
    Anonymous

    go buy an A100

    • 1 month ago
      Anonymous

      A100 tops out at 80 GB.
      The H200 bumps that to 141 GB of usable HBM3e (144 GB on the package).

  9. 1 month ago
    Anonymous

    Because the limiting factor for AI, outside of poor consumers, is compute, not memory capacity. 40 GB A100s are actually more popular for inference than 80 GB A100s, because they're cheaper, and to get decent speeds you have to use so many of them that you end up with enough memory to run 1T+ models in full FP16 anyway.
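
    The arithmetic behind that, as a quick Python sketch (pure back-of-envelope, FP16 sizing only):

        import math

        params = 1e12                    # a 1T-parameter model
        fp16_gb = params * 2 / 1e9       # 2 bytes per FP16 weight -> 2000 GB

        cards = math.ceil(fp16_gb / 40)  # 40 GB A100s needed just to hold the weights
        print(f"{fp16_gb:.0f} GB of weights -> {cards} x 40 GB A100s")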

  10. 1 month ago
    Anonymous

    >Why doesn't Nvidia
    Artificial scarcity is part of their business model.

  11. 1 month ago
    Anonymous

    They have cards with well over 100 GB of VRAM, but those are comically expensive.
