How long until I can run top tier AI with a 250 usd gpu?

I want to run SD locally in Krita so I can make gifs of waifus being fricked.
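
For reference, a minimal sketch of what bare-bones local SD generation looks like in Python, assuming the Hugging Face diffusers library, an SD 1.5 checkpoint, and a CUDA card with roughly 6-12GB of VRAM (the model ID, prompt, and settings here are placeholders, not recommendations):

    # Text-to-image sketch with Hugging Face diffusers (illustrative only).
    # Assumes torch + diffusers are installed and an SD 1.5 checkpoint is available;
    # half precision and attention slicing keep VRAM use within 6-12GB-class cards.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder ID, any SD 1.5 checkpoint works
        torch_dtype=torch.float16,         # fp16 halves weight memory vs fp32
    )
    pipe = pipe.to("cuda")
    pipe.enable_attention_slicing()        # trades a little speed for lower peak VRAM

    image = pipe("placeholder prompt, a watercolor fox", num_inference_steps=25).images[0]
    image.save("out.png")

The Krita plugins and the web UIs wrap roughly this kind of pipeline, with extra knobs, img2img, and animation bolted on top.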

  1. 1 month ago
    Anonymous

    Long as m' dick.

  2. 1 month ago
    Anonymous

    When it comes to generative AI, VRAM is key, my friend. The more VRAM a GPU has, the better.

  3. 1 month ago
    Anonymous

    dear bot user,
    rethink your existence
    stop sinning

  4. 1 month ago
    Anonymous

    Top tier AI will always require top tier hardware

  5. 1 month ago
    Anonymous

    Aren't 12GB RTX 3060s below $250 now? What can't they do? I paid about that price for mine a year and a half ago, when prices were still crazy, refurbished by PNY on their eBay shop. I don't generate animations with it, but I suppose it could.

    • 1 month ago
      Anonymous

      Can you use two for 24 gb?

  6. 1 month ago
    Anonymous

    >250 usd
    with inflation? never

  7. 1 month ago
    Anonymous

    u are thinking smol brained. in 10 years ppl will have a home server cluster and run ai to beam your waifu to ur headset with 180 degree fov 60 ppd hyper realistic screens and a 3d scanned house so it can dance on ur table and sit on ur couch and get fookd however u want and talk to u etcetc

  8. 1 month ago
    Anonymous
  9. 1 month ago
    Anonymous

    have a nice day Cris

  10. 1 month ago
    Anonymous

    https://www.ebay.com/itm/204546819237?_trkparms=amclksrc%3DITM%26aid%3D777008%26algo%3DPERSONAL.TOPIC%26ao%3D1%26asc%3D20240315173020%26meid%3D43197066fbc641c1a48d5af68193229c%26pid%3D102069%26rk%3D1%26rkt%3D1%26itm%3D204546819237%26pmt%3D0%26noa%3D1%26pg%3D4375194%26algv%3DRecentlyViewedItemsV2SignedOutMobile%26brand%3DNVIDIA&_trksid=p4375194.c102069.m5481&_trkparms=parentrq%3A79d2994d18f0a8dae7e50badffff0e8c%7Cpageci%3Ac61520e4-1258-11ef-be63-da0ce33a2a5d%7Ciid%3A1%7Cvlpname%3Avlp_homepage

    Pardon the link but you want this and a blower fan lmao.

    • 1 month ago
      Anonymous

      why is this cheaper than an RTX 5000?

      • 1 month ago
        Anonymous

        No gaming (no video out), loud + hot, runs models at 1/4 the speed of a 3090 and 1/2 the speed of a 2080 Ti hacked to 22GB of VRAM.
        The 5090 is a scam (especially if it's 24GB, not 32GB), but you are paying the price of new hardware + warranty + compatibility with the newest software (arguably all of this is ignorable).

  11. 1 month ago
    Anonymous

    You can do that now. Are you moronic, perchance?

  12. 1 month ago
    Anonymous

    Old server cards are slower than the new stuff, but they have shit tons of VRAM for inference.

  13. 1 month ago
    Anonymous

    Stable diffusion is soulless for porn, unless you have a thing for uncanny valley / e-girls.
    You can already ERP with 6GB of VRAM, but you are not going to have the smartest models (you can load a q4 quant and it runs at like 15 tk/s on my 1660 Ti; ideally you want q8, which uses about 2x the VRAM but behaves close enough to the unquantized model).
    I personally use Colab, but 3 days ago, after loading my futanari frickventures card, I got my first disconnect for running inappropriate material, so take Colab with a grain of salt.
    But Colab is cool because you can technically use it to learn how to set up AI with Python, and you can technically rent a GPU from vast.ai and run the same Colab script on that server.
    I am 100% sure that Google spies on Colab at least a little (they probably store the logs plus whatever you downloaded), but Google already knows all my fetishes so I don't really care. Also the Nvidia T4 is pretty good with 16GB of VRAM; you would need to buy a weird card like the 4060 Ti 16GB for around $400 to get a similar experience. The flaw of the 4060 Ti is that it's fine on its own but really sucks compared to a used 3090 if you wanted to scale up with multiple GPUs, because roughly every time you double the VRAM in use, the token speed is halved, and the 3090 has a lot of bandwidth headroom (you get like 30 tk/s using all 24GB, while the T4 and 4060 Ti get 15 tk/s using 16GB). Rough numbers are sketched at the end of this subthread.
    Owning your hardware is great for gaming + starting AI quickly (Colab takes a few minutes to start up, and vast.ai probably takes more steps and minutes).
    Personally I don't think GPUs will ever get cheap enough for AI: by the time we can run Sora on an Nvidia 9090 with 1TB of VRAM and generate videos without a DGX with 8 H200s, the next best AI model, using a petabyte of VRAM, is going to be hosted in the cloud for a few bucks per video.
    If you want industrial-grade AI locally, you need to pay industrial-grade money for the hardware (+ the secret model).

    • 1 month ago
      Anonymous

      >Personally I don't think GPUs will ever get cheap enough for AI: by the time we can run Sora on an Nvidia 9090 with 1TB of VRAM and generate videos without a DGX with 8 H200s, the next best AI model, using a petabyte of VRAM, is going to be hosted in the cloud for a few bucks per video.
      I mean, by that point we're probably going to have AIs that are 5-10x better than the current ones - more efficient, higher throughput, multimodal, with agent functionality built in, and workflows like ComfyUI/Automatic1111 where we can easily spin up local AIs and use them immediately without tens of configuration steps. That would be enough for most of us and the degenerates. That's the goal.

      >No gaming (no video out), loud + hot, runs models at 1/4 the speed of a 3090 and 1/2 the speed of a 2080 Ti hacked to 22GB of VRAM.
      >The 5090 is a scam (especially if it's 24GB, not 32GB), but you are paying the price of new hardware + warranty + compatibility with the newest software (arguably all of this is ignorable).

      Nvidia could release a 5090 variant that uses HBM2e and finally ship a high-end consumer product with 48GB of VRAM tailored for AI, but they won't, because they have no reason to. Technically they don't even need HBM2e - the Quadro RTX 8000 has 48GB of VRAM on GDDR6 and it was released in 2018 lmao. Think about it: their L40 GPU has 48GB of GDDR6 while drawing only 300W, versus the RTX 4090's 450W, all while being the better card - yet it's only available for data centers. They don't even need to do anything new to release GPUs tailored to consoomers - just release the L40 under a rebranded name and that's it.
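
    Rough back-of-envelope arithmetic for the quant/VRAM/speed claims above (pure Python, no libraries). The bytes-per-weight figures, the 20% overhead factor, and the bandwidth-bound speed model are assumptions for illustration, and the tokens/s figures are theoretical ceilings, not measurements:

    # Back-of-envelope VRAM and decode-speed estimates for quantized LLMs.
    # Assumed, not measured: ~0.5 bytes/weight at q4, ~1.0 at q8, 2.0 at fp16,
    # plus ~20% overhead for KV cache/activations. Decode is treated as purely
    # memory-bandwidth-bound (every weight read once per token), which is an
    # upper bound; real-world speeds land well below it.
    BYTES_PER_PARAM = {"q4": 0.5, "q8": 1.0, "fp16": 2.0}
    OVERHEAD = 1.2

    def vram_gb(params_b: float, quant: str) -> float:
        """Rough VRAM needed to hold a params_b-billion-parameter model."""
        return params_b * BYTES_PER_PARAM[quant] * OVERHEAD

    def max_tokens_per_s(params_b: float, quant: str, bandwidth_gb_s: float) -> float:
        """Bandwidth-bound ceiling on tokens/s: doubling the weight bytes halves it."""
        return bandwidth_gb_s / (params_b * BYTES_PER_PARAM[quant])

    for quant in ("q4", "q8"):
        # 13B model; ~936 GB/s is a 3090-class card, ~320 GB/s is a T4.
        print(quant,
              f"{vram_gb(13, quant):.1f} GB",
              f"3090 ceiling ~{max_tokens_per_s(13, quant, 936):.0f} tk/s",
              f"T4 ceiling ~{max_tokens_per_s(13, quant, 320):.0f} tk/s")

    The point is just the rule of thumb from the post: q8 needs about twice the memory of q4 and, since decode is memory-bound, runs at roughly half the tokens/s on the same card.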

  14. 1 month ago
    Anonymous

    Literally just work a McJob for a couple of weeks and you can afford a used 3090

  15. 1 month ago
    Anonymous

    Frick off Cris
