You don't need a powerful computer, you just need an Nvidia computer.
Man, why did proprietary CUDA get to win over OpenCL? Imagine what the world would be like if it were the other way around.
Nvidia had good timing on this. They had workstation cards before, but multicore CPUs are what helped them out. AMD had their Athlon 64 X2 and Intel released the Core 2 chips, which meant multicore/multithreaded development was becoming much more approachable, instead of having to hog lab setups or supercomputers. They bought Ageia a few years after that, which gave them PhysX, and then they worked on putting it together with the GPU on a single card.
Then it was as simple as pitching the idea to companies and labs, and it made sense for them. Instead of buying additional servers and core licenses, why not add these PCIe cards to your server and split the workload into the small, simple chunks that GPUs excel at? And given how ATI was losing ground to Nvidia after the 6000 series, and how AMD got blown away by the Core 2 Duos and Quads, we're lucky they made it to 2016.
im not /v/ermin
Only /v/ermins use AyyMD. If you do Machine Learning, you have to get Nvidia. If you do anything scientific, you get Nvidia.
False.
I mean, feel free to train your models on 16GB cards without ANY of the CUDA goodness and libraries. You will come to Nvidia, you will (Unless mama Lisa Su buys a CUDA license)
ROCm is good now
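For what it's worth, the ROCm builds of PyTorch expose the AMD card through the same torch.cuda API, so most CUDA-targeting code runs unchanged. Rough sketch, assuming you installed the ROCm wheel of PyTorch:

    import torch

    # On ROCm builds torch.version.hip is set and torch.version.cuda is None;
    # the AMD GPU still shows up through the torch.cuda namespace.
    print("hip:", torch.version.hip, "cuda:", torch.version.cuda)
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
        x = torch.randn(1024, 1024, device="cuda")
        print((x @ x).sum().item())  # matmul runs on the AMD card via HIP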
bump
Yes, it will run everything you throw at it.
just run the model on the CPU, yeah it'll take hours but at least you aren't limited
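Something like this trains fine on a CPU, just slowly. Minimal PyTorch sketch, assuming torch is installed; the model and batch here are made up just to show the loop:

    import torch
    import torch.nn as nn

    # Falls back to CPU when there's no CUDA device around.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    # Fake batch just to show the training step; swap in a real DataLoader.
    x = torch.randn(64, 784, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()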
just ask your IT department to get access to a computing cluster
My IT department's computing cluster is 5 unpowered chinkpads in a warehouse.
Because Nvidia invested hard in it, and AyyMD at the time was too busy being bankrupt to invest in theirs.
Fun fact: Nvidia offered AMD a CUDA license for $0.50 per GPU. AMD declined. Thanks Raja. Very helpful.
>Fun fact: Nvidia offered AMD a CUDA license for $0.50 per GPU. AMD declined. Thanks Raja. Very helpful.
Get tha fuck outta here! Seriously?
Yes, seriously.
just use colab
Google will time you out of Colab after a few hours, so you're very limited in the complexity of the models you can train. But for learning the basic concepts it's fine. You can rent a GPU on GCP, but that quickly becomes expensive.
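If you do use Colab, checkpoint to Drive every so often so a timeout doesn't eat the whole run. Hedged sketch; the path assumes you've mounted Google Drive, and the filename is made up:

    import os
    import torch

    CKPT = "/content/drive/MyDrive/ckpt.pt"  # hypothetical path on a mounted Drive

    def save_ckpt(model, opt, epoch):
        torch.save({"model": model.state_dict(),
                    "opt": opt.state_dict(),
                    "epoch": epoch}, CKPT)

    def load_ckpt(model, opt):
        if not os.path.exists(CKPT):
            return 0  # nothing saved yet, start from epoch 0
        state = torch.load(CKPT, map_location="cpu")
        model.load_state_dict(state["model"])
        opt.load_state_dict(state["opt"])
        return state["epoch"] + 1  # resume from the next epoch

Call save_ckpt at the end of every epoch and load_ckpt once at startup, and a disconnect only costs you the current epoch.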
Look, for any serious work no personal desktop is going to be enough. For serious work you need multiple GPUs designed for deep learning, like the NVIDIA V100 or A100, which you can only rent via AWS or shit like vast.ai.
BUT, a decent GPU with at least 6GB of VRAM is pretty nice to have if you want to learn while staying independent of Kaggle and Colab and their shitty Jupyter notebooks.
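To see what you're actually working with, and to squeeze bigger models into 6GB, mixed precision helps. Sketch only, assuming a CUDA card and a reasonably recent PyTorch:

    import torch
    from torch.cuda.amp import autocast, GradScaler

    props = torch.cuda.get_device_properties(0)
    print(props.name, round(props.total_memory / 2**30, 1), "GB VRAM")

    model = torch.nn.Linear(2048, 2048).cuda()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    scaler = GradScaler()  # keeps fp16 gradients from underflowing

    x = torch.randn(32, 2048, device="cuda")
    opt.zero_grad()
    with autocast():  # half-precision forward pass, roughly halves activation memory
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()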
>v100 and a100
<not dgx
You can just use the CPU to start and follow along with tutorials and courses.
I did my first "real" working tensorflow project on an athlon 5350.
Just use whatever you typed your post on until you actually have competence and start running into performance issues with your code.
not really, you can do all your modeling in Google Colab.
google TPU has a free tier I think
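It did via Colab at the time, at least for TensorFlow. The classic setup looked roughly like this; sketch only, it assumes the runtime actually exposes a TPU and that the resolver arguments haven't changed:

    import tensorflow as tf

    # Resolve and initialize the Colab TPU, then build the model under a TPUStrategy.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")  # "" = the Colab-provided TPU
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")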