Do I need an Nvidia GPU for AI voices?
I already got cucked out of stable diffusion, flowframes and chatbots...
Depends what you want to do.
I'm using an integrated GPU on a laptop, but I end up needing to cut audio into one-minute chunks for conversions to work.
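The chunking part is easy to script. Here's a minimal stdlib-only sketch for cutting a WAV into fixed-length pieces before feeding them to a conversion tool (function name and output naming scheme are made up for illustration):

```python
import wave

def split_wav(path, chunk_seconds=60):
    """Cut a WAV file into fixed-length chunks so low-VRAM
    conversion tools can process them one at a time."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = params.framerate * chunk_seconds
        part = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            # e.g. clip.wav -> clip_part000.wav, clip_part001.wav, ...
            out_path = f"{path.rsplit('.', 1)[0]}_part{part:03d}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)
                dst.writeframes(frames)
            part += 1
    return part
```

For MP3 or other compressed input you'd need ffmpeg or pydub instead of the `wave` module.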
Then you're even more doomed than normal. AMD only cares about AI in the consumer space to the extent that they can use it for hype to snag scientific computing contracts.
At least on Linux, if you have one of the few cards that usually works, you can probably install or compile code with ROCm support. Windows still has nothing but ONNX and OpenCL.
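If you want to try it anyway, the ROCm route on Linux usually looks something like this (a sketch, assuming a supported AMD card; the wheel index URL changes with each ROCm release, so check pytorch.org for the current one before copying):

```shell
# Install a ROCm build of PyTorch from the dedicated wheel index
pip install torch torchaudio --index-url https://download.pytorch.org/whl/rocm6.0

# ROCm builds expose the GPU through the regular CUDA API,
# so this prints True if the card is actually usable
python -c "import torch; print(torch.cuda.is_available())"
```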
The GPGPU world is only barely starting to take its first baby steps away from CUDA. Just assume that you need an Nvidia GPU for everything unless you are specifically told otherwise.
https://vocaroo.com/1o8eAidN8B8v
Anyway I'm a newbie to this but I'm having a lot of fun with it. Took a while to get anything decent but I think this one sounds pretty good. Could definitely be better if I knew what I was doing though.
In general, the best answer right now is yes. That said, I'm seeing a lot of progress outside of Nvidia: lots of work with Apple's Metal/CoreML, Intel, and AMD. But right now the best bet is Nvidia; you just can't compare with how well everything just works, and they've had market dominance with CUDA for decades. No big surprise to see them join the trillion dollar club today.
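In practice that means most scripts just probe for a backend and fall back to CPU. A minimal sketch of that pattern (function name is mine; note that ROCm builds of PyTorch also answer through the `cuda` API, so this covers AMD on Linux too):

```python
import importlib.util

def pick_device():
    """Best-effort backend probe: prefer CUDA (Nvidia, and ROCm
    builds report through the same API), then Apple's MPS, else CPU."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # no PyTorch installed at all
    import torch
    if torch.cuda.is_available():
        return "cuda"
    # torch.backends.mps only exists on newer PyTorch, hence the guard
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return "mps"
    return "cpu"
```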
Actually a good question, I want to know too.
Probably belongs in one of the many AI generals though
As for generations, I don't know.
>another thread of a local linchud that can't into X because of incompatibility
many such cases
I don't use loonix...
>dumb winchud is dumb
picture me surprised.
I think you can use an AMD GPU with mrq tortoise. But I think RVC only supports Nvidia (at least CPU works fast with it).
>chatbots
No anon, real chads use CPU for chatbots.
Just use colab or whatever.
No point in buying an expensive new graphics card just for a hobby you'll probably grow out of.
>Just use colab or whatever
y-yeah I love Google IP locking me out of the shit for an entire week after 15 minutes of using it
Can you guys post the rentry with the voice samples?
nu
Carl singing Country Roads.
It's your best bet as long as PyTorch and CUDA reign supreme in machine learning.
Stable Diffusion runs fine with AMD cards.
On fucking linux. I'm never touching that shit.