Hi,
I don't see any mention of AMD AI support yet, but AMD maintains an open-source AI stack for training/inference that runs on AMD Radeon cards.
An interesting open-source effort AMD is backing for consumer AI is the ROCm on Radeon project: https://rocm.docs.amd.com/projects/radeon/en/latest/
I want to speed up my captures, and I'm running an AMD Radeon RX 7900 XT.
There does appear to be Windows support for higher-end (and cheaper) AMD Radeon cards to perform AI operations. This would let more folks jump onto the ToneX/AmpliTube platform if they could use their existing gaming GPU for AI.
Is there some limit to how you're performing your captures? Is anything written in CUDA for Nvidia hardware only?
If so, there are Microsoft-supported libraries on Windows that allow CUDA-style workloads to run on AMD hardware. If you look up ONNX Runtime, it provides a shim layer (execution providers) that lets CUDA and AMD cards be used interchangeably, along with CPU-based backends for training and graph optimization that could suit your modeller.
There's also a training package that can be built and loaded for this, so training can be done through ONNX Runtime for your model. I'm not sure if your developers are aware of this.
If this is on your radar, is there a timeline for such a feature request? Or is it a skills/resourcing thing? AMD supports ONNX Runtime, which allows different execution providers (like CUDA or AMD's ROCm) to handle the same AI operations interchangeably.
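To illustrate what "interchangeably" means in practice, here's a minimal sketch of how an app could pick an ONNX Runtime execution provider per platform. The provider name strings are ONNX Runtime's real identifiers; the model path and the helper function are just illustrative, not anything from ToneX.

```python
def pick_providers(available):
    """Return execution providers in preference order:
    NVIDIA CUDA, AMD ROCm (Linux), DirectML (Windows, incl. AMD),
    then CPU as the universal fallback."""
    preferred = [
        "CUDAExecutionProvider",   # NVIDIA GPUs
        "ROCMExecutionProvider",   # AMD GPUs via ROCm
        "DmlExecutionProvider",    # Windows DirectML (AMD/NVIDIA/Intel)
        "CPUExecutionProvider",    # always present as fallback
    ]
    return [p for p in preferred if p in available]

# With onnxruntime installed, the same model file then runs on
# whichever backend the installed build supports, e.g.:
#   import onnxruntime as ort
#   session = ort.InferenceSession(
#       "capture_model.onnx",  # placeholder model path
#       providers=pick_providers(ort.get_available_providers()))
```

The point is that the model graph itself stays hardware-agnostic; only the provider list changes between an Nvidia and an AMD machine.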
See:
https://rocm.docs.amd.com/projects/rade ... ecommended