@sunmcnukes @BowsacNoodle
I could be mistaken, of course, but...
Probably because most AI engineers work with high-level frameworks like TensorFlow or PyTorch, which already support multiple GPU backends; PyTorch can even run on Vulkan. So it might be more efficient for Intel and AMD to contribute proper ROCm/oneAPI support to PyTorch instead of creating a CUDA compatibility layer.
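To illustrate what that backend abstraction looks like in practice (a minimal sketch, assuming a standard PyTorch install): framework-level code never touches CUDA directly, and ROCm builds of PyTorch even expose the same `cuda` device name as NVIDIA builds, so the identical script runs on either vendor's GPU or falls back to CPU.

```python
import torch

# PyTorch selects the backend at runtime. On a ROCm build this same
# "cuda" device name maps to an AMD GPU, so no code changes are needed.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The tensor ops below are backend-agnostic: the dispatcher routes them
# to cuBLAS/rocBLAS/CPU kernels depending on the active build.
x = torch.randn(2, 3, device=device)
y = torch.nn.functional.relu(x @ x.T)
print(y.device.type)
```

This is exactly why vendors get more mileage out of maintaining a PyTorch backend than a driver-level compatibility shim: one upstream integration covers every model written against the framework.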
I believe the original purpose of creating this thing was to support NVIDIA's proprietary AI tools like DLSS/DLAA, and that's a pretty small market.