This has anecdotally been true since forever. Back in the day, OpenCL implementations were passing conformance tests, but performance was poor; they could not turn hardware capabilities into performance for compute users. Drivers were buggy. Documentation was poor compared to Nvidia's docs and forums. Offerings were inconsistent (look up SYCL from Codeplay), and ownership of the AMD developer experience was unclear. The notion that it might not have improved, or is only now improving, is puzzling. It can't be for lack of recognizing the problem, and intuitively it doesn't seem that difficult. I'm curious what the reasons are.
When I was working for a Unix commercial graphics software company, the CTO told me how bad the information he received under ATI's NDA was: different revisions of the same chipset had contradictory register settings, so the driver had to identify the revision before writing a value to the write-only configuration registers. Depending on the revision, the same register might need a 0 or a 1, and writing the wrong value could crash the driver.
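To make that concrete, here's a minimal sketch of the pattern being described. All the offsets, revision IDs, and values below are hypothetical, not real ATI ones: the point is that a write-only register can't be read back, so the driver has to identify the revision first and derive the correct value from that alone.

    #include <stdint.h>

    /* Hypothetical offsets -- not real ATI registers. */
    #define REV_ID_REG 0x08  /* readable revision ID register */
    #define CFG_REG    0x40  /* write-only configuration register */

    static uint32_t mmio_read(volatile uint32_t *mmio, uint32_t off)
    {
        return mmio[off / 4];
    }

    static void mmio_write(volatile uint32_t *mmio, uint32_t off, uint32_t val)
    {
        mmio[off / 4] = val;
    }

    void program_config(volatile uint32_t *mmio)
    {
        /* CFG_REG is write-only, so there is no read-modify-write:
         * the correct value must come from the revision check alone. */
        uint32_t rev = mmio_read(mmio, REV_ID_REG) & 0xff;

        /* Different revisions of the "same" chipset want opposite
         * values in the same bit; picking wrong could crash things. */
        uint32_t val = (rev >= 0x02) ? 1u : 0u;

        mmio_write(mmio, CFG_REG, val);
    }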
All in all, it's not that driver performance was poor per se; it's that AMD did nothing to provide a software ecosystem, which meant its hardware wasn't realistically usable unless your pockets were deep enough to do AMD's job for it and fund the redevelopment of the whole ecosystem from scratch.
In other words, the ROI was MUCH better if you just used Nvidia, paid a little more for the hardware, and saved millions on software :)
Genuine question; I haven't followed this topic closely in years :)
>development using pytorch
I'd probably still play it safe with Nvidia for anything more adventurous than token generation, even if it has improved.
I remember when it came out a little over a year ago, and it's just as wrong today as it was then.
After all, if the software does not work, it's just a paperweight.