- Most of the trillion-dollar companies have their own chips with AI features (Apple, Google, MS, Amazon, etc.). GPUs and AI training are among their biggest incentives for doing so; they are highly motivated not to hand major chunks of their revenue to Nvidia.
- Mac users generally don't use Nvidia with their Mac hardware anymore, and Apple's CPUs are a popular platform for doing stuff with AI.
- AMD, Intel and other manufacturers want in on the action
- Chinese companies and others are facing export restrictions on Nvidia's GPUs.
- Platforms like Mojo (a natively compiled superset of Python with additional language features aimed at AI) and others are gaining traction.
- A lot of the popular AI libraries support hardware other than Nvidia's at this point (see the sketch below).
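
For example, in PyTorch the same code can run on whichever backend happens to be present; a minimal sketch using only the standard device names PyTorch exposes (CUDA for Nvidia, MPS for Apple silicon, CPU as the fallback):

```python
import torch

def pick_device() -> torch.device:
    # Prefer Nvidia CUDA, then Apple's Metal backend (MPS), then plain CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(1024, 1024, device=device)
y = x @ x  # identical code path regardless of which backend was picked
```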
This just adds to that. Nvidia might have to open up CUDA to stay relevant. They do have a performance advantage, but forcing people to choose inevitably leads to plenty of choices becoming available to users. And the more users choose differently, the less relevant CUDA becomes.
Some more info in this issue: https://github.com/triton-lang/triton/issues/7392
autogluon is popular as well: https://github.com/autogluon/autogluon
Do any of y’all have clear ideas about why it is that way? Why not have a really great bespoke language?
But they end up adding super sophisticated concepts to the familiar language. Makes me wonder if the end result is actually better than having a bespoke language.
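Triton is a good example of that tension: the code parses as ordinary Python, but the @triton.jit decorator, tl.constexpr block sizes, explicit pointer arithmetic, and masking are really a separate programming model layered onto familiar syntax. A sketch along the lines of the vector-add example from the Triton tutorials:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against reading past the end
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    # Launch one program instance per block of elements.
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```

So it looks like Python, but the mental model (grids, program ids, masks, compile-time constants) is entirely its own, which is what raises the "why not just make it a bespoke language?" question.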
Is there a big reason why Triton is considered a "failure"?