However you want to dissect this specific issue, I'd generally consider this a positive step and nice to see it hit the front page.
https://www.reddit.com/r/ROCm/comments/1i5aatx/rocm_feedback...
I think AMD's offer was fair (full remote access to several test machines); then again, just giving tinycorp the boxes on its terms, no strings attached, as a kind of research grant would have earned AMD some goodwill with that corner of the community.
Either way both parties will continue making controversial decisions.
Another neocloud, one funded directly by AMD, also offered to buy him boxes. He refused. It had to come from AMD. That's absurd and extortionist.
Long thread here: https://x.com/HotAisle/status/1880467322848137295
It's like asking a tire manufacturer to give you a car for free.
Just uploaded some pictures of how complex these machines really are...
> Now, why don't they send me the two boxes? I understand when I was asking for firmware to be open sourced that that actually might be difficult for them, but the boxes are on eBay with a simple $$ cost. It was never about the boxes themselves, it was a test to see if software had any budget or power. And they failed super hard
If I ask a company for a $100,000 grant, and they're not willing, it doesn't seem like correct logic to assume that means they don't have the budget for it. Maybe they just don't want to spend $100,000 on me.
Why does this mean they don't have a budget or power?
Let's imagine he's indeed correct. He receives the hardware, gets hacking and solves all of AMD's problems, the stock surges, and tinygrad becomes a major deep learning framework.
That would be a colossal embarrassment for AMD's software department.
I'm on the wrong side of the Twitter wall to read the source, but that doesn't sound absurd. Extortionist, maybe. Hotz's major complaint (last time I checked, anyway) is pretty close to one I have: AMD appears to have little to no strategic interest in consumer-grade graphics cards having strong GPGPU support, which leads to random crashes from the kernel drivers and a certain "meh, whatever" attitude from AMD corporate when dealing with that.
I doubt any specific boxes or testing regime are his complaint; he'd be much more worried about whether AMD management have any interest in companies like his succeeding. Third parties providing some support doesn't sound like it would cut it. The process of being burned by AMD leaves one a little leery of any alleged support without some serious guarantees that more major changes are afoot in management's view.
This reads as incredibly entitled. AMD owes him nothing, especially if he's opposed to the leadership's vision[1] and being belligerent about it.
There are maybe one or two companies with enough cachet to demand management changes at a supplier like AMD, and they have market caps in the trillions.
1. Lisa Su hasn't been shy about AMD being all about partnering with large partners who can move volume. My interpretation of this is that AMD prefers dealing with Sony, Microsoft, hyperscalers, and HPC builders, then possibly tier-II OEMs. Small startups are probably much further down the line, close to consumers at the tail end of AMD's attention queue. I don't like it as a consumer, but it seems like a sound strategy, since the partners will shoulder most of the software effort, which is a weakness AMD has against Nvidia. They can focus on cranking out ok-to-great hardware at more-than-ok prices and build up a war chest for future investments; who knows when this hype bubble will burst and take VC dollars with it, or when someone invents an architecture that's less demanding on compute (if you're more optimistic).
I doubt AMD are going to listen to him. They're in a great spot and are probably going to tap into the market in a big way. But Hotz isn't crazy to test them in an odd way - although he'd probably be better off dropping AMD cards like most other people in his price range would.
He should have just read the Lisa Su interview from Q1 2024 where she laid out AMD's strategy without equivocating.
> ... although he'd probably be better off dropping AMD cards
I think this is what's best for everyone. Looking at his recent track record[1], he seems like a person who gets really excited kicking things off and riding the exponential growth phase, and then, when it flattens out into a sigmoid, dusts off his hands, declares his work done, and moves on to the next thing.
1. Hired by Elon to "fix" Twitter, CommaAI, and soon, Tiny
One might argue he's had a pattern for even longer. While he did do some early hypervisor glitching, even his PS3 root key release was basically just applying fail0verflow's ECDSA exploit (fail0verflow didn't release the keys specifically because they didn't want to get sued ... so that was a pretty dick move [1]).
For his projects, I think it's important to look at what he's done that's cool (eg, reversing 7900XTX [2], creating a user-space driver that completely bypasses AMD drivers for compute [3]) and separating it from his (super cringe) social media postings/self-hype.
Still, at the end of the day, here's hoping that someone at AMD realizes that having terrible consumer and workstation support will continue to be a huge albatross/handicap - it cuts them off from basically all academic/research development (almost every single ML library and technique you can name or see used in production is CUDA-first because of this) and from the non-hyperscaler enterprise market as well. Any dev can get a PO for a $500 Nvidia GPU (or has one on their workstation laptop already). What's the pathway for ROCm? (Honestly, if I were in charge, my #1 priority would be to make sure ROCm is installed and works with every single APU they ship, even the 2CU ones.)
[1] https://en.wikipedia.org/wiki/Sony_Computer_Entertainment_Am...
[2] https://github.com/tinygrad/7900xtx
[3] https://github.com/tinygrad/tinygrad/blob/master/docs/develo...
He was just at CES promoting Comma: https://youtu.be/GLGuA2qF3Kk
Meta and Microsoft are big enough they could just build their own TPUs with a stable software stack and cut off Nvidia and AMD at the same time.
From this perspective, AMD only ever makes sense as an "also ran company" for a few niche use cases.
A generation ago, everyone in sales and developer relations understood that "the customer is always right". Remember a sweaty dude on stage jumping about screaming "developers! developers! developers"? It was exhausting dealing with all the free software and hardware sent to developers, not to mention the endless free conferences for even the most backwater developer community. But that's an ethos for boomers, I guess.
On the one hand you call him "incredibly entitled", and on the other you talk about AMD's leadership vision. Your long closing paragraph shows that a developer's entitlement has nothing to do with anything and isn't relevant to the conversation (I can show you guys at OEMs who are incredibly arrogant, entitled, or outright a$$holes, but so what?). It's just an opinion based on your personal bias.
In reality, AMD simply doesn't care about small AI startups or developers as you've noted. They don't care about me wanting to run all my AI locally so that I can manage my dairy farm with a modest fleet of robots. If they cared, and they sent him MI300s immediately (or sent them to the other 8 startups that asked for them), you wouldn't be chastising him about being "incredibly entitled".
AMD has little interest in software support in general.
Their Adrenalin software is riddled with bugs that have been there for years.
They are serious; they just don't respond to his demands.
"That's worth 100M. And they won't even send us 2 ~100k boxes. In what world does that make sense, except in a world where decisions are made based on pride instead of ROI. Culture issue."
Take the free offer, prove everyone wrong and then start to tell us how great you are. https://x.com/HotAisle/status/1880507210217750550
He picked his problem better. The whole reason that tinygrad is, well, tiny, is that it limits the amount of overhead to onboard people and perform maintenance and rewrites. My strong impression is that the ROCm codebase is simply much too large for AMD's dev resources. You're trying to race NVidia on their turf with fewer resources. It's brave, but foolish.
I can see how Tinygrad could succeed. The story makes sense. AMD's doesn't, neither logically nor empirically. NVidia would have to seriously fumble.
Worked for AMD in the CPU market.
That said, I'm deeply worried about anyone who's based their company on AMD GPUs. The only reason they do well in HPC is that there's an army of dreadfully underpaid and over-performing grad students to pick up the slack from AMD. Trying to do that in a corporate environment is company suicide.
Sony Interactive and Microsoft XBox seem to be doing great without an army of underpaid students. AMD does great at the top and bottom: the corporates in the middle that are unwilling or unable to pay people to author/tweak their software for AMD GPUs will do better going with Nvidia, which has great OOTB software, and a premium to go with it.
I suppose if AMD had infinite resources, it'd fix this post-haste.
That's the entire point of AMD partnering with larger companies, rather than going all-in with consumers and small startups at this point in time.
> Chiplets are also enabled by TSMC technology, CoWoS.
Interesting, my mistake. Thank you for pointing that out!
This would end up costing maybe tens of millions at most, but the potential return is indeed measured in billions.
And yep, lots of people like geohot are (to put it mildly) eccentric. So deal with it. They are not merely your customers, they are your freaking sales people.
As it is, I work in a startup that does a bit of AI vision-related stuff. I'm not going to even touch AMD because I don't want to deal with divas on the AMD board in future. NVidia is more expensive right now, but they're far more predictable.
That doesn't help if the drivers are buggy. AMD needs to send hardware to their own driver developers.
Do you really want all AI hardware and software dominated by a monopoly? We're not looking to "beat" Nvidia, we are looking to offer a compelling alternative. MI300x is compelling. MI355x is even more compelling.
If there is another company out there making a compelling product, send them my way!
I'm willing to try AMD, and I even built an AMD-based machine to experiment with AI workflows. So far it has been failing miserably. I don't care that MI300X is compelling when I can't make samples work both on my desktop and on a cloud-based MI300X. I don't care about their academic collaborations, I'm not in the business of producing papers.
I'll just pay for H100 in the cloud to be sure that I will be able to run the resulting models on my 3090 locally and/or deploy to 4090 clusters.
If AMD shows some sense, commits to long-term support for their hardware with reasonable feature-parity across multiple generations, I'll reconsider them.
And AMD has a history of doing that! Their CPU division is _excellent_; they are renowned for long-term support for motherboard socket types. I remember being able to buy a motherboard and then not worrying about upgrading the CPU for the next 3-4 years.
Anush was actively looking for feedback on this on github today...
https://www.reddit.com/r/ROCm/comments/1i5aatx/rocm_feedback...
All AMD had to do was support open standards. They could have added OpenCL/SYCL/Vulkan Compute backends to Tensorflow and Pytorch and covered 80% of ML use cases. Instead of differentiating themselves with actual working software, they decided to become an inferior copy of NVIDIA.
I recently switched from Tensorflow to Tinygrad for personal projects and haven't looked back. The performance is similar to Tensorflow with JIT [0]. The difference is that instead of spending 5 hours fixing things when NVIDIA's proprietary kernel modules update or I need a new box, it actually Just Works when I do "pip install tinygrad".
0: https://cprimozic.net/notes/posts/machine-learning-benchmark...
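For what it's worth, a minimal sketch of what that workflow looks like (my own illustration, not taken from the benchmark link above; it assumes a recent tinygrad release that exports Tensor at the package root):

    # tinygrad picks its backend (CUDA, HIP/ROCm, Metal, plain CPU, ...) at runtime,
    # so the same script runs after nothing more than `pip install tinygrad`.
    from tinygrad import Tensor  # older releases: from tinygrad.tensor import Tensor

    x = Tensor.randn(256, 784)   # fake batch of inputs
    w = Tensor.randn(784, 10)    # fake weight matrix
    y = (x @ w).relu()           # lazily-built graph, compiled for whatever backend is found
    print(y.numpy().shape)       # (256, 10) -- forces evaluation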
So it is all shit, but tinygrad saves the day?
I don't know of any other autograd libraries with a non-CUDA backend, but I'd be interested to learn about them.
> It most definitely is about “beating” NVIDIA.
Hard disagree, but we are just going to have to agree to disagree on that.
However it would also raise future revenue, which should be what's reflected by the market.
So it would still be something that's good for the company, but not nearly 100B good.
And how's that been going? The AMD stock price compared to NVidia seems to speak volumes about the efficacy of these projects.
IREE has been around for 5 years, without producing anything overtly practical. They seem to be focused more on academic jobs and citations. It's also focused on the general case of a compiler for "all" AI-type tasks, supporting everything from WASM to CUDA.
OpenXLA seems to be a bit more practical, but I spent the last 2 hours trying to make it work on my AMD card (Radeon Pro W7900) and failing.
I personally don't like Tinygrad's approach of doing their own thing rather than integrating into PyTorch/JAX/..., but it at least is _practical_ with a reasonable end-goal. Is it going to be successful? Who knows. But it's more practical than anything AMD has done within the recent 5 years.
Those academic publications are a sign that the people involved actually know what they’re doing, and are making sure their work holds up to scrutiny.
I've been hearing about MLIR and OpenXLA for years through Tensorflow, but I've never seen an actual application using them. What out there makes use of them? I'd originally hoped it'd allow Tensorflow to support alternate backends, but that doesn't seem to be the case.
I don’t really think TinyCorp has anything to offer AMD.
I personally couldn't think of a better reason to never buy AMD GPUs ever again by the way.
There are 1000 reasons why your one GPU could have crashed; what does it say in the logs before it crashed?
It's also a strange value proposition. If I'm a programmer in some super computer facility and my boss has bought a new CDNA based computer, fine, I'll write AMD specific code for it. Otherwise why should I? If I want to write proprietary GPU code I'll probably use the de facto industry standard from the industry giant and pick CUDA.
AMD could be collaborating with Intel and a myriad of other companies and organizations and focus on a good, open, cross-platform GPU programming platform. I don't want to have to think about who makes my GPU! I recently switched from an Intel CPU to an AMD one, obviously with no problem. If I had to get new software written for AMD processors I would have just bought a new Intel, even though AMD is leading in performance at the moment. Even Windows on ARM seems to work ok, because most things aren't written in x86 assembly anymore.
Get behind SYCL, stop with the platform specific compilation nonsense, and start supporting consumer GPUs on Windows. If you provide a good base the rest of the software community will build on top. This should have been done ten years ago.
Honestly, the problem isn't just which devices, but even more so, this (from the page, not your comment):
> No guarantees of future support but we will try hard to add support.
During the Great GPU Shortage, I bought an AMD RX5xx card for ML work. It was explicitly advertised to work with ROCm. Within a couple of months, AMD dropped ROCm support. EOLing an actively-sold product so it could no longer be used for an advertised purpose, within the warranty period, was, if I understand consumer protection laws in my state correctly, fraud. There was no support from the card vendor (MSI). No support from AMD. No support from the reseller. Short of small claims, which was not worth it, there was no recourse.
This is on a long list of issues AMD needs to sort out to be a credible player in this space:
* Those are the kinds of experiences which cause people to drop a vendor and not look back. AMD needs to either support cards forever, or at the very least, have an advertised expiration date (like Chromebooks and Android phones).
* Broad support is helpful from a consumer perspective from the simply pragmatic point of view that only a tiny fraction of the population has the time to read online forums, footnotes, or fine print. People should be able to buy a card on Amazon, at Best Buy, and Microcenter, and expect things to Just Work.
* Being able to plan is essential for enterprise use. I can't build a system around AMD if AMD might stop supporting their platform on 0 days notice, and the next day, there might be a security exploit which requires a version bump.
I'm hoping Intel gets their act together here, since NVidia needs a credible competitor. I've given up on AMD.
Though AMD doesn't have the same kind of "virtual ISA" as PTX right now, there are increasing levels of such abstraction available in compiler flows with MLIR / Linalg, etc. Those are higher level and can be compiled/JITted in real time to obviate the need for a low-level virtual ISA.
All because they went with a boneheaded decision to require per-device code compilation (gfx1030, gfx1031...) instead of compiling to an intermediate representation like CUDA's PTX. Doubly boneheaded considering the graphics API they helped create, Vulkan, literally does that via SPIR-V!
The author of the issue comments that they'll eventually support all cards. What he is really asking is which cards people want them to prioritize, not just support.
Edit:
> No guarantees of future support but we will try hard to add support.
> No guarantees of future support but we will try hard to add support.
AMD reps told me exactly the same thing years ago about how they'd love to support all cards, back when RDNA2 had just launched. Fast forward, and only the W6800 is properly supported from that gen. The last time I tried, it had tons of kernel bugs that caused hard freezes outside the most basic cases.
You need to come out and say that you will support all cards, no ifs or buts, by a hard deadline.
Either the management at AMD is not smart enough to understand that without the computing software side they will always be a distant number 2 to NVIDIA, or the management at AMD considers it hopeless to ever be able to create something as good as CUDA because they don’t have and can’t hire smart enough people to write the software.
Really, it’s just baffling why they continue on this path to irrelevance. Give it a few years and even Intel will get ahead of them on the GPU side.
Which is exactly what NVIDIA seems to be doing.
AMD's ROCm software group seems far behind, is probably understaffed, and probably is paid a fraction of what NVIDIA pays its CUDA software groups.
AMD also has to catch up with NVlink and Spectrum-X (and/or InfiniBand.)
AMD's main leverage point is its CPUs, and its raw GPU hardware isn't bad, but there is a long way to go in terms of GPU software ecosystem and interconnect.
They had the exact same kind of support issues back in the OpenCL days, when they didn't manage to provide cross-platform, cross-card support for the same versions of the platform.
I have never been able to reconcile it with their turnaround and newfound competence on the CPU side.
i wonder if you've considered the possibility that there's some component/dimension of this that you're simply unaware of? that it's not as straightforward as whatever reductive mental model you have? is that even like within the universe of possibilities?
By the time a (consumer) AMD device is supported by ROCm, it'll only have a few years of ROCm support left before support is removed. The lifespan of ROCm support for AMD cards is very short. You end up having to use Vulkan, which is not optimized, of course, and a bit slower. I once bought an AMD GPU 2 years after release, and 1 year after I bought it ROCm support was dropped.
Note that these are not the packages distributed by AMD. They are the packages in the OS repositories. Not all the ROCm packages are there, but most of them are. The biggest downside is that some of them are a little old and don't have all the latest performance optimizations for RDNA 3.
Those operating systems will be around for the next decade, so that should at least provide one option for users of older hardware.
For example, my 780M gets 1-2 inferences from llama.cpp before dropping off the bus due to a segfault in the driver. It's a bad enough lockup that Linux can't cleanly shut down and will hang until hard rebooted.
I have dozens of different AMD GPUs and I personally host most of the Debian ROCm Team's continuous integration servers. Over the past year, I have worked together with other members of the Debian project to ensure that every potentially affected ROCm library is tested on every discrete consumer AMD GPU architecture since Vega whenever a new version of a package is uploaded to Debian.
FWIW, Framework Computers donated a few laptops to Debian last year, which I plan to use to enable the 780m too. I just haven't had the time yet. Fedora has some patches that add support for that architecture.
Support is coming in three months!
To
This card is ancient and will no longer be developed for. Buy our brand new card released in three months!
Every damned time.
With situations like this, it's not hard to see why Nvidia totally dominates the compute/AI market.
https://salsa.debian.org/rocm-team/rocm-hipamd/-/raw/d6d2014... (one patch of many)
Now, for some other GPU architectures, you're absolutely right. There are indeed important patches in Debian that enable its extra-wide hardware compatibility.
AMD Instinct is also more power efficient and has comparable (if not better) performance for the same (or less) price.
5 years is not very long tbh.
AMD is merging the architectures (UDNA) like nVidia, but it's not going to happen before 2026. (https://wccftech.com/amd-ryzen-zen-6-cpus-radeon-udna-gpus-u...)
https://github.com/superjamie/rocswap
I ran this on a 5600 XT; I just recently switched to nVidia.
It is clear that AMD's approach isn't working and they need to change their balance.
AMD are smart, and they solve big problems in ways that are baffling to many. They're very sensitive to moats and position themselves with products or frameworks to drain them.
I consider their primary product "engineering competence as a service", but when no one external picks up the reins, they don't try very hard to play market maker. I remember when Intel's R&D budget was more than AMD's market cap; they know how to run lean and are effective while doing it.
The reality here is that people don't have grievances with CUDA and Nvidia aren't doing anything egregious with it. But whether that's due to ROCm's existence... we can only speculate.
Correct. Lots of people also developed specifically for Internet Explorer too.
They are a monopoly and if that is important to you, then you'll want alternative solutions to avoid putting all your eggs in one basket.
People have short-term memory loss and forget that just a few months ago H100s were impossible to get and the price skyrocketed. Given the "insane demand" for Nvidia compute (and compute in general), these sorts of supply/demand issues will be ongoing indefinitely. How many times will people need to get burned until they start to seek alternatives? Hard to say...
Roadmap: 3 to 5 years.
https://www.techpowerup.com/324171/amd-is-becoming-a-softwar...
(Okay, maybe their super high end unobtanium-level GPUs are better hardware-wise. Don't know, don't care about enterprise-only hardware that is unbuyable by mere mortals.)
The fact is, support still isn't there. They've had 2 years since Stable Diffusion to get a serious team up and shipping, and they still don't have enough resources pointed at this to avoid having to ask what should be prioritized.
The only way to fix their culture/priorities is to stop buying their cards.
But that's why my business exists... https://news.ycombinator.com/item?id=42759191
Compare this to Nvidia, where I just imported the Go nvml library and it built the cgo code and automatically links to nvidia-ml.so at runtime.
- E-SMI in-band library (https://github.com/amd/esmi_ib_library)
- ROCm SMI library (https://github.com/ROCm/rocm_smi_lib)
- AMDSMI library (https://github.com/ROCm/amdsmi)
- goamdsmi_shim library (https://github.com/amd/goamdsmi/goamdsmi_shim)
First of all, this link is dead: https://github.com/amd/goamdsmi/goamdsmi_shim
Second: these dependencies should all be packaged into deb/rpm.
Third: there should be a goamdsmi package with a proper dependency tree. I should be able to do 'apt-get install goamdsmi' and it should install everything I need. This is how it works with go-nvml.
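To make the contrast concrete, here is the same "one package, vendor library found at runtime" pattern in Python via the NVML bindings (nvidia-ml-py); this is my own illustration, not the Go library the comment is about:

    # pip install nvidia-ml-py -- the module loads libnvidia-ml.so at runtime,
    # no separate SDK packages or shim libraries to assemble by hand.
    import pynvml

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(i, name, mem.used, "/", mem.total)
    pynvml.nvmlShutdown()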
Windows support is also bad, but it covers significantly more than one GPU.
The GitHub discussion page in the title lists RX 6800 (and a bunch of RX 7xxx GPUs) as supported, and some lower-end RX 6xxx ones as supported for runtime. The same comment also links to a page on the AMD website for a "compatibility matrix" [1].
That page only shows RX 7900 variants as supported on the consumer Radeon tab. On the workstation side, Radeon Pro W6800 and some W7xxx cards are listed as supported. It also suggests to see the "Use ROCm on Radeon GPU documentation" page [2] if using ROCm on Radeon or Radeon Pro cards.
That link leads to a page for "compatibility matrices" -- again. If you click the link for Linux compatibility, you get a page on "Linux support matrices by ROCm version" [3].
That "by ROCm version" page literally only has a subsection for ROCm 6.2.3. It only lists RX 7900 and Pro W7xxx cards as supported. No mention of W6800.
(The page does have an unintuitively placed "Version List" link through which you can find docs for ROCm 5.7 [4]. Those older docs are no more useful than the 6.2.3 ones.)
Is RX 6800 supported? Or W6800? Even the amd.com pages seem to contradict each other on the latter.
Maybe the pages on the AMD site only list official production support or something. In any case it's confusing as hell.
Nothing against the GitHub page author who at least seems to try and be clear but the official documentation leaves a lot to be desired.
[1] https://rocm.docs.amd.com/projects/install-on-linux/en/lates...
[2] https://rocm.docs.amd.com/projects/radeon/en/latest/docs/com...
[3] https://rocm.docs.amd.com/projects/radeon/en/latest/docs/com...
[4] https://rocm.docs.amd.com/projects/radeon/en/docs-5.7.0/docs...
Exactly.
I have a 6700 XT with 12 gig ram and a 5700 with 8 gig ram.
If I ctrl+F for either of those numbers on the GH issue, I get one hit. For the 6700, it's a single row that has a green check for "runtime" and a red X for "HIP SDK". For the 5700 card, it's somebody in the peanut gallery saying "don't forget about us!".
HIP is the C++ "flavor" that can compile down to work on AMD _and_ Nvidia GPUs. If the 6700 has support for the "runtime" but not HIP ... what does that even mean for me?
And as you pointed out, the 6800 series card has green checks for both so that means it's fully supported? But ... it's not listed on AMD's site?!
Bad docs are how you cement a reputation of "just buy nvidia and install their latest drivers and it'll be fine".
Having said that, on the weekend I set up ROCm on Linux on my 6800XT and it seems to work just fine.
Just weird the official thing doesn't work.
Please read the link before commenting, in the future. We do that here. This info is in an early comment by an AMD employee.
I read the link and I upvoted the "just support all GPUs you recently produced" comment.
I don't think prioritization is the solution to bad software support. Prioritization just creates even more discrimination among different GPUs and different customers.
You can say whatever you want, and downvote whatever you want. However, that doesn't solve the real problem.
I use a fork of the stable diffusion webui [0] which, for me, handled memory better. Setup was relatively easy: install the pytorch packages from the ROCm repo and it worked.
[0]: https://github.com/lllyasviel/stable-diffusion-webui-forge
What are the chances for AMD to consider alternatives:
- adopt oneAPI and try to fight Nvidia together with Intel
- Vulkan, and implement a PyTorch backend on it
- SYCL
And they drop support too quickly too. The Radeon Pro VII is already out of support. It's barely 5 years since release.
This way it will never be a counterpart to CUDA.
Furthermore, don't people use PyTorch (and other libraries? I'm not really clear on what ML tooling is like; it feels like there are hundreds of frameworks and I haven't seen any simplified list explaining the differences. I would love a TL;DR for this) and not ROCm/CUDA directly anyway? So the main draw can't be ergonomics, at least.
however separately, installing drivers and the correct CUDA/CuDNN libraries is the responsibility of the user. this is sometimes slightly finicky.
with ROCm, the problem is that 1) PyTorch/Jax don't support it very well, for whatever reason which may be partly to do with the quality of ROCm frustrating PyTorch/Jax devs, 2) installing drivers and libraries is a nightmare. it's all poorly documented and constantly broken. 3) hardware support is very spotty and confusing.
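That framing matches how most application code is actually written: it targets PyTorch, not CUDA or ROCm directly. A minimal sketch (assuming a working ROCm build of PyTorch, which exposes its HIP backend through the torch.cuda namespace):

    import torch

    print(torch.version.cuda, torch.version.hip)   # one of these will be None
    device = "cuda" if torch.cuda.is_available() else "cpu"

    x = torch.randn(1024, 1024, device=device)
    w = torch.randn(1024, 1024, device=device)
    y = (x @ w).relu()   # dispatched to cuBLAS on NVIDIA, rocBLAS/hipBLAS on AMD
    print(y.device, y.shape)

The catch, as the comment says, is everything underneath: getting the drivers and a ROCm-enabled wheel installed and keeping them working.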
Why do they have ROCm/CUDA backends in the first place though? Why not just Vulkan?
My own experience is that half-assed knowledge of C/C++, and a basic idea of how GPUs are architected, is enough to write a decent custom CUDA kernel. It's not that hard to do. No idea how I would get started with Vulkan, but I assume it would require a lot more ceremony, and that writing compute shaders is less intuitive.
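As a rough illustration of how small a custom elementwise kernel is (shown here through Numba's CUDA JIT in Python rather than raw C/C++ CUDA, my substitution to keep the examples in one language):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_kernel(x, y, out):
        i = cuda.grid(1)          # global thread index
        if i < x.size:            # guard the ragged last block
            out[i] = x[i] + y[i]

    n = 1 << 20
    x = np.ones(n, dtype=np.float32)
    y = np.full(n, 2.0, dtype=np.float32)
    out = np.empty_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    add_kernel[blocks, threads](x, y, out)   # Numba copies the arrays to/from the GPU
    print(out[:4])                           # [3. 3. 3. 3.]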
there is also definitely a "worse is better" effect in this area. there are some big projects that tried to be super general and cover all use cases and hardware. but a time-crunched PhD student or IC just needs something they can use now. (even Tensorflow, which was relatively popular compared to some other projects, fell victim to this.)
George Hotz seems like a weird guy in some respects, but he's 100% right that in ML it is hard enough to get anything working at all under perfect conditions, you don't need fighting with libraries and build tools on top of that, or the mental overhead of learning how to use this beautiful general API that supports 47 platforms you don't care about.
except also "worse is better is better" -- e.g. because they were willing to make breaking changes and sacrifice some generality, Jax was able to build something really cool and innovative.
Examples of rendering solutions using CUDA,
https://www.nvidia.com/en-us/design-visualization/solutions/...
https://home.otoy.com/render/octane-render/
It is definitely ergonomics and tooling.
Cuda the ecosystem is a massive pile of libraries for lots of different domains written to make it easier to use GPUs to do useful work. This is perhaps something of a judgement on how easy it is to write efficient programs using cuda.
ROCm contains a language called HIP which behaves pretty similarly to Cuda. OpenCL is the same sort of thing as well. It also contains a lot of library code, in this case because people using Cuda use those libraries and don't want to reimplement them. That's a bit of a challenge because nvidia spent 20 years writing these libraries and is still writing more, yet amd is expected to produce the same set in an order of magnitude less time.
If you want to use a GPU to do maths, you don't actually need any of this stuff. You need the GPU, something to feed it data (e.g. a linux host) and some assembly. Or LLVM IR / freestanding c++ if you prefer. This whole cuda / rocm thing really is intended to make them easier to program.
But this APU hack might work:
https://blog.machinezoo.com/Running_Ollama_on_AMD_iGPU https://github.com/ollama/ollama/pull/6282
Linux since 6.1 made some changes to allocate memory to the GPU from userspace, but it seems the GTT method has degraded performance even more.