They have had about 15 years to move beyond C99: stone-age workflows for compiling GLSL and C99 through their drivers, no library ecosystem, and printf debugging.
Eventually some of the issues got fixed, after they started seeing that only hardliners would put up with such a development experience, and by then it was too late.
oneAPI builds on top of SYCL and is basically Intel's CUDA. SYCL is already the second attempt to bring C++ to OpenCL, after the effort during OpenCL 2.x that worked out so well that OpenCL 3.0 is basically a reboot back to OpenCL 1.0.
Also, even SYCL only got a proper kick-off after Codeplay came up with its implementation; nowadays, having been acquired by Intel, they sell oneAPI support and tooling.
We don't have to wait for individual companies or foundations to fix ecosystem problems; only the means of coordination are needed. https://prizeforge.com isn't there yet, but it is already capable of bootstrapping its own development. Matching funds, joining the team, or contributing to MuTate will all help the ball pick up speed.
Geohot has been working on this for about a year, and at every roadblock he's encountered he has had to damn near pester Lisa Su to get drivers fixed. If you want the CUDA replacement that would work on AMD, you need to wait on AMD. If there is a bug in the AMD microcode, you are effectively "stopped by AMD".
The Tile dialect is pretty much independent of the Nvidia ecosystem, so all it takes is one good set of MLIR transform passes to lift anything in the CUDA stack that compiles to Tile out of the Nvidia ecosystem prison.
So if anything this is actually a massive opportunity to escape vendor lock-in, if it catches on in the CUDA ecosystem.
Google leading XLA & IREE, with awesome intermediate representations used by lots of hardware platforms, backing really excellent JAX & PyTorch implementations, and offering layout & optimization tools folks can share: they really built an amazing community.
There's still so much room for planning/scheduling, and so much hardware we have yet to target. RISC-V has really interesting vector instructions, for example, and it seems like there's so much exploration left to do to leverage them better.
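For a concrete sense of those intermediate representations: a minimal sketch, assuming a recent JAX install, that dumps the StableHLO a jitted function is lowered to before XLA compiles it.

    import jax
    import jax.numpy as jnp

    def f(x):
        return jnp.tanh(x) @ x.T

    # jax.jit traces f and lowers it to StableHLO, the IR that XLA
    # (and projects like IREE) consume; as_text() prints the IR
    # without running the backend compiler.
    print(jax.jit(f).lower(jnp.ones((4, 4))).as_text())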
Nvidia has partners everywhere now. NVLink is used by Intel, AWS Trainium, and others. And yesterday, the exclusive license that Nvidia paid Groq for?! Seeing how and when CUDA Tiles emerges will be interesting. Moving from fabric partnerships, up, up, up the stack.
Ah, and Nsight also supports debugging Python CUDA Tiles.
https://developer.nvidia.com/blog/simplify-gpu-programming-w...
This is nicely illustrated by this recent article:
Non-exclusive license, actually.
IREE hasn't been at Google for over two years.
I'm tired of people shilling things they don't understand.
Second-rate libraries like OpenCL had industry buy-in because they were open. They went through standards committees and cooperated with the rest of the industry (even Nvidia) to hear out everyone's needs. Lattner gave up on appealing to that crowd the moment he told Khronos to pound sand. Nobody should be wondering why Apple or Nvidia won't touch Mojo with a thirty-nine-and-a-half-foot pole.
CUDA Tile was designed exactly to give Python parity in writing CUDA kernels, acknowledging the relevance of Python, while offering a path where researchers don't need to mess with C++.
It was announced at this year's GTC.
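To make the Python-parity point concrete: the Tile API itself is new, so here is a minimal sketch of the pre-Tile Python kernel style it is meant to improve on, using Numba's cuda.jit (assumes numba and a CUDA-capable GPU; this is not the CUDA Tile API).

    from numba import cuda
    import numpy as np

    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)              # global thread index
        if i < out.size:
            out[i] = a * x[i] + y[i]

    x = np.arange(1 << 20, dtype=np.float32)
    y = np.ones_like(x)
    out = np.empty_like(x)
    threads = 256
    blocks = (x.size + threads - 1) // threads
    # Per-thread indexing and launch geometry are exactly the
    # bookkeeping a tile-level model abstracts away.
    saxpy[blocks, threads](2.0, x, y, out)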
Nvidia has no reason to use Mojo.
Julia and Python GPU JITs work great on Windows, and many people only get Windows systems by default at work.
When is the Year of NPUs on Linux?
https://www.pcspecialist.de/kundenspezifische-laptops/nvidia...
Which, as usual, kind of work, but not really, on GNU/Linux.
1) Install Linux
2) Summon Chris Lattner to play you a sad song on the world's smallest violin in honor of the Windows devs that refuse to install WSL.
What about that outcome?
I lost count at five or six. Define your acronyms on first use, people.
How close was I?
Stop carrying water for poor documentation practice.
Just say to the AI, "Explain THIS".
> Just say to the AI, "Explain THIS".
Also HN: "Not like that"
However, MLIR is a highly specialized term. The problem with failing to define a term like that is that I don't know up front whether I'm the target audience for the article. I had to Google it, and when I did, all I found at first were yet more articles that failed to define it.
Wikipedia gets the job done, but these days, Wikipedia is often a long way down the Google search results list. I think they downranked it when they started force-feeding AI answers (which also didn't help).
Get better at computers and stop needing to be spoon-fed information, people!
I won't argue, but there is a middle ground between articles consisting of pure JAFAs and this:
> accommodate readers who won’t even type four letters into a search bar
I think it helps if acronyms are expanded at least once or in a footnote so that the potential new reader can follow along and does not need to guess what ACMV^ means.
^: Awesome Combobulating Method by VTimofeenko, patent pending.
Telling people who want to have that participation and discussion to “RTFM” is not a good response.
Often you'll come across the authors of these posts in the comments, and they can offer direct, first-person evidence about what we're talking about.
So please, when someone asks “what is that?” Don’t respond with “RTFM”.
HN already assumes a baseline of technical literacy. When something falls outside that baseline, the usual move is to ask for context or links, not to reframe personal unfamiliarity as an author failure.
So please, don’t normalize treating "I don’t know this yet" as a failure of the post.
If that is your answer, please just don’t comment.
When confusion gets framed as "this is substandard writing", it rewards showing up and performing a lack of context rather than engaging with the substance or asking clarifying questions. Over time that creates pressure to write to the lowest common denominator, instead of the audience the author is clearly aiming at.
HN already operates on an implicit baseline (CUDA, open source, LLVM, etc.) and mostly lets comments fill in gaps. That usually produces better discussions than treating every unfamiliar term as an author failure, especially when someone is just trying to share or explain something they care about.
So yeah, I am genuinely curious why you see personal unfamiliarity as something the entire discussion should reorganize itself around.
(Shrug) The fact is that all major style guides -- APA, MLA, AP, Chicago, probably some others -- call for potentially unfamiliar acronyms to be defined on first use, and it's common enough to do so. For some reason, though, essentially nobody who writes about this particular topic agrees with that.
Which is cool -- it's not my field, so I don't really GAF. I'm mostly just remarking on how unusually difficult it was to drill down on this particular term. I'll avoid derailing the topic further than I already have.