CubeCL: GPU Kernels in Rust for CUDA, ROCm, and WGPU
149 points | 11 hours ago | 7 comments | github.com | HN
rfoo
1 hour ago
I'd recommend having a "gemm with a twist" [0] example in the README.md instead of an element-wise example. Otherwise it's pretty hard to evaluate how useful this is for AI workloads.

[0] For example, a gemm where the lhs is in fp8 e4m3, the rhs is in bf16, accumulation is in fp32, and the output is written to bf16 after applying GELU.
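To make that concrete, here's a rough CPU reference of the requested math in plain Rust (no CubeCL API involved; the e4m3 decoding and the tanh-based GELU are just one reasonable spelling):

    // OCP fp8 E4M3: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits, no infinities.
    fn fp8_e4m3_to_f32(x: u8) -> f32 {
        let sign = if x & 0x80 != 0 { -1.0f32 } else { 1.0 };
        let exp = ((x >> 3) & 0x0F) as i32;
        let man = (x & 0x07) as f32;
        if exp == 0x0F && (x & 0x07) == 0x07 {
            return f32::NAN; // the only NaN encoding in e4m3
        }
        if exp == 0 {
            sign * (man / 8.0) * 2f32.powi(-6) // subnormal
        } else {
            sign * (1.0 + man / 8.0) * 2f32.powi(exp - 7) // normal
        }
    }

    // bf16 is the upper 16 bits of an f32.
    fn bf16_to_f32(x: u16) -> f32 { f32::from_bits((x as u32) << 16) }
    fn f32_to_bf16(x: f32) -> u16 { (x.to_bits() >> 16) as u16 } // truncation; a real kernel would round

    // tanh approximation of GELU.
    fn gelu(x: f32) -> f32 {
        0.5 * x * (1.0 + ((2.0 / std::f32::consts::PI).sqrt() * (x + 0.044715 * x * x * x)).tanh())
    }

    // C[i][j] = gelu(sum_k A[i][k] * B[k][j]); A is fp8 e4m3, B is bf16,
    // accumulation in f32, C stored as bf16. Row-major throughout.
    fn gemm_fp8_bf16_gelu(a: &[u8], b: &[u16], c: &mut [u16], m: usize, k: usize, n: usize) {
        for i in 0..m {
            for j in 0..n {
                let mut acc = 0.0f32;
                for p in 0..k {
                    acc += fp8_e4m3_to_f32(a[i * k + p]) * bf16_to_f32(b[p * n + j]);
                }
                c[i * n + j] = f32_to_bf16(gelu(acc));
            }
        }
    }

A README example showing how close a CubeCL kernel can get to this while still using the tensor cores would say a lot more than an element-wise op.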

ashvardanian
37 minutes ago
Agreed! I was looking through the summation example <https://github.com/tracel-ai/cubecl/blob/main/examples/sum_t...> and it seems like the primary focus is on more traditional, pre-2018-style GPU programming, without explicit warp-level operations, asynchrony, atomics, barriers, or the many tensor-core operations.

The project feels very nice, and it would be great to have more notes in the README on the excluded functionality, to better scope its applicability in more advanced GPGPU scenarios.

kookamamie
2 hours ago
This reminds me of Halide (https://halide-lang.org/).

In Halide the concept was great, yet the hard problems of kernel development just moved over to the "scheduling" side, i.e. determining the tiling/vectorization/parallelization for the kernel runs.

the__alchemist
8 hours ago
Love it. I've been using cudarc lately; would love to try this since it looks like it can share data structures between host and device (?). I infer that this is a higher-level abstraction.
gitroom
3 hours ago
Gotta say, the constant dance between all these GPU frameworks kinda wears me out sometimes - always chasing that better build, you know?
zekrioca
10 hours ago
Very interesting project! I am wondering how it compares against OpenCL, which I think adopts the same fundamental idea (write once, run everywhere). Is the difference CubeCL's internal optimization for Rust that happens at compile time?
fc417fc802
5 hours ago
This appears to be single-source, which would make it similar to SYCL.

Given that it can target WGPU, I'm really wondering why OpenCL isn't included as a backend. One of my biggest complaints about GPGPU stuff is that so many of the solutions are GPU-only, and often only target the vendor compute APIs (CUDA, ROCm), which have much narrower ecosystem support (versus an older core Vulkan profile, for example).

It's desirable to be able to target CPU for compatibility, debugging, and also because it can be nice to have a single solution for parallelizing all your data heavy work. The latter reduces mental overhead and permits more code reuse.

zekrioca
2 hours ago
Makes sense. And indeed, having OpenCL as a backend would be a very interesting extension.
nathanielsimard
9 hours ago
A lot happens at compile time, and you can execute arbitrary code in your kernel that runs at compile time, similar to generics but with more flexibility. It's very natural to branch on a comptime config to select an algorithm.
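Roughly, a comptime branch looks something like this (a sketch; see the repo's examples for the exact attribute syntax, which may differ):

    use cubecl::prelude::*;

    // `double` is a comptime parameter: the branch is resolved while the kernel is
    // expanded, so only one version of the body is generated for a given launch.
    #[cube(launch)]
    fn copy_or_double<F: Float>(input: &Array<F>, output: &mut Array<F>, #[comptime] double: bool) {
        if ABSOLUTE_POS < input.len() {
            if double {
                output[ABSOLUTE_POS] = input[ABSOLUTE_POS] + input[ABSOLUTE_POS];
            } else {
                output[ABSOLUTE_POS] = input[ABSOLUTE_POS];
            }
        }
    }

Because the flag is comptime, swapping in a different algorithm is a host-side decision with no runtime branching cost in the generated kernel.
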
LegNeato
10 hours ago
See also this overview for how it compares to other projects in the Rust and GPU ecosystem: https://rust-gpu.github.io/ecosystem/
qskousen
9 hours ago
Surprised this doesn't mention candle: https://github.com/huggingface/candle
the__alchemist
8 hours ago
I don't think that fits; that's an ML framework. The others in the link are general GPU frameworks.
adastra22
7 hours ago
Where is the Metal love…
Almondsetat
5 hours ago
Why would anyone love something born out of pure spite for industry standards?
m-schuetz
49 minutes ago
To be fair, the industry standards all suck except for CUDA.
pjmlp
4 hours ago
For the same reason CUDA and ROCm are supported.
miohtama
1 hour ago
Apple is known to be not that great a contributor to open source, unlike Nvidia, AMD, and Intel.
pjmlp
35 minutes ago
You should check Linus' opinion on those.

Also, whom do you have to thank that LLVM exists in the first place and hasn't fizzled out as yet another university compiler research project?

syl20bnr
7 hours ago
It also compiles directly to MSL; that's just missing from the post title.
adastra22
5 hours ago
No, it compiles indirectly through wgpu, which means it doesn't have access to any Metal extensions not exposed by the wgpu interface.
moffkalast
6 hours ago
From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine.