CubeCL: GPU Kernels in Rust for CUDA, ROCm, and WGPU

182 points by ashvardanian 18 hours ago | 33 comments

rfoo 7 hours ago

I'd recommend having a "gemm with a twist" [0] example in the README.md instead of an element-wise example. Otherwise it's pretty hard to evaluate how helpful this is for AI.

[0] For example, a gemm where the lhs is in fp8 e4m3, the rhs is in bf16, accumulation is in fp32, and the output is cast to bf16 after applying GELU.
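
Concretely, a naive CPU reference in Rust (untested; f32 stands in for the narrow types) pins down the intended semantics:

    // out = bf16(GELU(fp32_acc(fp8_lhs * bf16_rhs))), row-major
    fn gemm_twist_ref(lhs: &[f32], rhs: &[f32], out: &mut [f32],
                      m: usize, n: usize, k: usize) {
        for i in 0..m {
            for j in 0..n {
                let mut acc = 0.0f32; // fp32 accumulation
                for p in 0..k {
                    // a real kernel would upcast fp8 e4m3 / bf16 on load
                    acc += lhs[i * k + p] * rhs[p * n + j];
                }
                // tanh-approximation GELU on the fp32 accumulator
                let g = 0.5 * acc
                    * (1.0 + (0.797_884_56 * (acc + 0.044_715 * acc * acc * acc)).tanh());
                out[i * n + j] = g; // a real kernel would downcast to bf16 here
            }
        }
    }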

ashvardanian 6 hours ago

Agreed! I was looking through the summation example <https://github.com/tracel-ai/cubecl/blob/main/examples/sum_t...> and it seems like the primary focus is on more traditional pre-2018 GPU programming, without explicit warp-level operations, asynchrony, atomics, barriers, or the many tensor-core operations.

The project feels very nice, and it would be great to have notes in the README on what functionality is excluded, to better scope its applicability to more advanced GPGPU scenarios.

nathanielsimard 5 hours ago

We support warp operations, barriers for CUDA, atomics for most backends, and tensor core instructions as well. It's just not well documented in the README!
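
For a rough taste of the plane (warp) API, a partial-sum kernel might look like this. It's a sketch based on the docs, assuming the cube dim equals the plane size; check names like plane_sum, ABSOLUTE_POS, UNIT_POS, and CUBE_POS against the current API:

    use cubecl::prelude::*;

    #[cube(launch)]
    fn plane_partial_sums<F: Float>(input: &Array<F>, output: &mut Array<F>) {
        // Each unit contributes one element; plane_sum combines them
        // across the plane (warp) without shared memory or barriers.
        let total = plane_sum(input[ABSOLUTE_POS]);
        // One unit per cube writes the partial result.
        if UNIT_POS == 0 {
            output[CUBE_POS] = total;
        }
    }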

0x7cfe 5 hours ago

CubeCL is the computation backend for Burn (https://burn.dev/), an ML framework by the same team that does all the tensor magic like autodiff, op fusion, and dynamic graphs.
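
For a taste of the autodiff side, loosely based on Burn's docs (the exact API moves between versions, so treat this as approximate):

    use burn::backend::{Autodiff, Wgpu};
    use burn::tensor::Tensor;

    // Wrapping a backend in Autodiff enables gradients; the kernels
    // underneath are compiled through CubeCL.
    type B = Autodiff<Wgpu>;

    fn main() {
        let device = Default::default();
        let x = Tensor::<B, 2>::from_floats([[1.0, 2.0], [3.0, 4.0]], &device)
            .require_grad();
        let y = x.clone().matmul(x.clone().transpose()).sum();
        let grads = y.backward();
        let grad_x = x.grad(&grads).unwrap();
        println!("{grad_x}");
    }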

wingertge 4 hours ago

We don't yet support newer types like fp8 and fp4; that's actually my next project. I'm the only contributor with the hardware to actually use the new types, so it's a bit bottlenecked on a single person right now. But yes, the example is rather simplistic; I should probably work on that once I'm done updating the feature set for Blackwell.

nathanielsimard 5 hours ago

One of the main authors here. The README isn't really up to date. We have our own gemm implementation built on CubeCL. It's still moving a lot, but we support tensor cores, use warp operations (called Plane Operations in CubeCL), and we even added TMA instructions for CUDA.

kookamamie 9 hours ago

This reminds me of Halide (https://halide-lang.org/).

In Halide, the concept was great, yet the hard problems of kernel development were moved to the "scheduling" side, i.e. determining tiling/vectorization/parallelization for the kernel runs.

the__alchemist 14 hours ago

Love it. I've been using cudarc lately; would love to try this since it looks like it can share data structures between host and device (?). I infer that this is a higher-level abstraction.
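
For reference, the pitch is that a plain Rust function annotated with #[cube] is JIT-compiled for the GPU, so host and kernel code share the same language and types. A sketch adapted from the README's style (signatures may have drifted):

    use cubecl::prelude::*;

    // The body is ordinary Rust; the macro compiles it at runtime
    // for whichever backend the kernel is launched on.
    #[cube(launch)]
    fn scale<F: Float>(input: &Array<F>, output: &mut Array<F>, factor: F) {
        if ABSOLUTE_POS < input.len() {
            output[ABSOLUTE_POS] = input[ABSOLUTE_POS] * factor;
        }
    }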

gitroom 9 hours ago

Gotta say, the constant dance between all these GPU frameworks kinda wears me out sometimes - always chasing that better build, you know?

nathanielsimard 5 hours ago

The need to build CubeCL came from the Burn deep learning framework (https://github.com/tracel-ai/burn), where we want to build algorithms as easily as in CUDA with a real programming language, while also being able to integrate those algorithms into a compiler at runtime to fuse dynamic graphs.

Since we don't want to rewrite everything multiple times, it also has to be multi-platform and optimal, so the feature set must be per-device, not per-language. I'm not aware of another tool that does this, especially in Rust (which Burn is written in).
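
Concretely, launch code is generic over the runtime, so the same kernel can drive CUDA, ROCm, or WGPU. A sketch along the lines of the README (paths and feature names may differ):

    use cubecl::prelude::*;

    // Generic over the runtime: the same code targets every backend.
    fn run<R: Runtime>(device: &R::Device) {
        let client = R::client(device);
        // ... allocate buffers via `client`, launch #[cube] kernels,
        // read results back ...
    }

    fn main() {
        #[cfg(feature = "cuda")]
        run::<cubecl::cuda::CudaRuntime>(&Default::default());
        #[cfg(feature = "wgpu")]
        run::<cubecl::wgpu::WgpuRuntime>(&Default::default());
    }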