I'd recommend having a "gemm with a twist" [0] example in the README.md instead of having an element-wise example. It's pretty hard to evaluate how helpful this is for AI otherwise.
[0] For example, gemm but the lhs is in fp8 e4m3 and rhs is in bf16 and we want fp32 accumulation, output to bf16 after applying GELU.
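For reference, here's a rough sketch in plain Rust of what such an example would compute, as a naive CPU loop (f32 stand-ins for the fp8 e4m3 / bf16 operands, GELU via the usual tanh approximation); the interesting part of a real CubeCL version would be the tiling, tensor-core usage, and type conversions around this loop:

    // Naive CPU reference for the "gemm with a twist" above:
    // C = gelu(A * B) with fp32 accumulation. On the GPU, A would be
    // fp8 e4m3 and B bf16; here both are f32 slices for clarity.
    fn gelu(x: f32) -> f32 {
        // Tanh approximation: 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
        0.5 * x * (1.0 + (0.797_884_6_f32 * (x + 0.044_715 * x * x * x)).tanh())
    }

    fn gemm_gelu_ref(a: &[f32], b: &[f32], c: &mut [f32], m: usize, n: usize, k: usize) {
        for i in 0..m {
            for j in 0..n {
                let mut acc = 0.0_f32; // fp32 accumulator
                for p in 0..k {
                    acc += a[i * k + p] * b[p * n + j];
                }
                // A real kernel would round this back down to bf16 here.
                c[i * n + j] = gelu(acc);
            }
        }
    }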
Agreed! I was looking through the summation example <https://github.com/tracel-ai/cubecl/blob/main/examples/sum_t...> and it seems like the primary focus is on more traditional, pre-2018-style GPU programming, without explicit warp-level operations, asynchrony, atomics, barriers, or the many tensor-core operations.
The project feels very nice and it would be great to have more notes in the README on the excluded functionality to better scope its applicability in more advanced GPGPU scenarios.
We support warp operations, barriers for CUDA, atomics for most backends, and tensor core instructions as well. It's just not well documented in the README!
CubeCL is the computation backend for Burn (https://burn.dev/), an ML framework by the same team that does all the tensor magic like autodiff, op fusion, and dynamic graphs.
We don't yet support newer types like fp8 and fp4; that's actually my next project. I'm the only contributor with the hardware to actually use the new types, so it's a bit bottlenecked on a single person right now. But yes, the example is rather simplistic; I should probably work on that sometime once I'm done updating the feature set for Blackwell.
One of the main authors here. The README isn't really up to date. We have our own gemm implementation based on CubeCL. It's still moving a lot, but we support tensor cores, use warp operations (Plane Operations in CubeCL), and we even added TMA instructions for CUDA.
In Halide, the concept was great, yet the problems in kernel development were moved to the side of "scheduling", i.e. determining tiling/vectorization/parallelization for the kernel runs.
Love it. I've been using cudarc lately; would love to try this since it looks like it can share data structures between host and device (?). I infer that this is a higher-level abstraction.
The need to build CubeCL came from the Burn deep learning framework (https://github.com/tracel-ai/burn), where we want to easily build algorithms like in CUDA with a real programming language, while also being able to integrate those algorithms inside a compiler at runtime to fuse dynamic graphs.
Since we don't want to rewrite everything multiple times, it also has to be multi-platform and optimal, so the feature set must be per-device, not per-language. I'm not aware of a tool that does that, especially in Rust (which Burn is written in).
Very interesting project! I am wondering how it compares to OpenCL, which I think adopts the same fundamental idea (write once, run everywhere)? Is it about CubeCL's internal optimization for Rust that happens at compile time?
This appears to be single source which would make it similar to SYCL.
Given that it can target WGPU, I'm really wondering why OpenCL isn't included as a backend. One of my biggest complaints about GPGPU stuff is that so many of the solutions are GPU-only, and often only target the vendor compute APIs (CUDA, ROCm), which have much narrower ecosystem support (versus an older core Vulkan profile, for example).
It's desirable to be able to target the CPU for compatibility and debugging, and also because it can be nice to have a single solution for parallelizing all your data-heavy work. The latter reduces mental overhead and permits more code reuse.
There's infrastructure in the SPIR-V compiler to be able to target both OpenCL and Vulkan, but we don't currently use it because OpenCL would require a new runtime, while Vulkan can simply use the existing wgpu runtime and pass raw SPIR-V shaders.
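For the curious, the passthrough path on the wgpu side looks roughly like this (names vary across wgpu versions; this assumes one that exposes create_shader_module_spirv behind the SPIRV_SHADER_PASSTHROUGH feature):

    use std::borrow::Cow;

    // Hands a pre-compiled SPIR-V module to wgpu, skipping naga's translation.
    // Unsafe because wgpu cannot validate the module itself.
    fn load_raw_spirv(device: &wgpu::Device, words: &[u32]) -> wgpu::ShaderModule {
        unsafe {
            device.create_shader_module_spirv(&wgpu::ShaderModuleDescriptorSpirV {
                label: Some("raw-spirv-kernel"),
                source: Cow::Borrowed(words),
            })
        }
    }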
One thing I've never investigated is how performant OpenCL actually is on the CPU. Do you happen to have any resources comparing it to a more native CPU implementation?
A lot of things happen at compile time, and you can also execute arbitrary code in your kernel at compile time, similar to generics but with more flexibility. It's very natural to branch on a comptime config to select an algorithm.
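As a rough analogy in plain Rust (const generics here, not CubeCL's actual comptime syntax), the idea is that the config is known when the kernel is compiled, so the branch is resolved then and the dead path is eliminated:

    // The algorithm choice is a compile-time parameter: each instantiation
    // gets the branch resolved at compile time, leaving a single code path.
    fn reduce<const TILED: bool>(data: &[f32]) -> f32 {
        if TILED {
            // stand-in for a tiled / plane-op variant
            data.chunks(4).map(|c| c.iter().sum::<f32>()).sum()
        } else {
            data.iter().sum()
        }
    }

    fn main() {
        let data = vec![1.0_f32; 16];
        println!("{} {}", reduce::<false>(&data), reduce::<true>(&data));
    }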
Wow, what are the downsides to this? It feels like it could be one of the biggest leaps in programming in a long time. Does it keep Rust's safety aspects? How does it compare with, say, OpenCL?
We have safe and unsafe versions for launching kernels, where the safe one ensures that a kernel won't corrupt data elsewhere (and therefore won't create memory errors or segfaults). But within a kernel, resources are mutable and shared between GPU cores, since that's how GPUs work.
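To illustrate the shape of that split with a toy sketch (hypothetical names, not CubeCL's actual launch API): the safe entry point validates the launch configuration against the buffers, then defers to an unchecked one:

    struct LaunchConfig { elems: usize }

    // Stand-in for enqueueing the kernel; a real backend would do no validation here.
    unsafe fn launch_unchecked(cfg: &LaunchConfig, input: &[f32], output: &mut [f32]) {
        for i in 0..cfg.elems {
            output[i] = input[i] * 2.0;
        }
    }

    // Checks bounds first, so it can never touch memory outside the buffers.
    fn launch(cfg: &LaunchConfig, input: &[f32], output: &mut [f32]) -> Result<(), String> {
        if input.len() < cfg.elems || output.len() < cfg.elems {
            return Err("buffers too small for launch config".into());
        }
        unsafe { launch_unchecked(cfg, input, output) };
        Ok(())
    }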
I wrote the MSL dialect for the CubeCL CPP compiler. Since the 0.5 release it compiles directly to MSL and supports simdgroup matrix functions, for instance. It does use wgpu for the runtime, but without naga, since we added MSL passthrough to wgpu just for this.
wgpu has some options to access backend-specific types and shader passthrough (i.e., you provide your own shader for a backend directly).
Generally wgpu is open to supporting any Metal extensions you need. There's usually an analogous extension in one of the other backends (e.g., Vulkan, DX12) anyway.
From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine.
Also, whom do you have to thank that LLVM exists in the first place and has not fizzled out as yet another university compiler research project?