[CUDA] Implement BlockMaskedMM #3299
Merged
zcbenz merged 9 commits into ml-explore:main on Mar 26, 2026
Conversation
Add CUDA implementation for block-masked matrix multiplication. The approach pre-masks input matrices with a simple CUDA kernel, calls cuBLAS GEMM, then applies the output mask.
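As a rough illustration of this three-step flow (pre-mask inputs, dense GEMM, post-mask output), here is a NumPy sketch. This is not the backend code: the function name `block_masked_mm_ref`, the argument names, and the `block` parameter are all assumptions for illustration, and NumPy's `@` stands in for the cuBLAS GEMM call.

```python
import numpy as np

def expand_mask(mask, block, shape):
    # Blow a per-block mask up to element granularity, trimming any
    # ragged edge so it matches the operand shape.
    full = np.repeat(np.repeat(mask, block, axis=0), block, axis=1)
    return full[: shape[0], : shape[1]]

def block_masked_mm_ref(a, b, block, mask_lhs, mask_rhs, mask_out):
    # Step 1: pre-mask both inputs (the "simple CUDA kernel" step).
    a = a * expand_mask(mask_lhs, block, a.shape)
    b = b * expand_mask(mask_rhs, block, b.shape)
    # Step 2: plain dense GEMM (cuBLAS in the real backend).
    out = a @ b
    # Step 3: apply the output block mask in place.
    out *= expand_mask(mask_out, block, out.shape)
    return out
```

Blocks zeroed by `mask_out` come out as all zeros, while fully unmasked regions match the plain dense product.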
Replace the two-pass approach (contiguous_copy_gpu + apply_block_mask) with a single copy_with_block_mask kernel that reads source data and applies the mask in one pass.
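The fusion idea can be modeled in plain Python (again a hypothetical sketch, not the CUDA kernel): rather than copying the source to a contiguous buffer and masking it in a second pass, each element is read once, its block's mask value is looked up, and the masked value is written in the same pass.

```python
import numpy as np

def copy_with_block_mask(src, mask, block):
    # Single pass: every element does one source read, one
    # block-mask lookup, and one write. In the real kernel each
    # loop iteration corresponds to a CUDA thread.
    out = np.empty_like(src)
    rows, cols = src.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = src[i, j] * mask[i // block, j // block]
    return out
```

The result is identical to the two-pass copy-then-mask version; the win on a GPU is halving the global-memory traffic for the copied operand.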
Force-pushed from 8956bdb to b258254
Pull request overview
This PR implements the mx.block_masked_mm primitive for the CUDA backend, enabling block/tile-masked matrix multiplication on NVIDIA GPUs and unskipping the corresponding CUDA tests.
Changes:
- Add a CUDA `BlockMaskedMM::eval_gpu` implementation wired into the CUDA matmul path.
- Introduce CUDA kernels/helpers to apply and fuse-copy block masks (`apply_block_mask`, `copy_with_block_mask`).
- Enable CUDA CI coverage for `test_block_masked_matmul` and add a Python benchmark script.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| python/tests/cuda_skip.py | Unskips the block-masked matmul test on CUDA. |
| mlx/backend/cuda/primitives.cpp | Removes the “no CUDA implementation” stub for BlockMaskedMM. |
| mlx/backend/cuda/matmul.cpp | Adds BlockMaskedMM::eval_gpu and integrates mask-copy/mask-apply around GEMM. |
| mlx/backend/cuda/gemms/block_mask.h | Declares CUDA block-mask helper APIs. |
| mlx/backend/cuda/gemms/block_mask.cu | Implements CUDA kernels for masked copy and in-place output masking. |
| mlx/backend/cuda/CMakeLists.txt | Adds gemms/block_mask.cu to the CUDA build. |
| benchmarks/python/block_masked_mm_bench.py | Adds a benchmark + optional correctness check vs naive expand+matmul. |
zcbenz reviewed Mar 25, 2026
zcbenz reviewed Mar 26, 2026
zcbenz approved these changes Mar 26, 2026
zcbenz (Collaborator) left a comment:
Looks good to me, thanks!
Proposed changes
Implement `mx.block_masked_mm` for the CUDA backend.

Performance

Compared against the naive MLX expand + matmul baseline:
- float32:
- float16:
Checklist

Put an `x` in the boxes that apply.
- Run `pre-commit run --all-files` to format my code / installed pre-commit prior to committing changes