FlashInfer-Bench Integration for vLLM #29695
Draft
+2,710
−17
## Purpose

Enable automated FlashInfer trace collection and optimized kernel deployment via `flashinfer-bench apply`. This allows users to:

## Changes
### 1. Full FlashInfer Backend (`VLLM_USE_FLASHINFER=1`)

Extended the FlashInfer integration beyond attention to all supported operators.
### 2. Test & Trace Scripts

- `tests/kernels/run_flashinfer_test.py` - E2E test for FlashInfer operators (`VLLM_USE_FLASHINFER=1`)
- `tests/kernels/generate_flashinfer_traces.py` - Trace generation for FlashInfer-Bench

### 3. Tested Models
### 4. AllReduce Fusion Fix

Fixed the `std::optional` → `cuda::std::optional` bug in FlashInfer's `trtllm_allreduce_fusion.cuh`. The fix is documented in `docs/source/design/flashinfer_integration_issues.md` with a patch that can be applied to the installed FlashInfer package.

## Usage
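As a hedged sketch of how the `VLLM_USE_FLASHINFER=1` switch from this PR could be exercised (the exact flag parsing shown here is an assumption, not confirmed by the diff):

```python
import os

# Assumption: vLLM reads VLLM_USE_FLASHINFER at import time and treats "1"
# as enabled, so the variable must be set before importing vllm.
os.environ["VLLM_USE_FLASHINFER"] = "1"

def flashinfer_enabled() -> bool:
    """Hypothetical helper mimicking a vLLM-style boolean env-var gate."""
    return os.environ.get("VLLM_USE_FLASHINFER", "0") == "1"

print(flashinfer_enabled())  # → True
```

With the flag set this way, the E2E script `tests/kernels/run_flashinfer_test.py` from this PR would exercise the FlashInfer operator paths.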
## TODOs

See `docs/source/design/flashinfer_integration_issues.md` for detailed issue analysis.

### FlashInfer Team

- Fix (patch available): `std::optional` → `cuda::std::optional` in `trtllm_allreduce_fusion.cuh`

### vLLM Integration
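The `std::optional` → `cuda::std::optional` substitution can be illustrated with a small sketch. This is not the actual patch (which lives in `docs/source/design/flashinfer_integration_issues.md`); the header line below is made up, and the snippet only demonstrates the textual rewrite:

```python
import re

# A made-up line standing in for device code in trtllm_allreduce_fusion.cuh.
# std::optional is not usable in CUDA device code; libcu++'s
# cuda::std::optional is the device-safe replacement.
line = "__device__ void fuse(std::optional<float> scale);"

# Rewrite std::optional -> cuda::std::optional, skipping already-qualified uses.
fixed = re.sub(r"(?<!cuda::)\bstd::optional", "cuda::std::optional", line)
print(fixed)  # → __device__ void fuse(cuda::std::optional<float> scale);
```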
## Test Plan

## Test Result
## Essential Elements of an Effective PR Description Checklist

- `supported_models.md` and `examples` for a new model.