Conversation

@achartier (Collaborator) commented Sep 30, 2025

Summary by CodeRabbit

  • New Features
    • Added a configuration option to disable deep GEMM in Qwen3 attention, giving users finer control over attention compute behavior.
    • The MoE variant now initializes attention with deep GEMM disabled by default for consistent behavior out of the box.
  • Chores
    • Updated initialization pathways to propagate the new configuration option without altering existing defaults for non-MoE setups.

Description

Disable DeepGEMM for Qwen3 MoE Attention layers. !8030 caused a regression because Qwen3 MoE models also use the Qwen3Attention class. Benchmarking shows DeepGEMM has slightly worse performance for these models, so it is now disabled for them.

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build, package, and sanity-check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
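For example, a couple of illustrative invocations composed only from the options documented above (the stage and GPU names are placeholders, not a recommendation):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast
/bot run --gpu-type "H100_PCIe" --test-backend "pytorch"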

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
@achartier (Collaborator, Author) commented:

/bot run

@achartier achartier requested a review from byshiue September 30, 2025 03:56

coderabbitai bot commented Sep 30, 2025

📝 Walkthrough

Walkthrough

Introduces a new boolean parameter disable_deep_gemm in Qwen3Attention, forwards it to the base class, and sets it to True when constructing attention within Qwen3MoEDecoderLayer. No other logic or behavior is modified.

Changes

Cohort: Qwen3 Attention deep GEMM flag integration
File(s): tensorrt_llm/_torch/models/modeling_qwen3.py, tensorrt_llm/_torch/models/modeling_qwen3_moe.py
Summary: Add a disable_deep_gemm parameter to Qwen3Attention.__init__ and propagate it to super().__init__; update the MoE decoder to instantiate Qwen3Attention with disable_deep_gemm=True.
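A minimal sketch of the shape of this change, with simplified constructors (the real TRT-LLM classes take many more parameters, and the base class here is a stand-in for the actual attention base):

class Attention:  # stand-in for the real base class
    def __init__(self, *, disable_deep_gemm: bool = False, **kwargs):
        # The base class records whether the DeepGEMM path may be used.
        self.disable_deep_gemm = disable_deep_gemm

class Qwen3Attention(Attention):
    def __init__(self, *, disable_deep_gemm: bool = False, **kwargs):
        # New flag with a False default, so non-MoE behavior is unchanged;
        # the flag is forwarded untouched to the base class.
        super().__init__(disable_deep_gemm=disable_deep_gemm, **kwargs)

class Qwen3MoEDecoderLayer:
    def __init__(self):
        # MoE models opt out of DeepGEMM explicitly (the perf regression fix).
        self.self_attn = Qwen3Attention(disable_deep_gemm=True)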

Sequence Diagram(s)

sequenceDiagram
    actor Dev as Init
    participant MoE as Qwen3MoEDecoderLayer
    participant Attn as Qwen3Attention
    participant Base as BaseAttention

    Dev->>MoE: __init__()
    MoE->>Attn: Qwen3Attention(..., disable_deep_gemm=True)
    Attn->>Base: super().__init__(..., disable_deep_gemm=True)
    Note right of Base: Base receives flag and configures internals

    rect rgba(230,240,255,0.5)
    Note over MoE,Attn: MoE path explicitly disables deep GEMM
    end

    Dev->>Attn: Qwen3Attention(..., disable_deep_gemm=False) 
    Attn->>Base: super().__init__(..., disable_deep_gemm=False)
    Note right of Base: Default behavior when not in MoE

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description Check (⚠️ Warning): The PR description includes the Description, PR Checklist, and GitHub Bot Help sections required by the repository template, but the Test Coverage section is empty and the "@coderabbitai summary" placeholder remains without an actual summary. Resolution: replace the placeholder with an actual summary of the PR and populate Test Coverage with the test cases that validate the disable_deep_gemm behavior.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title Check (✅ Passed): The title "[None][fix] Disable DeepGEMM for Qwen3 MoE Attention layers" follows the repository's prefix convention and accurately communicates the primary change of disabling DeepGEMM in Qwen3 MoE attention modules. It is clear, specific, and concise.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b0cb9ca and 4a62018.

📒 Files selected for processing (2)
  • tensorrt_llm/_torch/models/modeling_qwen3.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/models/modeling_qwen3.py
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
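
An illustrative, hypothetical snippet (not from this PR) showing several of these guidelines together: a module-namespace import, a Google-style docstring, snake_case naming, a minimal try body, and a specific exception:

from collections import abc  # keep the module namespace when importing

def count_items(values: abc.Iterable) -> int:
    """Count the items in an iterable.

    Args:
        values: Any iterable; generators are consumed.

    Returns:
        The number of items in the iterable.
    """
    try:
        return len(values)  # minimal try body: fast path for sized containers
    except TypeError:  # most specific exception for objects without len()
        return sum(1 for _ in values)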

Files:

  • tensorrt_llm/_torch/models/modeling_qwen3.py
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with the current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/models/modeling_qwen3.py
  • tensorrt_llm/_torch/models/modeling_qwen3_moe.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tensorrt_llm/_torch/models/modeling_qwen3.py (1)

37-37: LGTM! Parameter addition and forwarding are correct.

The disable_deep_gemm parameter is properly added with a sensible default of False, maintaining existing behavior for non-MoE Qwen3 models while allowing MoE models to disable DeepGEMM. The parameter is correctly forwarded to the base class.

Also applies to: 75-75

tensorrt_llm/_torch/models/modeling_qwen3_moe.py (2)

168-172: LGTM! DeepGEMM correctly disabled for MoE attention.

The change appropriately disables DeepGEMM for Qwen3 MoE attention layers by passing disable_deep_gemm=True, addressing the performance regression identified in the PR objectives.


168-172: No action needed: Qwen3Attention instantiations are correctly configured – it’s used only in modeling_qwen3.py (default disable_deep_gemm=False) and modeling_qwen3_moe.py (disable_deep_gemm=True); no other occurrences found.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🧪 Early access (Sonnet 4.5): enabled

We are currently testing the Sonnet 4.5 model, which is expected to improve code review quality. However, this model may lead to increased noise levels in the review comments. Please disable the early access features if the noise level causes any inconvenience.

Note:

  • Public repositories are always opted into early access features.
  • You can enable or disable early access features from the CodeRabbit UI or by updating the CodeRabbit configuration file.

Comment @coderabbitai help to get the list of available commands and usage tips.

@tensorrt-cicd (Collaborator) commented:

PR_Github #20326 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #20326 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15330 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.
