
Conversation

HuiGao-NV
Collaborator

@HuiGao-NV HuiGao-NV commented Sep 30, 2025

Summary by CodeRabbit

  • New Features
    • Added controls to prefer a CUDA memory pool during graph capture/replay, with graceful fallback when unavailable.
    • Introduced simple APIs to set/get a preferred memory pool and a context manager to apply it, improving memory efficiency and stability.
  • Refactor
    • Reorganized CUDA graph capture/replay flow to integrate memory pool handling without changing existing usage.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

@HuiGao-NV HuiGao-NV requested review from a team as code owners September 30, 2025 08:21
@HuiGao-NV
Collaborator Author

/bot run


coderabbitai bot commented Sep 30, 2025

📝 Walkthrough

Walkthrough

Introduces a memory pool preference mechanism and integrates it into CUDA graph execution and tensor allocation. Adds pool accessors, a context manager to set preferred pools, an accessor in CUDAGraphRunner, updates model_engine forward to capture/replay under preferred pool, and modifies buffer allocation to try the pool before falling back.
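As a concrete reference point, the sketch below restates the pool-preference API described above in standalone form. It mirrors the set_mem_pool / get_graph_pool / set_prefer_mem_pool helpers quoted later in the review comments; it is an illustration, not the exact contents of tensorrt_llm/_torch/utils.py.

import contextlib

# Module-level preferred pool; None means "no preference, use the default allocator".
_buffer_pool = None


def set_mem_pool(buffer_pool):
    """Record the preferred CUDA memory pool for subsequent buffer allocations."""
    global _buffer_pool
    _buffer_pool = buffer_pool


def get_graph_pool():
    """Return the currently preferred pool, or None if none has been set."""
    return _buffer_pool


@contextlib.contextmanager
def set_prefer_mem_pool(mem_pool):
    """Temporarily prefer mem_pool, restoring the previous preference on exit."""
    old_pool = get_graph_pool()
    set_mem_pool(mem_pool)
    try:
        yield
    finally:
        set_mem_pool(old_pool)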

Changes

Cohort / File(s) Summary
Memory pool API
tensorrt_llm/_torch/utils.py
Adds global pool management: set_mem_pool, get_graph_pool, and set_prefer_mem_pool context manager to temporarily prefer a given buffer pool.
CUDA graph runner accessor
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py
Adds CUDAGraphRunner.get_graph_pool() returning self.memory_pool; minor formatting only.
Model engine integration
tensorrt_llm/_torch/pyexecutor/model_engine.py
Uses set_prefer_mem_pool around CUDA graph capture/replay; refactors capture flow via closures (capture_forward_fn, capture_postprocess_fn); adjusts control flow for capture vs replay vs eager fallback.
Memory-buffer allocation path
tensorrt_llm/_torch/memory_buffer_utils.py
Attempts allocation within get_graph_pool context; on failure falls back to standard CUDA tensor allocation; logs pool allocation failure.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant ModelEngine
  participant CUDAGraphRunner as GraphRunner
  participant Utils as Utils (pool API)
  participant CUDA as CUDA Runtime

  User->>ModelEngine: forward(inputs)
  alt Using CUDA graph
    ModelEngine->>GraphRunner: need_capture?
    alt Capture required
      ModelEngine->>GraphRunner: get_graph_pool()
      GraphRunner-->>ModelEngine: memory_pool
      ModelEngine->>Utils: set_prefer_mem_pool(memory_pool) [enter]
      activate Utils
      note over ModelEngine,GraphRunner: Define capture_forward_fn & capture_postprocess_fn
      ModelEngine->>GraphRunner: register(capture_forward_fn, capture_postprocess_fn)
      ModelEngine->>GraphRunner: capture_and_replay(inputs)
      GraphRunner->>CUDA: capture graph & allocate via pool
      GraphRunner-->>ModelEngine: outputs
      Utils-->>ModelEngine: restore previous pool [exit]
      deactivate Utils
    else Replay
      ModelEngine->>GraphRunner: get_graph_pool()
      GraphRunner-->>ModelEngine: memory_pool
      ModelEngine->>Utils: set_prefer_mem_pool(memory_pool) [enter]
      activate Utils
      ModelEngine->>GraphRunner: replay(inputs)
      GraphRunner->>CUDA: launch graph
      GraphRunner-->>ModelEngine: outputs
      Utils-->>ModelEngine: restore previous pool [exit]
      deactivate Utils
    end
  else Eager path
    ModelEngine->>CUDA: execute forward eagerly
  end
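In code form, the capture/replay path above might look roughly like the following. This is a schematic only: forward_with_cuda_graph, engine, and inputs are hypothetical names, and the real signatures in model_engine.py and cuda_graph_runner.py differ.

def forward_with_cuda_graph(engine, graph_runner, inputs):
    # Schematic wrapper for the diagram above, not the actual model_engine code.
    pool = graph_runner.get_graph_pool()
    with set_prefer_mem_pool(pool):  # context manager from utils (see sketch above)
        if graph_runner.needs_capture(inputs):
            # First time at this batch shape: record the forward pass into a CUDA
            # graph whose workspace allocations come from the preferred pool.
            outputs = graph_runner.capture(lambda: engine.model(inputs))
        else:
            # Subsequent calls simply replay the captured graph.
            outputs = graph_runner.replay(inputs)
    return outputs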
sequenceDiagram
  autonumber
  participant AllocCaller as Buffer Utils
  participant Utils as Utils (pool API)
  participant CUDA as CUDA Runtime

  AllocCaller->>Utils: get_graph_pool()
  Utils-->>AllocCaller: pool or None
  alt Pool available
    AllocCaller->>CUDA: try allocate in pool
    alt Success
      CUDA-->>AllocCaller: pooled tensor
    else Exception
      AllocCaller->>AllocCaller: log debug pool alloc failure
      AllocCaller->>CUDA: standard allocation
      CUDA-->>AllocCaller: tensor
    end
  else No pool
    AllocCaller->>CUDA: standard allocation
    CUDA-->>AllocCaller: tensor
  end
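The allocation path in the second diagram reduces to a try-the-pool-then-fall-back pattern. A minimal sketch, assuming the get_graph_pool helper above; allocate_raw_buffer is a hypothetical name, not the actual memory_buffer_utils function:

import torch

def allocate_raw_buffer(required_memory_size: int) -> torch.Tensor:
    # Prefer the configured pool when one is set; otherwise (or on failure)
    # fall back to the default CUDA caching allocator.
    pool = get_graph_pool()
    if pool is not None:
        try:
            with torch.cuda.memory.use_mem_pool(pool):
                return torch.zeros((required_memory_size,), device='cuda', dtype=torch.uint8)
        except (RuntimeError, torch.cuda.OutOfMemoryError):
            pass  # Pool unavailable or exhausted; fall through to standard allocation.
    return torch.zeros((required_memory_size,), device='cuda', dtype=torch.uint8)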

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Description Check ⚠️ Warning The pull request description only contains the untouched template content and lacks any actual summary of the problem addressed, details of the solution, or identification of relevant tests, leaving all required sections empty. This omission fails to satisfy the repository’s PR description template, which requires filled-out Description and Test Coverage sections at a minimum. Please replace the placeholder template text with a concise Description explaining what issue the change solves and how, and list the Test Coverage by naming or linking the specific tests or scenarios that validate the new memory pool integration.
Docstring Coverage ⚠️ Warning Docstring coverage is 9.09%, which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
Check name Status Explanation
Title Check ✅ Passed The title follows the repository’s required format with “[None][feat]” and succinctly summarizes the primary change—reusing the CUDA graph memory pool during regular forward execution—and directly reflects the main modification introduced by this pull request. It is specific enough for teammates to grasp the intent without unnecessary detail or noise.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🧪 Early access (Sonnet 4.5): enabled

We are currently testing the Sonnet 4.5 model, which is expected to improve code review quality. However, this model may lead to increased noise levels in the review comments. Please disable the early access features if the noise level causes any inconvenience.

Note:

  • Public repositories are always opted into early access features.
  • You can enable or disable early access features from the CodeRabbit UI or by updating the CodeRabbit configuration file.

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 948b8b9 and e29858a.

📒 Files selected for processing (4)
  • tensorrt_llm/_torch/memory_buffer_utils.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (2 hunks)
  • tensorrt_llm/_torch/utils.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/memory_buffer_utils.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
  • tensorrt_llm/_torch/utils.py
  • tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/memory_buffer_utils.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
  • tensorrt_llm/_torch/utils.py
  • tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/memory_buffer_utils.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
  • tensorrt_llm/_torch/utils.py
  • tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py
🧬 Code graph analysis (4)
tensorrt_llm/_torch/memory_buffer_utils.py (3)
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (1)
  • get_graph_pool (197-198)
tensorrt_llm/_torch/utils.py (1)
  • get_graph_pool (325-327)
tensorrt_llm/logger.py (1)
  • debug (144-145)
tensorrt_llm/_torch/pyexecutor/model_engine.py (3)
tensorrt_llm/_torch/utils.py (4)
  • set_prefer_mem_pool (331-337)
  • set_torch_compiling (33-35)
  • with_model_extra_attrs (67-77)
  • get_graph_pool (325-327)
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (4)
  • get_graph_pool (197-198)
  • needs_capture (193-195)
  • capture (200-270)
  • replay (272-306)
tensorrt_llm/_torch/modules/fused_moe/moe_load_balancer.py (1)
  • MoeLoadBalancerIterContext (995-1034)
tensorrt_llm/_torch/utils.py (1)
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (1)
  • get_graph_pool (197-198)
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (1)
tensorrt_llm/_torch/utils.py (1)
  • get_graph_pool (325-327)
🪛 Ruff (0.13.1)
tensorrt_llm/_torch/memory_buffer_utils.py

93-93: Do not catch blind exception: Exception

(BLE001)


96-96: f-string without any placeholders

Remove extraneous f prefix

(F541)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tensorrt_llm/_torch/pyexecutor/model_engine.py (3)

51-52: LGTM!

The imports are correctly added and used within the file.


2190-2190: LGTM!

The memory pool context manager correctly wraps the CUDA graph execution flow, ensuring consistent memory allocation during graph capture and replay operations.


2197-2218: No additional MoeLoadBalancerIterContext needed for replay after capture
The iteration context is correctly applied inside capture_forward_fn during graph capture, so wrapping the subsequent replay() call is unnecessary.

Likely an incorrect or invalid review comment.

Comment on lines 87 to 101
        new_buffer_tensor = None
        try:
            with torch.cuda.memory.use_mem_pool(get_graph_pool()):
                new_buffer_tensor = torch.zeros((required_memory_size, ),
                                                device='cuda',
                                                dtype=torch.uint8)
        except Exception:
            # Need to check if this is an OOM exception
            logger.debug(
                f"Exception happened to create tensor from given memory pool")
            new_buffer_tensor = torch.zeros((required_memory_size, ),
                                            device='cuda',
                                            dtype=torch.uint8)

🛠️ Refactor suggestion | 🟠 Major

Refine exception handling and fix static analysis issues.

The current implementation has several issues:

  1. Catches bare Exception (too broad)
  2. Missing null check for get_graph_pool()
  3. Unnecessary f-string prefix in log message
  4. No comment explaining fallback logic

Apply this diff to address these concerns:

         # No suitable buffer was found, so allocate a new one.
         # The new buffer is created with uint8 to represent raw bytes.
         new_buffer_tensor = None
+        mem_pool = get_graph_pool()
+        # Attempt pool-backed allocation if a pool is configured; fall back to standard allocation
+        # if the pool is unavailable, exhausted, or incompatible with the requested size.
-        try:
-            with torch.cuda.memory.use_mem_pool(get_graph_pool()):
+        if mem_pool is not None:
+            try:
+                with torch.cuda.memory.use_mem_pool(mem_pool):
+                    new_buffer_tensor = torch.zeros((required_memory_size, ),
+                                                    device='cuda',
+                                                    dtype=torch.uint8)
+            except (RuntimeError, torch.cuda.OutOfMemoryError) as e:
+                logger.debug(
+                    "Failed to allocate from memory pool, falling back to standard allocation: %s", e)
+                new_buffer_tensor = None
+        
+        if new_buffer_tensor is None:
-                new_buffer_tensor = torch.zeros((required_memory_size, ),
-                                                device='cuda',
-                                                dtype=torch.uint8)
-        except Exception:
-            # Need to check if this is an OOM exception
-            logger.debug(
-                f"Exception happened to create tensor from given memory pool")
             new_buffer_tensor = torch.zeros((required_memory_size, ),
                                             device='cuda',
                                             dtype=torch.uint8)

This change:

  • Checks for None before using the pool
  • Catches specific exceptions (RuntimeError, OutOfMemoryError)
  • Fixes the f-string issue flagged by static analysis
  • Adds explanatory comment
  • Makes fallback logic clearer

Based on static analysis.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        new_buffer_tensor = None
        try:
            with torch.cuda.memory.use_mem_pool(get_graph_pool()):
                new_buffer_tensor = torch.zeros((required_memory_size, ),
                                                device='cuda',
                                                dtype=torch.uint8)
        except Exception:
            # Need to check if this is an OOM exception
            logger.debug(
                f"Exception happened to create tensor from given memory pool")
            new_buffer_tensor = torch.zeros((required_memory_size, ),
                                            device='cuda',
                                            dtype=torch.uint8)

        # No suitable buffer was found, so allocate a new one.
        # The new buffer is created with uint8 to represent raw bytes.
        new_buffer_tensor = None
        mem_pool = get_graph_pool()
        # Attempt pool-backed allocation if a pool is configured; fall back to standard allocation
        # if the pool is unavailable, exhausted, or incompatible with the requested size.
        if mem_pool is not None:
            try:
                with torch.cuda.memory.use_mem_pool(mem_pool):
                    new_buffer_tensor = torch.zeros((required_memory_size,), device='cuda', dtype=torch.uint8)
            except (RuntimeError, torch.cuda.OutOfMemoryError) as e:
                logger.debug(
                    "Failed to allocate from memory pool, falling back to standard allocation: %s", e
                )
                new_buffer_tensor = None
        if new_buffer_tensor is None:
            new_buffer_tensor = torch.zeros((required_memory_size,), device='cuda', dtype=torch.uint8)
🧰 Tools
🪛 Ruff (0.13.1)

93-93: Do not catch blind exception: Exception

(BLE001)


96-96: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In tensorrt_llm/_torch/memory_buffer_utils.py around lines 87 to 99, replace the
bare Exception catch and f-string log with targeted handling: first check
get_graph_pool() is not None before entering with
torch.cuda.memory.use_mem_pool(...); catch specific exceptions
(torch.cuda.OutOfMemoryError and RuntimeError) rather than Exception; change
logger.debug to a plain string (no f-string) and add a short comment explaining
the fallback path; on exception explicitly create the fallback tensor on
device='cuda' and keep behavior identical otherwise.

Comment on lines 196 to 204

    def get_graph_pool(self):
        return self.memory_pool

🛠️ Refactor suggestion | 🟠 Major

Add docstring for the public accessor method.

The new get_graph_pool method is missing documentation. Per the coding guidelines, public methods should have Google-style docstrings.

Apply this diff:

     def get_graph_pool(self):
+        """Returns the CUDA memory pool used by this graph runner.
+        
+        Returns:
+            The CUDA memory pool associated with captured graphs, or None if
+            no graphs have been captured yet.
+        """
         return self.memory_pool
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    def get_graph_pool(self):
        return self.memory_pool

    def get_graph_pool(self):
        """Returns the CUDA memory pool used by this graph runner.

        Returns:
            The CUDA memory pool associated with captured graphs, or None if
            no graphs have been captured yet.
        """
        return self.memory_pool
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py around lines 196 to 198,
the public accessor method get_graph_pool lacks a Google-style docstring; add a
concise docstring immediately above the method that describes what the method
returns, documents that there are no arguments, and specifies the return type
(e.g., the memory pool object or its class) and any important behavior or
ownership semantics. Ensure the docstring follows Google style with a short
summary line, an optional longer description if needed, an Args: section (empty
or omitted), and a Returns: section naming the memory pool type and what callers
can expect.

Comment on lines 317 to 337
_buffer_pool = None


def set_mem_pool(buffer_pool):
    global _buffer_pool
    _buffer_pool = buffer_pool


def get_graph_pool():
    global _buffer_pool
    return _buffer_pool

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Add docstrings and consider thread safety.

The new memory pool API lacks documentation. Given that this file uses threading.local() for other global state (lines 43, 50), consider whether _buffer_pool should also be thread-local to avoid race conditions in multi-threaded scenarios.

Add Google-style docstrings to document the purpose, parameters, and return values:

 _buffer_pool = None
 
 
 def set_mem_pool(buffer_pool):
+    """Sets the global memory pool for buffer allocation.
+    
+    Args:
+        buffer_pool: A CUDA memory pool object to use for allocations.
+    """
     global _buffer_pool
     _buffer_pool = buffer_pool
 
 
 def get_graph_pool():
+    """Retrieves the current global memory pool.
+    
+    Returns:
+        The current memory pool, or None if not set.
+    """
     global _buffer_pool
     return _buffer_pool

Additionally, verify whether concurrent access patterns require thread-local storage:


🏁 Script executed:

#!/bin/bash
# Search for multi-threaded usage patterns of the new memory pool API
rg -n -C3 'set_mem_pool|get_graph_pool|set_prefer_mem_pool' --type=py

Length of output: 4137


🏁 Script executed:

#!/bin/bash
# Search for threading.local usage in utils.py to confirm thread-local patterns
rg -nC3 "threading\\.local" --type=py tensorrt_llm/_torch/utils.py

Length of output: 235


Add docstrings to memory‐pool API and align with thread-local patterns

  • Add Google-style docstrings to set_mem_pool, get_graph_pool, and the set_prefer_mem_pool context manager.
  • Replace the module-level _buffer_pool with a threading.local() instance to match existing per-thread state (e.g., _global_attrs at line 43) and avoid race conditions.
 _buffer_pool = None
 
 
 def set_mem_pool(buffer_pool):
+    """Sets the global memory pool for buffer allocation.
+
+    Args:
+        buffer_pool: A CUDA memory pool object to use for allocations.
+    """
     global _buffer_pool
     _buffer_pool = buffer_pool
 
 
 def get_graph_pool():
+    """Retrieves the current global memory pool.
+
+    Returns:
+        The current memory pool, or None if not set.
+    """
     global _buffer_pool
     return _buffer_pool

Apply a similar docstring to set_prefer_mem_pool and migrate _buffer_pool to threading.local().
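For illustration, the thread-local variant suggested above could take roughly this shape (a sketch under the assumption that per-thread isolation is wanted; not part of the committable suggestion below):

import threading

# Per-thread preferred pool, mirroring the existing threading.local() state in utils.py.
_local_pool = threading.local()


def set_mem_pool(buffer_pool):
    """Sets the preferred memory pool for the current thread."""
    _local_pool.buffer_pool = buffer_pool


def get_graph_pool():
    """Returns the current thread's preferred pool, or None if not set."""
    return getattr(_local_pool, 'buffer_pool', None)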

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
_buffer_pool = None


def set_mem_pool(buffer_pool):
    global _buffer_pool
    _buffer_pool = buffer_pool


def get_graph_pool():
    global _buffer_pool
    return _buffer_pool

_buffer_pool = None


def set_mem_pool(buffer_pool):
    """Sets the global memory pool for buffer allocation.

    Args:
        buffer_pool: A CUDA memory pool object to use for allocations.
    """
    global _buffer_pool
    _buffer_pool = buffer_pool


def get_graph_pool():
    """Retrieves the current global memory pool.

    Returns:
        The current memory pool, or None if not set.
    """
    global _buffer_pool
    return _buffer_pool

Comment on lines 330 to 364
@contextlib.contextmanager
def set_prefer_mem_pool(mem_pool):
    old_buffer_pool = get_graph_pool()
    set_mem_pool(mem_pool)
    try:
        yield
    finally:
        set_mem_pool(old_buffer_pool)

🛠️ Refactor suggestion | 🟠 Major

Add docstring and type hints for the context manager.

The set_prefer_mem_pool context manager is missing documentation and type hints, which are required by the coding guidelines.

Apply this diff:

 @contextlib.contextmanager
-def set_prefer_mem_pool(mem_pool):
+def set_prefer_mem_pool(mem_pool) -> contextlib.AbstractContextManager:
+    """Temporarily sets a preferred memory pool and restores the previous one on exit.
+    
+    This context manager allows temporarily switching to a different memory pool
+    for CUDA graph operations, ensuring the original pool is restored even if
+    an exception occurs.
+    
+    Args:
+        mem_pool: The memory pool to use within the context.
+        
+    Yields:
+        None
+        
+    Example:
+        >>> with set_prefer_mem_pool(graph_pool):
+        ...     # Allocations within this block use graph_pool
+        ...     tensor = allocate_buffer(...)
+    """
     old_buffer_pool = get_graph_pool()
     set_mem_pool(mem_pool)
     try:
         yield
     finally:
         set_mem_pool(old_buffer_pool)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@contextlib.contextmanager
def set_prefer_mem_pool(mem_pool):
    old_buffer_pool = get_graph_pool()
    set_mem_pool(mem_pool)
    try:
        yield
    finally:
        set_mem_pool(old_buffer_pool)

@contextlib.contextmanager
def set_prefer_mem_pool(mem_pool) -> contextlib.AbstractContextManager:
    """Temporarily sets a preferred memory pool and restores the previous one on exit.

    This context manager allows temporarily switching to a different memory pool
    for CUDA graph operations, ensuring the original pool is restored even if
    an exception occurs.

    Args:
        mem_pool: The memory pool to use within the context.

    Yields:
        None

    Example:
        >>> with set_prefer_mem_pool(graph_pool):
        ...     # Allocations within this block use graph_pool
        ...     tensor = allocate_buffer(...)
    """
    old_buffer_pool = get_graph_pool()
    set_mem_pool(mem_pool)
    try:
        yield
    finally:
        set_mem_pool(old_buffer_pool)
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/utils.py around lines 330-337, the context manager
set_prefer_mem_pool is missing a docstring and type hints; add a concise
docstring explaining that it temporarily sets the graph memory pool to mem_pool
and restores the previous pool on exit, annotate the parameter (e.g. mem_pool:
typing.Any) and the return type as -> typing.Iterator[None], and ensure typing
imports (from typing import Any, Iterator) are added if not already present.

@tensorrt-cicd
Collaborator

PR_Github #20359 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20359 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15358 completed with status: 'FAILURE'

Signed-off-by: Hui Gao <huig@nvidia.com>
Signed-off-by: Hui Gao <huig@nvidia.com>
@HuiGao-NV
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #20429 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20429 [ run ] completed with state DISABLED
L0 testing is limited to prioritized users. User HuiGao-NV is not in the prioritized list. L0 testing cannot be triggered.
