
[Relax][PyTorch] Simplify tensor args conversion in Dynamo #18726

Open
mshr-h wants to merge 3 commits into apache:main from mshr-h:fix-dynamo-tensor-conversion

Conversation


@mshr-h commented Feb 7, 2026

As per title.

@gemini-code-assist commented:

Summary of Changes

Hello @mshr-h, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the integration between PyTorch Dynamo and TVM Relax by optimizing tensor data transfer and improving GPU compilation. It introduces a more efficient mechanism for converting PyTorch tensors to TVM's internal representation and ensures proper scheduling is applied when targeting CUDA devices.

Highlights

  • PyTorch Tensor Conversion: Switched from torch_tensor.numpy() to DLPack when converting PyTorch tensors to TVM tensors, enabling potentially more efficient, zero-copy data transfer.
  • CUDA Device Scheduling: Added a conditional DefaultGPUSchedule application for CUDA devices within the TVM Relax compilation flow, optimizing for GPU execution.


Changelog
  • python/tvm/relax/frontend/torch/dynamo.py
    • Updated the to_tvm_tensor function to use DLPack for converting torch.Tensor to tvm.runtime.Tensor, replacing the numpy() intermediate step (see the sketch after this list).
    • Introduced a conditional block to apply tvm.tir.transform.DefaultGPUSchedule when the target device is CUDA, ensuring optimized GPU code generation.
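
For concreteness, here is a minimal sketch of the before/after conversion described in the changelog. It assumes the long-standing tvm.nd.from_dlpack and torch.utils.dlpack.to_dlpack entry points and illustrative helper names; the exact API spelling in the PR (e.g. tvm.runtime.Tensor on current main) may differ.

```python
import torch
import tvm

def to_tvm_tensor_via_numpy(t: torch.Tensor):
    # Old path per the changelog: detours through NumPy, which copies the
    # data on the host and fails for CUDA tensors unless they are moved
    # to the CPU first.
    return tvm.nd.array(t.numpy())

def to_tvm_tensor_via_dlpack(t: torch.Tensor):
    # New path: DLPack hands over the underlying buffer without a copy,
    # so GPU tensors stay on-device.
    return tvm.nd.from_dlpack(torch.utils.dlpack.to_dlpack(t))
```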
Activity
  • The author mshr-h created this pull request with the title "[Relax][Dynamo] Use DLPack to convert PyTorch Tensor to TVM Tensor" and a brief description "As per title."
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, and Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@gemini-code-assist bot left a comment


Code Review

This pull request introduces two key improvements for the PyTorch Dynamo frontend. First, it leverages DLPack for converting PyTorch tensors to TVM tensors, which is a great performance optimization that avoids unnecessary data copies via NumPy, especially for GPU tensors. Second, it adds the DefaultGPUSchedule pass for CUDA targets, which is essential for generating runnable and performant code on GPUs. Both changes are valuable. I have one suggestion: consider generalizing the GPU support for better scalability in the future.

Comment on lines 132 to 134:

    if device.type == "cuda":
        with target:
            mod = tvm.tir.transform.DefaultGPUSchedule()(mod)

Severity: medium

This correctly adds the default GPU scheduling pass for CUDA, which is a great step for enabling GPU support.

To make this more scalable for other GPU backends (e.g., ROCm, Metal), it would be beneficial to generalize this check. tvm.tir.transform.DefaultGPUSchedule is not CUDA-specific.

A future improvement could be to check for any GPU device type and apply this pass. This would need to be done in conjunction with updating the device/target creation logic around line 112 to support more GPU types.
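
One possible shape for that generalization — illustrative only, with a hypothetical GPU_DEVICE_TYPES set rather than anything taken from the PR, and ignoring the matching device/target updates needed around line 112:

```python
# Illustrative sketch: DefaultGPUSchedule is not CUDA-specific, so gate it
# on any GPU-like torch device type instead of hard-coding "cuda".
GPU_DEVICE_TYPES = {"cuda", "mps"}  # hypothetical; PyTorch reports ROCm devices as "cuda"

if device.type in GPU_DEVICE_TYPES:
    with target:
        mod = tvm.tir.transform.DefaultGPUSchedule()(mod)
```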

@mshr-h changed the title from "[Relax][Dynamo] Use DLPack to convert PyTorch Tensor to TVM Tensor" to "[Relax][PyTorch][Dynamo] Use DLPack to convert PyTorch Tensor to TVM Tensor" on Feb 7, 2026
@mshr-h changed the title from "[Relax][PyTorch][Dynamo] Use DLPack to convert PyTorch Tensor to TVM Tensor" to "[Relax][PyTorch] Use DLPack to convert PyTorch Tensor to TVM Tensor in Dynamo" on Feb 7, 2026

tqchen commented Feb 7, 2026

Likely with the latest tvm-ffi we don't need DLPack anymore; just passing in torch.Tensor directly should be fine.
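
If that holds, the Dynamo wrapper would not need an explicit conversion step at all. A minimal sketch under that assumption — run_compiled and vm_func are hypothetical names, and the auto-conversion behavior is the claim above, not something verified here:

```python
import torch

def run_compiled(vm_func, *torch_args: torch.Tensor):
    # Assumption (per the comment above): recent tvm-ffi converts DLPack-
    # compatible arguments such as torch.Tensor at the call boundary, so
    # no explicit torch -> TVM tensor conversion is needed here.
    return vm_func(*torch_args)
```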

@mshr-h changed the title from "[Relax][PyTorch] Use DLPack to convert PyTorch Tensor to TVM Tensor in Dynamo" to "[Relax][PyTorch] Simplify PyTorch Tensor to TVM Tensor and apply DefaultGPUSchedule in Dynamo" on Feb 7, 2026
@mshr-h changed the title from "[Relax][PyTorch] Simplify PyTorch Tensor to TVM Tensor and apply DefaultGPUSchedule in Dynamo" to "[Relax][PyTorch] Simplify PyTorch Tensor to TVM Tensor in Dynamo" on Feb 7, 2026
@mshr-h changed the title from "[Relax][PyTorch] Simplify PyTorch Tensor to TVM Tensor in Dynamo" to "[Relax][PyTorch] Simplify tensor args conversion in Dynamo" on Feb 7, 2026

mshr-h commented Feb 7, 2026

Thanks, updated as suggested.

@mshr-h force-pushed the fix-dynamo-tensor-conversion branch from e8e8f7c to 7fadb76 on February 8, 2026 at 02:24
@mshr-h marked this pull request as ready for review on February 8, 2026 at 03:33
