
Conversation


@GDzhu01 GDzhu01 commented Nov 29, 2025

Purpose

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a before/after results comparison or e2e results.
  • (Optional) Necessary documentation updates, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft in the Google Doc.

@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they run only the fastcheck CI, which executes a small, essential subset of tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of the fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run full CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@mergify mergify bot added the v1 label Nov 29, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces an experimental 'balance scheduling' feature. My review identified a couple of critical issues that would cause runtime errors, preventing the feature from working as intended. Specifically, there's a method name mismatch between the caller and callee (running_gather vs. balance_gather) and an incorrect attempt to iterate over a method instead of a list (self.balance_gather vs. self.balance_queue). I have provided code suggestions to fix these critical bugs.
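For orientation, the two caller-side fixes below fit together roughly as follows. This is a hypothetical sketch using the corrected names (balance_gather, balance_queue); the two call sites may live in different modules in the actual PR and are combined here purely for illustration:

    # Hypothetical sketch: gather per-rank running counts over the DP group,
    # then flag whether some data-parallel rank is fully saturated.
    if self.vllm_config.scheduler_config.balance_scheduling:
        self.scheduler.balance_gather(self.dp_group)  # populates balance_queue
        balance_flag = (
            max(t.item() for t in self.scheduler.balance_queue)
            == self.max_num_running_reqs
        )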

break

if self.vllm_config.scheduler_config.balance_scheduling:
    balance_flag = max(t.item() for t in self.balance_gather) == self.max_num_running_reqs

critical

This line will raise a TypeError at runtime because self.balance_gather is a method, not an iterable. The balance_gather method is designed to populate self.balance_queue, which is the list you should be iterating over here.

Suggested change
balance_flag = max(t.item() for t in self.balance_gather) == self.max_num_running_reqs
balance_flag = max(t.item() for t in self.balance_queue) == self.max_num_running_reqs

)

if self.vllm_config.scheduler_config.balance_scheduling:
    self.scheduler.running_gather(self.dp_group)

critical

This call will result in an AttributeError because the scheduler object does not have a method named running_gather. The method defined in vllm/v1/core/sched/scheduler.py is balance_gather.

Suggested change
self.scheduler.running_gather(self.dp_group)
self.scheduler.balance_gather(self.dp_group)

]

def balance_gather(self, dp_group):
    runing_tensor = torch.tensor([len(self.running)], dtype=torch.int, device="cpu")

high

There is a typo in the variable name runing_tensor. It should be running_tensor to improve code clarity and maintainability.

Suggested change
runing_tensor = torch.tensor([len(self.running)], dtype=torch.int, device="cpu")
running_tensor = torch.tensor([len(self.running)], dtype=torch.int, device="cpu")
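Beyond the rename, the excerpt ends at the tensor construction, so the collective itself is not shown. Below is a minimal sketch of how balance_gather might populate balance_queue, assuming torch.distributed.all_gather over a CPU (gloo) process group; this is an assumption for illustration, not the PR's confirmed implementation:

    import torch
    import torch.distributed as dist

    def balance_gather(self, dp_group):
        # Number of requests currently running on this data-parallel rank.
        running_tensor = torch.tensor([len(self.running)], dtype=torch.int, device="cpu")
        # One receive slot per rank in the DP group.
        self.balance_queue = [
            torch.zeros(1, dtype=torch.int, device="cpu")
            for _ in range(dist.get_world_size(group=dp_group))
        ]
        # After the collective, balance_queue[i] holds rank i's running count.
        dist.all_gather(self.balance_queue, running_tensor, group=dp_group)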

Signed-off-by: GDzhu01 <809721801@qq.com>
@GDzhu01 GDzhu01 force-pushed the balance_scheduling branch from 8600cf3 to 8272787 on December 1, 2025 at 01:15
Collaborator

ApostaC commented Dec 1, 2025

cc @njhill @WoosukKwon

