
Conversation

@njhill (Member) commented Nov 24, 2025

The front-end built-in tool calling code can make successive generate calls with the same top-level request id.

This can cause a problem with async scheduling, since the prior request can still be present in the engine (for example, in the stop/eos-token case we generate one additional token).

This is a targeted fix for the problematic "internal" request id duplication, but we need to fix things more generally. Our current duplicate-id check is insufficient, especially when async scheduling is in use. Even if it were fixed to reject successive requests in this case, that would still be confusing for users, who would reasonably expect that once a request has completed from their perspective, they can make another one with the same id.

#27987 is related but I think we need something at the LLM/AsyncLLM level.

Signed-off-by: Nick Hill <nhill@redhat.com>
@gemini-code-assist (bot, Contributor) left a comment

Code Review

This pull request addresses a race condition caused by duplicate request IDs being sent to the vLLM engine from internal sub-requests. The fix introduces unique sub_request_ids in two key scenarios: when a single chat completion request is split into multiple engine prompts, and during the iterative process of built-in tool calls. By appending a unique suffix to the original request ID for each sub-request, this change effectively prevents ID collisions within the engine. The implementation is clear, targeted, and correctly resolves the specific bug described.
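
The suffixing approach described above can be sketched as follows. This is an illustrative sketch, not the actual vLLM implementation; the helper name `make_sub_request_id` is hypothetical.

```python
from itertools import count

# Hypothetical helper: derive a unique engine-level id for each
# sub-request spawned from one top-level request id.
_suffix_counter = count()

def make_sub_request_id(request_id: str) -> str:
    """Append a monotonically increasing suffix so successive
    generate calls never collide inside the engine."""
    return f"{request_id}-{next(_suffix_counter)}"

# Each iteration of a built-in tool-call loop gets a fresh id:
ids = [make_sub_request_id("chatcmpl-abc") for _ in range(3)]
```

Because every engine-level id is distinct, a sub-request left briefly in the engine (e.g. generating one extra token after a stop/eos) can no longer collide with the next call's id.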

@heheda12345 (Collaborator) left a comment

LGTM! Agree with @robertgshaw2-redhat that we could add a few more comments.

Signed-off-by: Nick Hill <nhill@redhat.com>
@njhill njhill added the `ready` label (ONLY add when PR is ready to merge/full CI is needed) Nov 24, 2025
@njhill njhill enabled auto-merge (squash) November 24, 2025 23:46
@njhill njhill changed the title [BugFix] Fix duplicate id tool-call race condition [BugFix] Fix duplicate req id tool-call race condition Nov 25, 2025
@njhill njhill merged commit a178a0b into vllm-project:main Nov 25, 2025
49 checks passed
@njhill njhill deleted the fix-responses-race branch November 25, 2025 02:33
@njhill njhill mentioned this pull request Nov 25, 2025
bringlein pushed a commit to bringlein/vllm that referenced this pull request Nov 26, 2025
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025
markmc added a commit to markmc/vllm that referenced this pull request Dec 1, 2025
Since vllm-project#9550 and vllm-project#10968 we have supported clients supplying a custom
request ID. The motivation is that it can be very helpful
when you need to correlate vLLM logs with the logs of a related service.

Since the request ID is used ubiquitously across vLLM as a unique
key, it is obviously problematic if multiple in-flight
requests ever share the same client-provided request ID.

We saw this happening recently when `vllm serve bench` started
including a request ID and the request IDs from multiple concurrent
instances caused collisions. See vllm-project#27723

We try to guard against request ID collisions currently in the
frontend in OutputProcessor:

```
    def add_request(...):
        if request_id in self.request_states:
            raise ValueError(f"Request id {request_id} already running.")
```

however, this is not always effective:

1) We can have abort race conditions where a request is no longer
   tracked by the frontend, but still not completed in the engine.
   See vllm-project#15326 for an attempt to fix this.
2) We can have async scheduling race conditions where a request
   ID is removed from the output processor and being scheduled
   while the older request with that ID is still being completed
   by the model runner. See vllm-project#29355
3) With P/D, a request will continue to be tracked by the prefill
   engine long after the prefill request has been completed in
   the frontend, while we wait for the decode side to fetch the
   KV blocks. See vllm-project#20139

Let's instead ensure we use a unique request ID internally, even
when a client provides a custom request ID. We can do this simply
by appending a short random suffix to any request ID provided
by the frontend.
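
A minimal sketch of that suffixing, assuming an 8-hex-character suffix as discussed below (the function name is illustrative, not the actual vLLM helper):

```python
import secrets

def to_internal_request_id(external_id: str) -> str:
    # 4 random bytes -> 8 hex characters -> 32 bits of entropy.
    return f"{external_id}-{secrets.token_hex(4)}"

# Two requests with the same client-provided id get distinct
# internal ids, so they can coexist in the engine:
a = to_internal_request_id("my-req")
b = to_internal_request_id("my-req")
```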

We need to ensure we track the external->internal request ID
mapping because abort() will be supplied an external request ID.
In the case where an external request ID maps to multiple running
requests, we assume the caller requires all of those requests
to be aborted. The caller can use EngineCoreRequest.request_id
as the request ID if they want to be more specific.
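
The external-to-internal tracking and abort-all semantics described above can be sketched like this (the `RequestIdMap` class is a hypothetical illustration, not the actual vLLM data structure):

```python
from collections import defaultdict

class RequestIdMap:
    """Track which internal (suffixed) ids are running under
    each external (client-provided) request id."""

    def __init__(self) -> None:
        self._ext_to_int: defaultdict[str, set[str]] = defaultdict(set)

    def add(self, external_id: str, internal_id: str) -> None:
        self._ext_to_int[external_id].add(internal_id)

    def abort(self, external_id: str) -> set[str]:
        # One external id may map to several in-flight requests;
        # abort() returns all of them, per the semantics above.
        return self._ext_to_int.pop(external_id, set())

m = RequestIdMap()
m.add("req-1", "req-1-aaaa0000")
m.add("req-1", "req-1-bbbb1111")
aborted = m.abort("req-1")  # both internal ids
```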

A full 32-character random UUID would be overkill as a suffix,
so how many random characters would be sufficient? 8 hex characters
give us 32 bits of entropy, or 16^8 possible suffixes.

Using the collision probability approximation from
https://preshing.com/20110504/hash-collision-probabilities:

If N = 16^8 and k is the number of generated suffixes, then the
probability of collision is approximately (k^2)/(2N). So if a client somehow
caused vLLM to hold 10k requests that reuse the same client-provided
ID, there would be a 1.16% chance of collision:

```
>>> N = 16**8
>>> k = 10_000
>>> (k**2)/(2*N)
0.011641532182693481
```

That seems [super good enough](https://hownot2.com/products/hownot2-super-good-enough-t-shirt).

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
markmc added a commit to markmc/vllm that referenced this pull request Dec 1, 2025
kitaekatt pushed a commit to kitaekatt/vllm that referenced this pull request Dec 1, 2025
markmc added a commit to markmc/vllm that referenced this pull request Dec 1, 2025
markmc added a commit to markmc/vllm that referenced this pull request Dec 2, 2025
markmc added a commit to markmc/vllm that referenced this pull request Dec 3, 2025

[Core] Address review feedback for random internal IDs

Co-authored-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>

[Core] Handle parent request abort with internal ID

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
markmc added a commit to markmc/vllm that referenced this pull request Dec 4, 2025
markmc added a commit to markmc/vllm that referenced this pull request Dec 4, 2025
markmc added a commit to markmc/vllm that referenced this pull request Dec 4, 2025
markmc added a commit to markmc/vllm that referenced this pull request Dec 4, 2025
The key changes to support this are:

1. `InputProcessor.process_inputs()` - we add some randomness to the
request ID just before creating an `EngineCoreRequest`, and store both
the random "internal" request ID (as `request_id`) and the supplied
"external" request ID (as `external_req_id`) in the
`EngineCoreRequest`.
2. `RequestState.make_request_output()` - we ensure that
`RequestOutput.request_id` continues to be the external request ID
(for backwards compat) and add `internal_request_id`.
3. `OutputProcessor.abort_requests()` - we make `OutputProcessor`
track a mapping from external request ID to internal request IDs, so
`abort_requests()` can abort based on either ID.
4. `AsyncLLM` - we use `RequestOutputCollector` to track the internal
request ID, so we can use the internal ID to abort an in-progress
request. We also add an `internal` boolean flag to `abort()` so API
users can abort based on either ID.
5. `ParentRequest` - in the case of parallel sampling, we need to
track both the internal and external ID for the later creation of
`RequestOutput` aggregating the child outputs.

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
markmc added a commit to markmc/vllm that referenced this pull request Dec 4, 2025
Since vllm-project#9550 and vllm-project#10968 we support client's supplying a custom
request ID. The motivation for this is that it can be very helpful
when you need to correlate vLLM logs with logs of a related service.

Since the request ID is used ubiquitously across vLLM as a unique
key, it obviously is problematic if we ever have multiple in-flight
requests using the same client-provided request ID.

We saw this happening recently when `vllm serve bench` started
including a request ID and the request IDs from multiple concurrent
instances caused collisions. See vllm-project#27723

We try to guard against request ID collisions currently in the
frontend in OutputProcessor:

```
    def add_request(...):
        if request_id in self.request_states:
            raise ValueError(f"Request id {request_id} already running.")
```

however, this is not always effective:

1) We can have abort race conditions where a request is no longer
   tracked by the frontend, but still not completed in the engine.
   See vllm-project#15326 for an attempt to fix this.
2) We can have async scheduling race conditions where a request
   ID is removed from the output processor and being scheduled
   while the older request with that ID is still being completed
   by the model runner. See vllm-project#29355
3) With P/D, a request will continue to be tracked by the prefill
   engine long after the prefill request has been completed in
   the frontend, while we wait for the decode side to fetch the
   KV blocks. See vllm-project#20139

Let's instead ensure we use a unique request ID internally, even
when a client provides a custom request ID. We can do this simply
by appending a short random suffix to any request ID provided
by the frontend.
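A minimal sketch of the scheme described above: append a short random suffix to the client-provided request ID. The function name is illustrative only, not vLLM's actual implementation.

```python
# Hypothetical sketch: derive a unique internal request ID by appending
# a short random suffix to the client-provided (external) request ID.
import secrets


def make_internal_request_id(external_req_id: str) -> str:
    # token_hex(4) yields 8 hex characters, i.e. 32 bits of entropy.
    return f"{external_req_id}-{secrets.token_hex(4)}"
```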

A full 32 character random UUID would be overkill as a suffix,
so how many random characters would be sufficient? 8 hex characters
give us 32 bits of entropy, or 16^8 possible suffixes.

Using the collision probability approximation from
https://preshing.com/20110504/hash-collision-probabilities:

With N = 16^8 and k the number of generated suffixes, the
probability of collision is approximately (k^2)/(2N). So if a client
somehow caused vLLM to hold 10k requests that reuse the same
client-provided ID, there would be a 1.16% chance of collision:

```
>>> N = 16**8
>>> k = 10_000
>>> (k**2)/(2*N)
0.011641532182693481
```

That seems [super good enough](https://hownot2.com/products/hownot2-super-good-enough-t-shirt).
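As a sanity check on the 8-character choice, the same approximation can be evaluated for a few suffix lengths, assuming the same k = 10,000 in-flight requests reusing one client-provided ID (values above 1 simply mean a collision is near-certain, since the approximation only holds for small probabilities):

```python
# Birthday-bound approximation p ~= k^2 / (2N) for several hex suffix
# lengths, with k = 10,000 requests sharing one client-provided ID.
k = 10_000
for hex_chars in (4, 6, 8, 10):
    n = 16 ** hex_chars
    print(f"{hex_chars} hex chars: p ~= {k**2 / (2 * n):.2e}")
```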

The key changes to support this are:

1. `InputProcessor.process_inputs()` - we add some randomness to the
request ID just before creating an `EngineCoreRequest`, and store both
the random "internal" request ID (as `request_id`) and the supplied
"external" request ID (as `external_req_id`) in the
`EngineCoreRequest`.
2. `RequestState.make_request_output()` - we ensure that
`RequestOutput.request_id` continues to be the external request ID
(for backwards compat) and add `internal_request_id`.
3. `OutputProcessor.abort_requests()` - we make `OutputProcessor`
track a mapping from external request ID to internal request IDs, so
`abort_requests()` can abort based on either ID.
4. `AsyncLLM` - we use `RequestOutputCollector` to track the internal
request ID, so we can use the internal ID to abort an in-progress
request. We also add an `internal` boolean flag to `abort()` so API
users can abort based on either ID.
5. `ParentRequest` - in the case of parallel sampling, we need to
track both the internal and external ID for the later creation of
`RequestOutput` aggregating the child outputs.

We need to ensure we track the external->internal request ID
mapping because abort() will be supplied an external request ID.
In the case where an external request ID maps to multiple running
requests, we assume the caller requires all of those requests
to be aborted. The caller can use EngineCoreRequest.request_id
as the request ID if they want to be more specific.
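The external->internal tracking described above can be sketched roughly as follows; the class and method names are hypothetical, not the actual `OutputProcessor` API.

```python
# Hypothetical sketch of external->internal request ID tracking, where
# aborting by external ID targets all matching in-flight requests and
# aborting by internal ID targets exactly one.
from collections import defaultdict


class RequestIdTracker:
    def __init__(self) -> None:
        self.internal_to_external: dict[str, str] = {}
        self.external_to_internal: dict[str, set[str]] = defaultdict(set)

    def add(self, internal_id: str, external_id: str) -> None:
        self.internal_to_external[internal_id] = external_id
        self.external_to_internal[external_id].add(internal_id)

    def resolve_abort(self, request_id: str, internal: bool = False) -> set[str]:
        # With internal=True the caller targets one specific request;
        # otherwise abort every in-flight request sharing the external ID.
        if internal:
            return {request_id} if request_id in self.internal_to_external else set()
        return set(self.external_to_internal.get(request_id, ()))
```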

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
charlotte12l pushed a commit to charlotte12l/vllm that referenced this pull request Dec 5, 2025
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Xingyu Liu <charlotteliu12x@gmail.com>
Zhathw pushed a commit to Zhathw/vllm that referenced this pull request Dec 6, 2025
markmc added a commit to markmc/vllm that referenced this pull request Dec 6, 2025
Labels: frontend, ready