
Improve embedder and reranker throughput with bf16 loading and single-call tokenization#1757

Open
oliverholworthy wants to merge 2 commits into NVIDIA:main from oliverholworthy:oholworthy/perf-embed-1b-v2-dtype-tokenizer

Conversation

@oliverholworthy
Contributor

@oliverholworthy oliverholworthy commented Mar 31, 2026

Description

Improve inference throughput for the text-only embedder and reranker by addressing two inefficiencies:

1. Load embed-1b-v2 in native bfloat16 (torch_dtype=torch.bfloat16)

The VL embedder and reranker already load in bf16, but the text-only embedder was loading in fp32 and relying on torch.autocast. The autocast wrapper is removed, since it only adds casting overhead once the weights are already bf16. The last hidden state is upcast to float32 before pooling and normalization to avoid accumulation errors in bf16 reductions.
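A minimal sketch of the dtype handling described above (function and variable names here are illustrative, not the PR's actual code): the model is loaded directly in bf16, and the last hidden state is upcast to fp32 before the mean-pooling reduction and L2 normalization.

```python
import torch
import torch.nn.functional as F

# Loading pattern (sketch): AutoModel.from_pretrained(name, torch_dtype=torch.bfloat16)

def mean_pool_and_normalize(last_hidden: torch.Tensor,
                            attention_mask: torch.Tensor) -> torch.Tensor:
    """Mean-pool token embeddings, accumulating in float32.

    `last_hidden` may be bfloat16 (model loaded with torch_dtype=
    torch.bfloat16); upcasting before the reduction avoids the
    accumulation error a bf16 sum would introduce.
    """
    hidden = last_hidden.float()                     # upcast before pooling
    mask = attention_mask.unsqueeze(-1).float()      # (B, T, 1) padding mask
    summed = (hidden * mask).sum(dim=1)              # masked sum over tokens
    counts = mask.sum(dim=1).clamp(min=1.0)          # valid-token counts
    pooled = summed / counts                         # mean pooling
    return F.normalize(pooled, p=2, dim=1)           # unit-norm embeddings

# Dummy bf16 activations: batch of 2 sequences, 4 tokens, 8 dims
hidden = torch.randn(2, 4, 8).to(torch.bfloat16)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
emb = mean_pool_and_normalize(hidden, mask)
```

The returned embeddings are float32 regardless of the activation dtype, so downstream similarity math is unaffected by the bf16 load.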

2. Tokenize all texts in a single call (embedder + reranker)

Both models were calling the tokenizer per batch chunk inside the inference loop. HuggingFace tokenizers have per-call setup overhead that doesn't scale with batch size, so a single tokenize-all-then-slice approach is significantly faster.
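The tokenize-all-then-slice pattern can be sketched as follows (the helper name and dummy tensors are illustrative; a real call would be `tokenizer(texts, padding=True, return_tensors="pt")` over all texts at once). Note that a single padded call pads every sequence to the global max length, a tradeoff against per-chunk padding.

```python
from typing import Dict, Iterator

import torch


def batched_slices(encoded: Dict[str, torch.Tensor],
                   batch_size: int) -> Iterator[Dict[str, torch.Tensor]]:
    """Slice one tokenizer output into per-batch tensor views.

    `encoded` is the result of a single tokenizer call over ALL texts;
    slicing the resulting tensors is cheap compared with paying the
    tokenizer's per-call setup overhead once per chunk.
    """
    n = next(iter(encoded.values())).shape[0]
    for start in range(0, n, batch_size):
        yield {k: v[start:start + batch_size] for k, v in encoded.items()}


# Stand-in for a real tokenizer output over 10 texts (shapes are illustrative)
enc = {"input_ids": torch.zeros(10, 7, dtype=torch.long),
       "attention_mask": torch.ones(10, 7, dtype=torch.long)}
batches = list(batched_slices(enc, batch_size=4))
```

Each yielded dict can be moved to the GPU and fed to the model exactly as a per-chunk tokenizer call would have been.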

In local benchmarking on an Ampere-class GPU:

  • Embedder: ~15-20% throughput improvement, closing the gap with the VL embedder on text-only workloads
  • Reranker: ~4% throughput improvement

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.
  • If adjusting docker-compose.yaml environment variables, those are mirrored in the Helm values.yaml file.

@oliverholworthy oliverholworthy marked this pull request as ready for review March 31, 2026 14:40
@oliverholworthy oliverholworthy requested review from a team as code owners March 31, 2026 14:40
@oliverholworthy oliverholworthy force-pushed the oholworthy/perf-embed-1b-v2-dtype-tokenizer branch from 7346422 to c54b7ff on March 31, 2026 14:43
The text-only embedder was loading the model in float32 (HuggingFace
default) and relying on torch.autocast during inference. This is slower
than loading natively in bfloat16 — which is what the VL variant already
does — because autocast introduces casting overhead on top of fp32
memory bandwidth costs.

Additionally, the tokenizer was called once per batch chunk (16 calls
for 1000 texts at batch_size=64). HuggingFace tokenizers have
per-call setup overhead (padding computation, tensor allocation) that
doesn't scale with batch size, so a single tokenize-all-then-slice
approach saves ~80ms per 1000 texts.

Combined, these changes improve throughput from ~325 to ~381 texts/sec
(+17%), closing the gap with the VL embedder (~386 texts/sec).

Signed-off-by: Oliver Holworthy <1216955+oliverholworthy@users.noreply.github.com>
@oliverholworthy oliverholworthy force-pushed the oholworthy/perf-embed-1b-v2-dtype-tokenizer branch from c54b7ff to 205011e on March 31, 2026 14:44
Same optimization as the embedder: call the tokenizer once with all
texts then slice into GPU batches, instead of calling per chunk.
Applied to both score() and score_pairs() methods.

Signed-off-by: Oliver Holworthy <1216955+oliverholworthy@users.noreply.github.com>
@oliverholworthy oliverholworthy changed the title Improve embed-1b-v2 throughput by loading in bf16 and tokenizing once Improve embedder and reranker throughput with bf16 loading and single-call tokenization Mar 31, 2026
@oliverholworthy oliverholworthy self-assigned this Mar 31, 2026