[Experimental] Add TurboQuantKVCache: PolarQuant KV cache compression at 2-4 bits #1059

Open

rachittshah wants to merge 1 commit into ml-explore:main from rachittshah:feat/turboquant-kv-cache

Conversation


rachittshah commented Mar 26, 2026

Closes #1060

Adds PolarQuant KV cache compression (from Google's TurboQuant paper, ICLR 2026) as an opt-in experimental feature. The algorithm applies a fixed random orthogonal rotation to the KV vectors and then quantizes each coordinate with a precomputed Lloyd-Max optimal scalar quantizer. The scheme is data-oblivious: no calibration pass is needed.
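
For readers unfamiliar with Lloyd-Max quantization, the sketch below fits an optimal scalar quantizer to 1-D samples in pure Python. This is purely illustrative: the PR inlines precomputed codebook constants rather than fitting at runtime, and the exact training distribution is an assumption here.

```python
import random
import statistics

def lloyd_max(samples, bits, iters=50):
    """Fit a Lloyd-Max optimal scalar quantizer (codebook of 2**bits levels)
    to 1-D samples by alternating boundary/centroid updates."""
    k = 2 ** bits
    samples = sorted(samples)
    n = len(samples)
    # initialize centroids at evenly spaced quantiles
    centroids = [samples[(i * n) // k + n // (2 * k)] for i in range(k)]
    for _ in range(iters):
        # decision boundaries sit midway between adjacent centroids
        bounds = [(centroids[i] + centroids[i + 1]) / 2 for i in range(k - 1)]
        # partition the (sorted) samples into quantization cells
        cells = [[] for _ in range(k)]
        j = 0
        for x in samples:
            while j < k - 1 and x > bounds[j]:
                j += 1
            cells[j].append(x)
        # each centroid moves to the mean of its cell (kept if the cell is empty)
        centroids = [statistics.mean(c) if c else centroids[i]
                     for i, c in enumerate(cells)]
    return centroids

def quantize(x, centroids):
    """Map a scalar to the index of the nearest codebook entry."""
    return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
```

In the PR's setting, one such codebook (per bit width) is applied to every coordinate after the rotation has made the coordinate distribution approximately Gaussian, which is what makes a single precomputed codebook viable.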

Everything lives in a single new file mlx_lm/models/turboquant.py (~200 lines, pure mlx.core, no numpy). Nothing is imported unless --turbo-kv-bits is passed or to_turboquant() is called.

Changes

| File | What |
| --- | --- |
| models/turboquant.py | New file: TurboQuantKVCache + PolarQuant helpers |
| models/cache.py | Lazy class resolver in load_prompt_cache (+10 lines); to_turboquant() on KVCache (+13 lines) |
| generate.py | --turbo-kv-bits CLI arg, maybe_turboquant_kv_cache() (+14 lines) |
| tests/test_prompt_cache.py | 7 new tests (+127 lines) |

No new dependencies. Codebook constants are inlined. Rotation matrix generated via mx.linalg.qr.
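
The rotation construction (Q factor of a random Gaussian matrix) can be reproduced without mlx via Gram-Schmidt, which is mathematically what a QR decomposition yields; the PR itself uses mx.linalg.qr. A pure-Python sketch:

```python
import math
import random

def random_orthogonal(dim, seed=0):
    """Random orthogonal matrix: orthonormalize Gaussian vectors by
    Gram-Schmidt, equivalent to the Q factor of a QR decomposition."""
    rng = random.Random(seed)
    rows = []
    for _ in range(dim):
        v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        # subtract projections onto previously accepted rows
        for q in rows:
            dot = sum(a * b for a, b in zip(q, v))
            v = [a - dot * b for a, b in zip(v, q)]
        norm = math.sqrt(sum(a * a for a in v))
        rows.append([a / norm for a in v])
    return rows  # orthonormal rows

def rotate(vec, q_rows):
    """Apply the rotation: y_i = <q_i, x>. Orthogonality preserves norms,
    so quantization error is not amplified by the transform."""
    return [sum(a * b for a, b in zip(row, vec)) for row in q_rows]
```

Because the rotation is fixed by a seed, only the seed (not the matrix) needs to survive save/load, which fits the PR's "no new dependencies, constants inlined" design.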

Results

Tested on Apple Silicon across Llama 3.2-1B/3B and Qwen3-1.7B/4B. Logit cosine similarity vs FP16 KV cache:

| Model (head_dim) | 3-bit cosine | 4-bit cosine | 3-bit compression |
| --- | --- | --- | --- |
| Llama 3.2-3B (128) | 0.988 | 0.997 | 4.6x |
| Qwen3-4B (128) | 0.957 | 0.995 | 4.6x |
| Llama 3.2-1B (64) | 0.823 | 0.974 | 4.0x |
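
As a rough sanity check on the ratios above: the ideal compression versus FP16 is 16/bits, and per-vector metadata (scales, norms, etc.) eats into that, more so at small head_dim. The PR does not spell out its storage layout, so the overhead term below is an assumption for illustration only:

```python
def compression_ratio(head_dim, bits, overhead_bytes=0.0):
    """Approximate KV compression vs. FP16 for one head vector.
    overhead_bytes models hypothetical per-vector metadata; the actual
    layout used by TurboQuantKVCache is not specified here."""
    fp16_bytes = head_dim * 2
    quant_bytes = head_dim * bits / 8 + overhead_bytes
    return fp16_bytes / quant_bytes
```

With zero overhead this gives 16/3 ≈ 5.3x at 3-bit; a few bytes of per-vector overhead brings it into the 4-5x range reported, and proportionally lower for head_dim=64, consistent with the table.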

Top-1 token accuracy is perfect at 4-bit across all models and prompts. Full benchmark and methodology at https://github.com/rachittshah/mlx-turboquant/blob/main/REPORT.md

Known limitations

Decode throughput is ~0.5x FP16 due to the dequantize-on-fetch path. A fused Metal kernel (analogous to mx.quantized_matmul) would close this gap but is out of scope for this PR.

Quality degrades below 4 bits for some models (Qwen3-1.7B drops noticeably at 3-bit); 4-bit is the recommended default.

Usage

```shell
python -m mlx_lm.generate --model mlx-community/Llama-3.2-3B-Instruct-4bit \
    --turbo-kv-bits 4 --prompt "The capital of France is"
```

```python
from mlx_lm.models.turboquant import TurboQuantKVCache

cache = [TurboQuantKVCache(bits=4) for _ in range(num_layers)]

# or convert an existing cache
tq = kv_cache.to_turboquant(bits=3)
```

Tests

All 27 pass (20 existing + 7 new). Tests cover basic ops, quality, trim, save/load, conversion, model integration, and memory.

rachittshah marked this pull request as ready for review on March 26, 2026 at 07:37.
rachittshah force-pushed the feat/turboquant-kv-cache branch from b631010 to fd936ca on March 26, 2026 at 07:43.
rachittshah changed the title from "Add TurboQuantKVCache: data-oblivious KV cache compression at 2-4 bits" to "[Experimental] Add TurboQuantKVCache: PolarQuant KV cache compression at 2-4 bits" on Mar 26, 2026.
Adds opt-in TurboQuant KV cache compression (PolarQuant, ICLR 2026)
in a single self-contained file with zero impact on existing paths.

turboquant.py uses only mlx.core — no numpy. Codebook constants are
inlined (~200 bytes). Rotation matrix via mx.linalg.qr. Lazy-loaded
only when --turbo-kv-bits is passed or to_turboquant() is called.

Changes to existing files:
- cache.py: lazy class resolver in load_prompt_cache (+10 lines),
  to_turboquant() on KVCache (+13 lines)
- generate.py: --turbo-kv-bits CLI arg (+14 lines)
- tests: 7 new tests, all 27 pass

Quality (Llama 3.2-3B, head_dim=128):
  4-bit: 0.997 cosine vs FP16, 3.8x compression
  3-bit: 0.988 cosine vs FP16, 4.6x compression