Record: Order-Adaptive 9-gram Backoff + Distributed Prefill — val_bpb 0.4405 (3-seed mean) #890
Open
sofiabod wants to merge 25 commits into openai:main from
Conversation
- add BigramHash(2048,128) with zero-init and learnable scale (see sketch below)
- add SmearGate: per-dim gate blending with prev token
- weight decay 0.04 on Muon (leaderboard standard)
- muon_momentum 0.99 (from 0.95, leaderboard standard)
- best config baked in: 7L mlp_mult=3 seq_len=4096 etc
- bigram/smear params explicitly added to optimizer groups
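A minimal sketch of the two modules this commit adds. Only the sizes (2048 buckets, 128 dims), the zero-init, and the learnable scale come from the commit; the hash function and the gate parameterization are assumptions:

```python
import torch
import torch.nn as nn

class BigramHash(nn.Module):
    """Hashed bigram embedding: zero-init table plus learnable scale,
    so it starts as a no-op (hash function is an assumption)."""
    def __init__(self, n_buckets: int = 2048, dim: int = 128):
        super().__init__()
        self.n_buckets = n_buckets
        self.emb = nn.Embedding(n_buckets, dim)
        nn.init.zeros_(self.emb.weight)
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, idx: torch.Tensor) -> torch.Tensor:
        # bucket each (prev_token, cur_token) pair; prime multiplier assumed
        prev = torch.roll(idx, shifts=1, dims=1)
        prev[:, 0] = 0
        bucket = (prev * 1000003 + idx) % self.n_buckets
        return self.scale * self.emb(bucket)

class SmearGate(nn.Module):
    """Per-dimension learned blend of each embedding with the previous
    token's embedding (gate init is an assumption: starts near-identity)."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Parameter(torch.full((dim,), -4.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate)             # per-dim weight in (0, 1)
        prev = torch.roll(x, shifts=1, dims=1)
        prev[:, 0] = 0.0
        return (1.0 - g) * x + g * prev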
- add forward_logits() method to GPT for eval without loss computation
- add eval_val_sliding() with configurable stride (default 64) (see sketch below)
- each scored token gets ~4032 tokens of context instead of ~2048 average
- eval-only change: no training modifications, no artifact size change
- expected ~0.03 BPB improvement in reported score
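A sketch of what the sliding-window eval likely looks like. `forward_logits` is the loss-free forward pass named in the commit; the input being a 1-D tensor of byte-level token ids, the handling of the first window, and the bits-per-byte conversion are assumptions:

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def eval_val_sliding(model, tokens, seq_len=4096, stride=64, device="cuda"):
    """Score only the last `stride` tokens of each window, so each scored
    token sees up to seq_len - stride (~4032) tokens of left context."""
    total_nll, total_tokens = 0.0, 0
    for start in range(0, len(tokens) - seq_len - 1, stride):
        window = tokens[start : start + seq_len + 1].to(device)
        x, y = window[:-1].unsqueeze(0), window[1:].unsqueeze(0)
        logits = model.forward_logits(x)               # (1, seq_len, vocab)
        nll = F.cross_entropy(logits[0, -stride:], y[0, -stride:],
                              reduction="sum")
        total_nll += nll.item()
        total_tokens += stride
    return total_nll / total_tokens / math.log(2)      # nats -> bits per byte
```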
- init attn_scale and mlp_scale to 1/sqrt(layer_idx+1) instead of 1.0 (see sketch below)
- deeper layers get smaller residual contributions, stabilizes training
- zero extra params, zero compute overhead
- used by all top submissions per vault research
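The depth-scaled residual gains are simple to sketch. Only the 1/sqrt(layer_idx+1) init of `attn_scale` and `mlp_scale` comes from the commit; the sublayers here are placeholders:

```python
import math
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    def __init__(self, layer_idx: int, dim: int):
        super().__init__()
        init = 1.0 / math.sqrt(layer_idx + 1)    # 1.0, 0.707, 0.577, ...
        self.attn_scale = nn.Parameter(torch.tensor(init))
        self.mlp_scale = nn.Parameter(torch.tensor(init))
        self.attn = nn.Linear(dim, dim)           # placeholder sublayers
        self.mlp = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # deeper layers start with smaller residual contributions
        x = x + self.attn_scale * self.attn(x)
        x = x + self.mlp_scale * self.mlp(x)
        return x
```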
- apply rotary embeddings to first 16 dims of 64 head_dim (25%) (see sketch below)
- remaining 48 dims are position-free, improving generalization
- zero extra params, used by all top submissions per vault research
- configurable via ROPE_DIMS env var (0=all, default=16)
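A sketch of partial RoPE under the usual rotate-half convention. The ROPE_DIMS env var and its semantics are from the commit; the cos/sin layout is an assumption:

```python
import os
import torch

ROPE_DIMS = int(os.environ.get("ROPE_DIMS", "16"))    # 0 = rotate all dims

def apply_partial_rope(q, k, cos, sin, rope_dims=ROPE_DIMS):
    """Rotate only the first `rope_dims` of head_dim; the remaining dims
    stay position-free. q, k: (B, H, T, head_dim); cos, sin: (T, rope_dims // 2)."""
    d = rope_dims if rope_dims > 0 else q.shape[-1]

    def rot(x):
        x_rope, x_pass = x[..., :d], x[..., d:]
        x1, x2 = x_rope.chunk(2, dim=-1)
        rotated = torch.cat((x1 * cos - x2 * sin,
                             x1 * sin + x2 * cos), dim=-1)
        return torch.cat((rotated, x_pass), dim=-1)

    return rot(q), rot(k)
```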
- TTT: 5 epochs at lr=0.0005 (matching SOTA PR openai#442)
- use DDP model for TTT forward pass to sync gradients across GPUs
- shard validation tokens across ranks for proper distributed TTT
- batch size 4 seqs/GPU, modal timeout 1800s
- legal score-first TTT: score chunk, then adapt on scored tokens (1 seq to avoid OOM) (see sketch below)
- SGD+momentum, freeze early 2 blocks, 3 epochs, lr=0.005, adapt every 4 batches
- GPTQ-lite: test 5 clip percentiles per row, pick best MSE
- Tight SWA: collect 12 checkpoints when lr_scale<0.2, average before export
- int8 with SWA+GPTQ: 1.1787 (improved from 1.1802)
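The "legal" ordering is the important part: every token is scored before the model ever trains on it. A sketch under stated assumptions; the momentum value, the `model.blocks` layout, and the model returning (B, T, vocab) logits are not specified by the commit:

```python
import torch
import torch.nn.functional as F

def legal_ttt_eval(model, batches, adapt_every=4, epochs=3, lr=0.005):
    for block in model.blocks[:2]:                   # freeze early 2 blocks
        for p in block.parameters():
            p.requires_grad_(False)
    opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                          lr=lr, momentum=0.9)       # momentum value assumed
    total_nll, total_tokens, scored = 0.0, 0, []
    for i, (x, y) in enumerate(batches):
        with torch.no_grad():                        # score first, current weights
            loss = F.cross_entropy(model(x).flatten(0, 1), y.flatten())
        total_nll += loss.item() * y.numel()
        total_tokens += y.numel()
        scored.append((x, y))
        if (i + 1) % adapt_every == 0:               # then adapt, scored tokens only
            for _ in range(epochs):
                for xa, ya in scored[-adapt_every:]:
                    opt.zero_grad(set_to_none=True)
                    F.cross_entropy(model(xa).flatten(0, 1),
                                    ya.flatten()).backward()
                    opt.step()
    return total_nll / total_tokens                  # mean NLL over scored tokens
```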
- 11 layers, XSA on last 4, int6 quantization + zstd-22
- EMA(0.997), GPTQ-lite, Tight SWA, Late QAT@0.15
- Partial RoPE 16/64, LN Scale 1/sqrt(layer+1)
- SmearGate + BigramHash(2048,128), VE128 on layers 9,10
- Muon WD=0.04, momentum=0.99, matrix_lr=0.025
- SDPA fallback (no FA3), batch 786K, seq 2048
- add zstandard to Modal image
- flash-attn requires GPU for compilation, Modal builds without GPU
- keeping SDPA fallback, ~101ms/step
- still have FA3 import attempt in code for when it becomes available
- attempt flash-attn pip install at runtime with 120s timeout
- still falls back to SDPA if install fails
- 101ms/step with SDPA, ~84ms with FA3
- replace relu(x)^2 with leaky_relu(x, 0.5)^2 (see sketch below)
- PR openai#493 reaches 1.1309 with partial stack using this activation
- untried on full openai#414 stack — could give -0.002 to -0.005 BPB
- zero param cost, zero speed overhead
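The swap is one line. A sketch of the MLP with the activation replaced, with layer names taken from a later commit and the hidden width assumed from the 3.5x MLP in the description:

```python
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, dim: int, mult: float = 3.5):
        super().__init__()
        hidden = int(dim * mult)
        self.fc = nn.Linear(dim, hidden)
        self.proj = nn.Linear(hidden, dim)

    def forward(self, x):
        # was: F.relu(self.fc(x)) ** 2
        # leaky variant keeps the squared-ReLU shape for x > 0 but lets
        # negative pre-activations contribute (0.5x)^2 instead of 0
        return self.proj(F.leaky_relu(self.fc(x), negative_slope=0.5).square())
```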
…enai#486)
- 30 epochs AdamW(lr=0.0005) on val tokens with cosine LR decay
- per-layer LR: 3x for mlp.proj (high quant error), 0.5x for mlp.fc (see sketch below)
- DDP gradient sync via all_reduce(AVG) + grad clip 1.0
- keep LeakyReLU(0.5)^2 from exp48
- expected: ~0.06 BPB gain (1.127 → ~1.07)
- modal timeout 3600s for 30-epoch TTT
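A sketch of the per-layer LR grouping; the substring matching against parameter names is an assumption based on the fc/proj naming in the commit:

```python
import torch

def build_ttt_optimizer(model, base_lr=5e-4):
    proj, fc, rest = [], [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        if "mlp.proj" in name:
            proj.append(p)       # 3x LR: highest quantization error
        elif "mlp.fc" in name:
            fc.append(p)         # 0.5x LR
        else:
            rest.append(p)
    return torch.optim.AdamW([
        {"params": proj, "lr": 3.0 * base_lr},
        {"params": fc,   "lr": 0.5 * base_lr},
        {"params": rest, "lr": base_lr},
    ], lr=base_lr)
```

Per the commit, each step would also clip gradients, e.g. `torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)` after the DDP gradient average.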
- TTT_MODE=preeval (default): bulk train then score (max BPB, may be invalid)
- TTT_MODE=legal: score chunk first, then train on scored tokens (valid for records)
- legal TTT unfreezes last 2 blocks + norms + scales + embeddings
- 1528 lines (over 1500 baseline limit but OK for records folder)
…ual hash tables, per-window score-first, entropy-adaptive alpha, tc>0 check)
…gramHash 6144, int5, stride=32) + 9-gram prefill
…_bpb=0.4405, 3 seeds)
Results
val_bpb 0.4405 (3-seed mean).
Method
11-layer transformer (512d, 8/8 full MHA, XSA-all, LeakyReLU(0.5)², 3.5x MLP).
Order-adaptive entropy-gated 9-gram backoff cache with per-order entropy thresholds
and distributed cache prefill. Score-first, backward-looking, deterministic.
Architecture
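For reference, the hyperparameters scattered across the commits above, collected in one place. Field names are illustrative; values are taken from the commit messages, with conflicts noted:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    """Collected from the commits above; field names are illustrative."""
    n_layer: int = 11
    n_embd: int = 512
    n_head: int = 8            # 8/8 full MHA; XSA on last 4 per one commit,
                               # XSA-all per the method summary
    head_dim: int = 64
    mlp_mult: float = 3.5      # with LeakyReLU(0.5)^2 activation
    rope_dims: int = 16        # partial RoPE: first 16 of 64 head dims
    seq_len: int = 2048        # training; sliding eval uses longer windows
    ngram_max_order: int = 9   # eval-time backoff cache (next section)
    eval_stride: int = 32      # final stride per the latest commit
```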
Eval-time N-gram Cache
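A sketch of the order-adaptive backoff cache as described below: counts per order 2..9, and the highest order wins if its context has been seen and the model's entropy clears that order's threshold. The count structure, threshold interface, and entropy units are assumptions:

```python
from collections import defaultdict

class NgramBackoffCache:
    """Backoff cache over orders 2..MAX_ORDER; backward-looking only."""
    MAX_ORDER = 9

    def __init__(self, entropy_thresholds):
        # entropy_thresholds[k]: max model entropy (bits) at which order k is trusted
        self.thresholds = entropy_thresholds
        self.counts = [defaultdict(lambda: defaultdict(int))
                       for _ in range(self.MAX_ORDER + 1)]

    def update(self, tokens):
        """Insert already-scored tokens into every order's count table."""
        for k in range(2, self.MAX_ORDER + 1):
            for i in range(len(tokens) - k + 1):
                ctx, nxt = tuple(tokens[i:i + k - 1]), tokens[i + k - 1]
                self.counts[k][ctx][nxt] += 1

    def predict(self, context, model_entropy):
        """Highest-order hit whose entropy gate opens; None = pure neural."""
        for k in range(self.MAX_ORDER, 1, -1):
            ctx = tuple(context[-(k - 1):])
            hits = self.counts[k].get(ctx)
            if hits and model_entropy < self.thresholds[k]:
                total = sum(hits.values())
                return {t: c / total for t, c in hits.items()}
        return None
```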
Key Insight
Distributed cache prefill is critical — without it, ranks 1-7 start with cold caches,
losing ~60% of n-gram effectiveness. Prefill makes distributed eval equivalent to
single-GPU sequential eval. Combined with 9-gram orders (capturing longer repeated
phrases) and per-order entropy gating (trusting higher orders at lower uncertainty),
this produces a -0.69 BPB gain over neural-only sliding window eval.
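A sketch of the prefill, assuming contiguous equal shards across ranks; `cache.update` is the backward-looking insert from the sketch above:

```python
def shard_with_prefill(cache, val_tokens, rank, world_size):
    """Each rank scores one contiguous shard. Prefilling with every token
    that precedes the shard reproduces the cache state a single-GPU
    sequential pass would have at the shard boundary."""
    shard_len = len(val_tokens) // world_size
    start = rank * shard_len
    cache.update(val_tokens[:start])   # warm cache: ranks > 0 no longer cold
    return val_tokens[start : start + shard_len]
```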
Legality
Each window follows three steps: (1) score its tokens against the cache built from previously scored text, (2) compute the blended loss, (3) update the cache with the window's tokens. The cache only uses backward-looking tokens that have already been scored; there is no future data access. Order gating depends only on the model's output entropy, not the target token, so there is no oracle/hindsight selection.
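A per-window sketch of that protocol, reusing the cache sketch above; the blend weight `alpha` and the probability/entropy layouts are assumptions:

```python
import math

def score_window(neural_probs, entropies, window_tokens, prior_tokens,
                 cache, alpha=0.5):
    """One score-first window: (1) backward-looking cache lookup,
    (2) blended loss, (3) cache update only after scoring."""
    nll = 0.0
    for i, tok in enumerate(window_tokens):
        ctx = prior_tokens + window_tokens[:i]
        cache_probs = cache.predict(ctx, entropies[i])             # (1)
        p = neural_probs[i][tok]
        if cache_probs is not None:
            p = (1 - alpha) * p + alpha * cache_probs.get(tok, 0.0)  # (2)
        nll += -math.log(max(p, 1e-12))
    cache.update(window_tokens)                                    # (3)
    return nll
```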
Acknowledgments
Huge thanks to the incredible community: