Record: BackoffNgramMixer + Drift-Free TTT (3-seed mean val_bpb=0.6683)#779
Open
deanbrr wants to merge 1 commit into openai:main from
Conversation
awesome
Author
Thank you. It's causing a big stir; some are calling it gaming.
It was definitely a gamer move, but I don't think it's gaming. This came from my nights of studying and testing...
Force-pushed from 611612e to bd5e1b9
travispchen added a commit to travispchen/parameter-golf that referenced this pull request on Mar 26, 2026:
…5466, 3-seed mean)
Adds order-adaptive entropy gating on top of PR openai#779's BackoffNgramMixer + Drift-Free TTT. Per-order entropy centers replace the single threshold: higher n-gram orders are trusted at lower entropy.
3-seed validation: 0.5478, 0.5458, 0.5463 (mean 0.5466, std 0.0010). All artifacts strictly under 16,000,000 bytes.
Co-Authored-By: Travis Chen <travispchen@gmail.com>
Record: BackoffNgramMixer + Drift-Free TTT (3-seed mean val_bpb=0.6683)
3-seed mean val_bpb: 0.6683 (std 0.0024), all artifacts under 16 MB, 8xH100 SXM, 600s training + 371s eval.
Results:
Seed 1337: 0.6663 BPB, 15.63 MB artifact
Seed 42: 0.6710 BPB, 15.78 MB artifact
Seed 2024: 0.6675 BPB, 15.48 MB artifact
Background:
I introduced the first n-gram eval cache in this competition (PR #659, val_bpb=1.0920, March 22 2026). That approach used a 5-gram cache with an oracle safety gate that the organizers later ruled illegal. This submission replaces the oracle gate with entropy-adaptive mixing and multi-order backoff, combined with a drift-free TTT configuration.
Technique:
Multi-order n-gram backoff (orders 2-7). Try highest order first, cascade down on miss. Each order uses 4M hash buckets. Counts accumulated from already-scored tokens only.
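A minimal sketch of the backoff lookup described above. The dict-based tables, `bucket` hash, and function names are illustrative stand-ins, not the PR's actual implementation (which presumably uses flat count arrays for the 16 MB budget):

```python
import numpy as np

NUM_BUCKETS = 4_000_000   # 4M hash buckets per order, as stated above
ORDERS = range(7, 1, -1)  # try the highest order (7) first, back off to 2

# One count table per order: bucket -> {next_token: count}
# (hypothetical dict layout, for illustration only)
tables = {n: {} for n in range(2, 8)}

def bucket(context):
    """Hash an n-gram context tuple into a fixed bucket range."""
    return hash(context) % NUM_BUCKETS

def update(history, token):
    """Accumulate counts from already-scored tokens only."""
    for n in range(2, 8):
        if len(history) >= n - 1:
            ctx = tuple(history[-(n - 1):])
            slot = tables[n].setdefault(bucket(ctx), {})
            slot[token] = slot.get(token, 0) + 1

def ngram_probs(history, vocab_size):
    """Backoff: return a distribution from the longest matching order."""
    for n in ORDERS:
        if len(history) < n - 1:
            continue
        slot = tables[n].get(bucket(tuple(history[-(n - 1):])))
        if slot:  # hit: normalize this order's counts
            p = np.zeros(vocab_size)
            for tok, count in slot.items():
                p[tok] = count
            return p / p.sum()
    return None  # miss at every order: caller uses the model alone
```

The cascade means a rare long context falls through to progressively shorter, better-populated contexts instead of returning nothing.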
Entropy-adaptive alpha: alpha = 0.05 + 0.55 * sigmoid(2 * (H - 4.0)), where H is the model's output entropy. High entropy shifts weight toward the n-gram distribution; low entropy trusts the model. Alpha depends only on the model's own output distribution, never on the true target. The mixed probability is always applied; there is no oracle gate.
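The alpha formula written out directly. The entropy base (bits here) is an assumption, since the PR does not state it, and the function names are illustrative:

```python
import math

def model_entropy(probs):
    """Shannon entropy of the model's output distribution (bits assumed)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mix_weight(probs):
    """alpha = 0.05 + 0.55 * sigmoid(2 * (H - 4.0)).
    Uses only the model's own distribution, never the true target."""
    h = model_entropy(probs)
    return 0.05 + 0.55 / (1.0 + math.exp(-2.0 * (h - 4.0)))

def mix(model_probs, ngram_probs):
    """Always-applied mixture: no oracle gate, just entropy weighting."""
    a = mix_weight(model_probs)
    return [(1 - a) * pm + a * pn for pm, pn in zip(model_probs, ngram_probs)]
```

Note the range: alpha stays in (0.05, 0.60), so the model is never fully overridden and the n-gram signal is never fully ignored.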
Drift-free TTT: Q projections only (QTTT=1), eta=0.02, LR=3e-5, 1M token chunks, 1 epoch, no adaptive LR, no Polyak. Produces monotonic BPB improvement through all 60 chunks with no late-chunk reversal.
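A minimal sketch of the score-first chunk loop with the PR's constants collected into one config. `score_fn` and `train_fn` are hypothetical hooks standing in for the real scoring pass and the Q-projection update; only the numbers come from the PR:

```python
# Constants from the PR text; the dict layout itself is illustrative.
TTT_CONFIG = {
    "trainable": "q_proj_only",  # QTTT=1: adapt Q projections only
    "eta": 0.02,
    "lr": 3e-5,
    "chunk_tokens": 1_000_000,   # 1M-token chunks
    "epochs_per_chunk": 1,
    "adaptive_lr": False,        # no adaptive LR
    "polyak": False,             # no Polyak averaging
}

def ttt_eval(chunks, score_fn, train_fn):
    """Score-first TTT: each chunk is scored BEFORE the model trains on
    it, so reported BPB never benefits from seeing its own chunk."""
    bpbs = []
    for chunk in chunks:
        bpbs.append(score_fn(chunk))              # scored under inference mode
        for _ in range(TTT_CONFIG["epochs_per_chunk"]):
            train_fn(chunk, lr=TTT_CONFIG["lr"])  # then adapt on that chunk
    return bpbs
```

Restricting adaptation to the Q projections with a small fixed LR is what keeps the trajectory monotonic: there is no momentum-like state to drift in late chunks.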
Ablation (seed 1337):
Base model (no mixer, no TTT): 1.1363
TTT only (no mixer): 1.1369
Mixer only (no TTT): 0.6712
Full system: 0.6663
The BackoffNgramMixer contributes 99% of the improvement. It is a pure eval-time technique requiring no architectural changes or retraining.
Compliance:
Score-first TTT: each chunk is scored under inference_mode before the model trains on it. Backward-looking n-gram: counts come from already-scored tokens only. No oracle selection. No training-data access at eval time (quantization is naive int5, not data-dependent GPTQ). Token count verified: ratio_scored = 1.000000.
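Naive int5 quantization of the kind mentioned above might look like this. Per-tensor symmetric scaling is an assumption; the PR only says "naive int5, no GPTQ", and the point is that no calibration data is needed:

```python
import numpy as np

def int5_quantize(w):
    """Naive symmetric int5 quantization, per-tensor: 5 bits give integer
    levels in [-15, 15], and no calibration data is touched (unlike GPTQ)."""
    max_abs = np.abs(w).max()
    scale = max_abs / 15.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -15, 15).astype(np.int8)
    return q, scale

def int5_dequantize(q, scale):
    """Reconstruct approximate float weights from int5 codes."""
    return q.astype(np.float32) * scale
```

The round-trip error is bounded by half the scale step, which is the trade for skipping any data-dependent method.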
Credits:
PR #700 RoyiRa: base architecture, TTT framework.
PR #606 gowtham0992: int5 + Soft-Round QAT.
PR #727 Asukabot0: backoff concept, entropy-adaptive alpha formula.
PR #461 Christopher-Lee-McClendon: TTT recipe.
PR #518 sofiabod: LeakyReLU, cosine TTT.
Dean Barr: original n-gram eval-cache concept (first in competition, PR #659), drift-free TTT discovery, BackoffNgramMixer implementation.