1.3055
compression score
What this score means
Quick read before we head down the fairway.
Bits per byte is the challenge score: how many bits the model needs, on average, to predict each byte of unseen text. Lower is better.
Sliding window evaluation should improve BPB for free by giving every token near-maximum context, without changing training or artifact size.
Looper’s Pick
The leaderboard says everyone at the top uses sliding window eval. Instead of evaluating each token with whatever context it happens to have (tokens early in a chunk get almost none), we overlap the windows so every token gets at least 768 tokens of prior context. The training doesn’t change. The artifact doesn’t change. It’s purely an eval-time trick — and it’s reportedly worth about 0.03 BPB for free.
The Shot — Sliding Window Evaluation
What is sliding window evaluation and why is it a free improvement?
Imagine you’re reading a novel but you can only see one page at a time. At the top of each page, you’re disoriented — who was speaking? What was the context? By the bottom of the page, you’ve reoriented and your predictions are much better. Now imagine if you could overlap the pages: for each new page, you re-read the last three-quarters of the previous page first. You’re never disoriented. Every sentence gets nearly the full page of context.
That’s sliding window evaluation. In the standard approach, we split the validation text into non-overlapping chunks of 1,024 tokens. Token #0 in each chunk has zero context: the model is guessing blind. Token #512 has 512 tokens of context, and even token #1023, the best-served in the chunk, sees only the 1,023 tokens before it. On average, each token gets about 512 tokens of context.
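To make the arithmetic concrete, here is a tiny sketch (our own illustration, not competition code) of how much context each token sees under non-overlapping 1,024-token chunks:

```python
# Context available per token under non-overlapping chunked evaluation.
# CHUNK and the token positions below are the figures from the text.
CHUNK = 1024

def context_for(pos_in_chunk: int) -> int:
    """A token only sees the tokens before it within its own chunk."""
    return pos_in_chunk

avg = sum(context_for(i) for i in range(CHUNK)) / CHUNK
print(context_for(0), context_for(512), avg)  # 0 512 511.5
```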
With sliding window (stride=256), we advance by only 256 tokens between evaluation windows. Each new window of 1,024 tokens overlaps with 768 tokens from the previous window. We only score the last 256 tokens — the ones that get at least 768 tokens of context. This means every scored token has near-maximum context, dramatically improving prediction quality.
The trick: this changes nothing about the trained model or the compressed artifact. The model is identical. The only difference is how we evaluate it. The competition explicitly allows this — evaluation can use any sequence length and any strategy within the 10-minute eval budget. Every top submission on the leaderboard uses some form of sliding window.
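The window schedule described above reduces to a few lines of index arithmetic. This is a hedged sketch under the stated settings (window=1,024, stride=256); the function name and the stubbed-out scoring are ours, not the competition harness:

```python
# Sliding-window evaluation schedule: window=1024, stride=256, as in the text.
# Each window covers tokens[start : start + WINDOW]; only tokens from
# score_from onward are scored, so every scored token (past the very first
# window) has at least WINDOW - STRIDE = 768 tokens of prior context.
WINDOW, STRIDE = 1024, 256

def eval_windows(n_tokens: int):
    start = 0
    while start + WINDOW <= n_tokens:
        # The first window must score all its tokens; later windows score
        # only the last STRIDE tokens and use the 768-token overlap as context.
        score_from = start if start == 0 else start + WINDOW - STRIDE
        yield start, score_from
        start += STRIDE

# Every token is scored exactly once, in order.
scored = [t for start, sf in eval_windows(2048) for t in range(sf, start + WINDOW)]
print(len(scored), scored == list(range(2048)))  # 2048 True
```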
The trade-off is eval speed: with stride=256 on 62M validation tokens, we need ~240K forward passes instead of ~60K. On our L40S this took 13.5 minutes (over the 10-minute eval budget), but on 8xH100 it would be much faster. Batching multiple windows together and using a smaller stride (like 64) would further improve BPB at the cost of more compute.
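The compute cost quoted above follows directly from the stride. A back-of-envelope check, using the figures in the text:

```python
# Number of evaluation windows ("forward passes") on the ~62M-token val set.
N_TOKENS = 62_000_000
standard = N_TOKENS // 1024   # non-overlapping chunks: ~60K passes
sliding = N_TOKENS // 256     # stride=256: ~242K passes, ~4x the compute
print(standard, sliding, sliding // standard)  # 60546 242187 4
```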
On the Tee
(Whispering) A quiet revolution today. The competitor has not changed a single weight, not modified a single layer. The model is precisely the same as Hole 12. What has changed is how we look at it. Sliding window evaluation. Every token, every prediction, given the fullest possible context. It’s rather like discovering that the course you’ve been playing has been measured in kilometers rather than miles. The scores… are about to change.
Results
| Metric | Value |
|---|---|
| val_bpb | 1.3055 |
| val_loss | 2.2043 |
| params | ~18,380,000 |
| artifact | 16.71 MB (still over 16MB) |
| wall time | 600s (training) + 814s (eval) |
| eval stride | 256 tokens |
Sliding Window Effect
| Eval method | val_bpb | Delta |
|---|---|---|
| Standard (Hole 12) | 1.3394 | — |
| Sliding window stride=256 (this hole) | 1.3055 | -0.0339 |
Free. Zero cost to training. Zero cost to artifact. Pure eval-time improvement.
Remaining Issues
The artifact is still 16.71 MB — over the 16MB limit. And eval took 13.5 minutes on L40S (over the 10-minute eval budget). Both problems solve themselves on faster hardware, but we still need INT6 quantization or similar to get the artifact under 16MB.
The Booth Reacts
Trent: (Removing glasses in genuine surprise) One-point-three-zero-five-five. From one-point-three-three-nine-four. Thirty-four thousandths of improvement and we did not change a single weight. (Long pause) I confess I find this rather extraordinary. The model was already trained. The artifact was already compressed. And yet, simply by reading the examination paper more carefully — overlapping our windows, granting each token its due context — we have found three hundredths of a bit that were there all along. This is the golfing equivalent of discovering you’ve been scoring with the wrong par. The ball was always in the hole. We simply hadn’t looked.
Slice: (Staring at screen) Thirty-four thousandths. FOR FREE. You know what I did in Q-school ‘04 that made the difference? I didn’t change my swing, didn’t buy new clubs, didn’t hire a new instructor. I started reading my putts from BOTH sides of the hole. Same putt, better read, lower score. THAT’S what sliding window is. We had the talent all along — we just weren’t looking at the scoreboard right. (Turns to camera) And by the way, the leaderboard leaders? They’re using stride=64, not 256. We haven’t even maxed this out yet.
The Card
Picked up strokes on the field
This hole improved 0.0339 on the compression score versus the previous stop. Lower is better here: it means the model predicts unseen text more efficiently. The artifact itself is untouched, and still 0.71 MB over the 16 MB limit.
Training Curve
Massive free gain: 0.0339 BPB. Sliding window eval is the single highest-ROI technique we've found. Eval time is too long at stride=256 on L40S (13.5 min) but will be fast enough on H100.
vs. the Field

[Chart: leaderboard comparison of val_bpb. Field entries at 1.2197, 1.2244, and 1.2244; this hole at 1.3055.]
Model Card
How this hole was run
| Run | Status | Script | Device |
|---|---|---|---|
| round_015_sw256 | ok | train_gpt_valemb_sw.py | cuda |