Compression score: 1.4140
What this score means
Quick read before we head down the fairway.
Bits per byte is the challenge score: how many bits the model needs, on average, to predict each byte of unseen text. Lower is better.
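If you want the conversion, here is a minimal sketch. It assumes the reported validation loss is mean cross-entropy in nats per token; the bytes-per-token ratio below is back-solved purely for illustration, not taken from the run.

```python
import math

def bits_per_byte(loss_nats_per_token: float, tokens: int, text_bytes: int) -> float:
    # Nats -> bits via log(2), then spread the per-token cost over
    # the raw bytes those tokens encode.
    bits_per_token = loss_nats_per_token / math.log(2)
    return bits_per_token * tokens / text_bytes

# Illustration only: at this hole's val_loss of 2.3875 nats/token, a tokenizer
# averaging ~2.44 bytes per token lands near the reported 1.4140 bpb.
print(bits_per_byte(2.3875, tokens=1_000, text_bytes=2_436))  # ~1.414
```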
Dropping from 4 KV heads to 2 will preserve quality while shaving parameters, memory, and step time.
Looper’s Pick
The baseline uses 4 KV heads for 8 query heads — that’s a 2:1 ratio in the grouped query attention. What if we go more aggressive? Drop to 2 KV heads. The model’s been telling us it doesn’t need all that attention capacity — Hole 5 barely noticed losing it. Fewer KV params means slightly faster steps, and we keep the same query expressiveness. Less baggage, same clubs.
The Shot — Reducing KV Heads
What are KV heads and why can we get away with fewer?
In golf, your caddy carries 14 clubs but you rarely use more than 5 or 6 in a given round. The rest are insurance. The question is: can you leave some at home and still play your best?
In a transformer, self-attention works by having each token create three things: a Query (“what am I looking for?”), a Key (“what do I contain?”), and a Value (“what information should I pass along?”). In multi-head attention, these are split into independent “heads” that each attend to different aspects of the context.
The baseline uses 8 query heads but only 4 Key-Value (KV) heads — a technique called Grouped Query Attention (GQA). Each pair of query heads shares one KV head. This saves parameters (the K and V projection matrices are half the size) while preserving most of the attention quality, because the queries can still specialize while reading from the same contextual information.
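Here is a minimal PyTorch sketch of that sharing. The shapes, weight names, and missing details (output projection, rotary embeddings) are assumptions for illustration, not the competition code:

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(x, wq, wk, wv, n_q_heads=8, n_kv_heads=4):
    # x: (B, T, C); wq: (C, C); wk, wv: (C, n_kv_heads * head_dim)
    B, T, C = x.shape
    head_dim = C // n_q_heads
    q = (x @ wq).view(B, T, n_q_heads, head_dim).transpose(1, 2)   # (B, 8, T, hd)
    k = (x @ wk).view(B, T, n_kv_heads, head_dim).transpose(1, 2)  # (B, 4, T, hd)
    v = (x @ wv).view(B, T, n_kv_heads, head_dim).transpose(1, 2)
    # Each KV head is broadcast to its group of query heads: with 8 Q and
    # 4 KV heads, query heads (0, 1) read KV head 0, and so on.
    group = n_q_heads // n_kv_heads
    k = k.repeat_interleave(group, dim=1)                          # (B, 8, T, hd)
    v = v.repeat_interleave(group, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Hypothetical usage with assumed dims:
B, T, C, n_q, n_kv = 2, 16, 512, 8, 4
x = torch.randn(B, T, C)
wq = torch.randn(C, C)
wk = torch.randn(C, (C // n_q) * n_kv)
wv = torch.randn(C, (C // n_q) * n_kv)
out = grouped_query_attention(x, wq, wk, wv, n_q, n_kv)  # (2, 8, 16, 64)
```

Setting n_kv_heads=2 gives this hole's 4:1 grouping; n_kv_heads=1 is the Multi-Query extreme mentioned below.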
We’re pushing this further: 2 KV heads for 8 query heads, a 4:1 ratio. Each KV head now serves 4 query heads. This saves another ~590K parameters and reduces compute per attention layer.
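Where does the ~590K come from? Only the K and V projections shrink. A back-of-the-envelope, with the model dimensions left as placeholders since the exact config isn't shown here:

```python
def kv_projection_savings(d_model: int, head_dim: int, n_layers: int,
                          kv_before: int = 4, kv_after: int = 2) -> int:
    # K and V each lose (kv_before - kv_after) * head_dim output columns
    # per layer; queries, output projection, and MLP are untouched.
    per_layer = 2 * d_model * (kv_before - kv_after) * head_dim
    return per_layer * n_layers

# Any (d_model, head_dim, n_layers) with 4 * d_model * head_dim * n_layers
# near 589,312 reproduces the delta in the results table below
# (17,059,912 - 16,470,600 parameters).
```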
The research is encouraging: Multi-Query Attention (Shazeer, 2019) pushed this idea to its extreme, a single KV head serving all query heads, and the GQA paper (Ainslie et al., 2023) showed that grouped variants in between recover most of full multi-head quality while keeping nearly all of the speed win. The savings compound: fewer KV parameters mean a smaller artifact, faster steps, and more training iterations inside our wall-clock budget.
On the Tee
(Whispering) A subtle change today. The competitor has lightened the bag — two KV heads where there were four. The gallery may not even notice. But the caddy assures us the clubs that remain are more than sufficient for the task at hand.
Results
| Metric | Value |
|---|---|
| val_bpb | 1.4140 |
| val_loss | 2.3875 |
| params | 16,470,600 |
| artifact | 13.85 MB (under the 16 MB cap) |
| wall time | 300s |
| steps completed | 1,377 |
| step avg | 218ms |
| peak memory | 2,660 MiB |
Comparison vs Hole 5 (baseline KV=4)
| Metric | Hole 5 (KV=4) | Hole 7 (KV=2) |
|---|---|---|
| val_bpb | 1.4139 | 1.4140 |
| params | 17,059,912 | 16,470,600 |
| step avg | 229ms | 218ms |
| steps in 5min | 1,309 | 1,377 |
| memory | 2,798 MiB | 2,660 MiB |
Virtually identical BPB with 5% more steps, 5% less memory, and 590K fewer parameters. Free speed.
The Booth Reacts
Trent: And the scorecard reads… one-point-four-one-four-zero. Identical, to all practical purposes, to the previous hole’s one-point-four-one-three-nine. (Slight nod) Now, a lesser commentator might call this a wasted hole. But observe: five percent more steps completed. Five percent less memory consumed. Nearly six hundred thousand fewer parameters. The model has shed weight and lost nothing. In golf, we call that improving your swing mechanics without changing your score. The gains compound later.
Slice: OK so we dropped two KV heads and NOTHING HAPPENED to the score. You know what that tells me? Those heads were FREELOADING. Just sitting there, eating parameters, contributing NOTHING. When I was at Q-school in ‘04, there was a guy who carried 16 clubs — two over the limit. Got disqualified on the first tee. Sometimes less really is more. Keep the two heads, pocket the speed, let’s move on to something that actually moves the needle.
The Card
Played it straight down the middle
This hole gave back 0.0001 on the compression score versus the previous stop. Lower is better here, so prediction efficiency is a hair worse, in exchange for a lighter model that leaves 2,154,559 bytes of artifact headroom under the cap.
Training Curve
Promising free lunch. The score stayed flat while efficiency improved, which makes this a useful ingredient for later holes.
vs. the Field
[Chart: field compression scores 1.2197, 1.2244, 1.2244; this hole at 1.4140.]
Model Card
How this hole was run
Run: round_007_kv2 · Status: ok · Device: cuda