Woodpecker Method Training: 431,000 Puzzles, 60,000 Rated Games, and +35 Elo

A study of the effectiveness of the Woodpecker Method

The Woodpecker Method, solving the same set of tactical puzzles across repeated cycles to build automatic pattern recognition, was popularized by GM Axel Smith and GM Hans Tikkanen. But how well does it actually work at scale? We built Disco Chess, a free platform that automates the method, and analyzed 431,899 Woodpecker Method puzzle attempts from 1,952 users alongside 60,081 of their rated Lichess and Chess.com games. Users who trained consistently gained an average of +35 Elo, with players under 1200 gaining the most (+56 Elo, 80% improved). By cycle 5, users solved puzzles at 2.5x their original efficiency.

Background

A previous Disco Chess study of 120,000 puzzle attempts showed that the Woodpecker Method clearly improves puzzle accuracy and speed - users got faster and more accurate with each cycle. But it couldn't answer the next question: does that improvement transfer to rated games?

![Disco Chess preview](disco-chess-preview.webp)

Since January 2026, Disco Chess users can link their Lichess or Chess.com accounts and import their rated game history. 616 users linked an account, and 569 of them imported games, giving us a direct bridge between Woodpecker Method training data and game performance. This post summarizes what we found across 20 weeks of data.

Disclaimer: The Woodpecker Method is a trademark licensed by Chess.com, LLC and Quality Chess UK LTD, originating from GM Hans Tikkanen. Disco Chess is not affiliated with, endorsed by, or connected to Chess.com, Quality Chess, or Chessable.

The Dataset

| | This study | Previous study |
|---|---|---|
| Users | 1,952 | 1,017 |
| Puzzle attempts | 431,899 | 120,513 |
| Imported rated games | 60,081 | N/A |
| Users with linked accounts | 616 | N/A |
| Observation period | 20 weeks | 7 weeks |

Of 616 users with linked accounts, 498 linked Lichess and 178 linked Chess.com (some linked both; 569 had imported games). Game time controls were predominantly blitz (60%) and rapid (38%).

The Primary Result: +35 Elo

From the 569 users with imported games, we identified 105 who met quality criteria: 7+ active training days and 10+ rated games during their training window. We smoothed ratings using the average of each user's first 5 and last 5 rated games to reduce noise.
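
For concreteness, here is the smoothing step as a minimal Python sketch; the function name and list-of-ratings layout are illustrative, not our production code:

```python
def smoothed_elo_change(ratings: list[int], k: int = 5) -> float:
    """Elo change measured as mean(last k ratings) - mean(first k ratings).

    Averaging k games at each endpoint dampens the noise that a single
    lucky or unlucky result would add to a raw last-minus-first delta.
    """
    if len(ratings) < 2 * k:
        raise ValueError(f"need at least {2 * k} rated games")
    return sum(ratings[-k:]) / k - sum(ratings[:k]) / k

# A user drifting from ~1400 to ~1450 over 10 rated games:
history = [1398, 1405, 1392, 1410, 1401, 1433, 1448, 1452, 1441, 1460]
print(smoothed_elo_change(history))  # 45.6
```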

| Metric | Value |
|---|---|
| Mean Elo change | +34.9 |
| Median Elo change | +35.0 |
| 95% confidence interval | [17.3, 52.5] |
| p-value | < 0.001 |
| Cohen's d | 0.38 (small-to-medium) |
| Users who gained Elo | 71 of 105 (68%) |

68% of users in the quality cohort gained Elo, with the effect strongest for lower-rated players.
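
If you want to check the arithmetic yourself: assuming a standard one-sample t-test on the 105 per-user Elo changes, the confidence interval implies the effect size and p-value:

```python
import math

n, mean_change = 105, 34.9
ci_low, ci_high = 17.3, 52.5

# The 95% CI margin is t_crit * SE, so we can back out the spread.
t_crit = 1.984                         # two-sided 95% point, ~104 df
se = (ci_high - ci_low) / 2 / t_crit   # standard error ~ 8.9 Elo
sd = se * math.sqrt(n)                 # per-user SD ~ 91 Elo

print(f"Cohen's d = {mean_change / sd:.2f}")  # 0.38, matching the table
print(f"t = {mean_change / se:.1f}")          # ~3.9 -> p < 0.001 at 104 df
```

The implied per-user spread of roughly 90 Elo is why Cohen's d is only small-to-medium despite a clearly positive mean.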

Who Benefits Most?

| Starting Elo | n | Avg Elo Change | % Improved |
|---|---|---|---|
| < 1200 | 25 | +56.0 | 80% |
| 1200–1599 | 41 | +34.8 | 68% |
| 1600–1999 | 29 | +15.9 | 55% |
| 2000+ | 10 | +37.9 | 70% |

The effect is largest for players under 1200 (median +70 Elo, 80% improved). The 1200–1599 bracket gained a median of +35 Elo with 68% improving. Above 1600, gains are smaller, which is consistent with the broader chess improvement literature: the higher your rating, the harder each point is to gain. The 2000+ bracket is intriguing (+38 Elo, 70% improved) but has only 10 users, so treat it as anecdotal.

Consistency Matters More Than Volume

This is perhaps the most actionable finding. Using 475 user-week observations, we looked at whether a user's training volume in one week predicted their Elo change the following week.

| Training Volume (this week) | n | Avg Elo Change (next week) |
|---|---|---|
| No training | 125 | -0.4 |
| Light (1–49 puzzles) | 107 | +11.3 |
| Moderate (50–149 puzzles) | 115 | +8.1 |
| Heavy (150+ puzzles) | 128 | +8.3 |

The biggest jump is from zero to any training: +8 to +11 Elo per week versus essentially flat. But there is no additional benefit from heavy training over light training. Doing 20 puzzles a day consistently appears to be as effective as grinding 50+.

Moderate training also reduced rating volatility (SD of weekly Elo change: 24.3, vs. 43.0 for non-trainers), suggesting that regular practice may stabilize performance even when it doesn't dramatically increase it.
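
For the curious, here is a sketch of how a lagged dose-response table like this can be assembled, assuming per-user weekly aggregates are already available (the data layout and bucket edges are illustrative):

```python
from collections import defaultdict
from statistics import mean, stdev

def bucket(puzzles: int) -> str:
    if puzzles == 0:
        return "no training"
    if puzzles < 50:
        return "light (1-49)"
    if puzzles < 150:
        return "moderate (50-149)"
    return "heavy (150+)"

def dose_response(user_weeks: dict[str, list[tuple[int, float]]]) -> None:
    """user_weeks maps user id -> [(puzzles solved, Elo change), ...] per week.

    Pairs week t's training volume with week t+1's Elo change, then
    groups the pairs by volume bucket.
    """
    groups: dict[str, list[float]] = defaultdict(list)
    for weeks in user_weeks.values():
        for (volume, _), (_, next_change) in zip(weeks, weeks[1:]):
            groups[bucket(volume)].append(next_change)
    for name, changes in groups.items():
        print(f"{name:20} n={len(changes):3} avg={mean(changes):+5.1f} "
              f"sd={stdev(changes):4.1f}")
```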

The Woodpecker Method: Cycle-Over-Cycle Results

The Woodpecker Method asks you to solve the same puzzle set repeatedly across cycles, building speed and automaticity. With 3.6x more data than our previous study, the cycle progression is confirmed:

| Cycle | User-Set Pairs | Accuracy | Avg Solve Time | Efficiency Multiplier |
|---|---|---|---|---|
| 1 | 3,230 | 80.1% | 34.3s | 1.00x |
| 2 | 276 | 89.5% | 26.1s | 1.47x |
| 3 | 130 | 90.4% | 20.9s | 1.85x |
| 4 | 76 | 90.5% | 16.9s | 2.29x |
| 5 | 46 | 92.1% | 15.6s | 2.53x |

Accuracy plateaus around 90% after cycle 2, but solve time keeps dropping. By cycle 5, users solve puzzles at 2.5x their original efficiency. This is the Woodpecker Method working as theorized: repetition converts conscious calculation into pattern recognition.
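
Concretely, the efficiency multiplier is accuracy per second of solve time, normalized so cycle 1 equals 1.00x; a few lines of Python reproduce the table's last column:

```python
# (accuracy %, avg solve time in s) per cycle, from the table above
cycles = {1: (80.1, 34.3), 2: (89.5, 26.1), 3: (90.4, 20.9),
          4: (90.5, 16.9), 5: (92.1, 15.6)}

base_acc, base_time = cycles[1]
for c, (acc, secs) in cycles.items():
    # efficiency = correct solutions per second, relative to cycle 1
    mult = (acc / secs) / (base_acc / base_time)
    print(f"cycle {c}: {mult:.2f}x")
# cycle 1: 1.00x, cycle 2: 1.47x, ..., cycle 5: 2.53x
```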

The drop from 3,230 user-set pairs at cycle 1 to 276 at cycle 2 means 91% of users did not complete a second cycle within the observation window. Survivorship bias is real here - the users who keep going may be systematically different from those who stop. But the effect size at cycle 2 is based on a substantial n=276 and is unlikely to be explained entirely by selection.

Users who completed 3–4 cycles of puzzle sets and also played rated games showed the highest average Elo gain (+49 Elo, n=20), compared to +13 Elo for single-cycle users (n=70).

Spaced Repetition for Mistakes

Disco Chess includes an Anki-style spaced repetition system: when you get a puzzle wrong, it enters a review queue with escalating intervals. We compared accuracy gains between cycles based on how much users engaged with this system.

| Review Activity | n | Cycle 1 Accuracy | Cycle 2+ Accuracy | Gain |
|---|---|---|---|---|
| No reviews | 26 | 83.3% | 88.4% | +5.1pp |
| Heavy reviews (200+) | 87 | 81.0% | 90.3% | +9.3pp |

Heavy reviewers started with the *lowest* cycle 1 accuracy but achieved the *highest* later-cycle accuracy. The review system appears most valuable for users who make more mistakes - which is exactly when it generates the most material. The caveat: heavy reviewers are also the most engaged users overall, so we cannot cleanly separate the review mechanism from general dedication.
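
For readers who haven't used Anki-style systems, the core mechanic is an interval ladder. A minimal sketch is below; the specific intervals and the reset-on-miss rule are illustrative assumptions, not Disco Chess's actual scheduler:

```python
from datetime import date, timedelta

INTERVALS = [1, 3, 7, 14, 30]  # days between reviews (illustrative values)

class ReviewItem:
    """A puzzle that enters the review queue after being missed."""

    def __init__(self, puzzle_id: str) -> None:
        self.puzzle_id = puzzle_id
        self.stage = 0
        self.due = date.today() + timedelta(days=INTERVALS[0])

    def review(self, solved: bool) -> None:
        if solved:
            # Escalate: each success pushes the next review further out.
            self.stage = min(self.stage + 1, len(INTERVALS) - 1)
        else:
            # Missed again: drop back to the start of the ladder.
            self.stage = 0
        self.due = date.today() + timedelta(days=INTERVALS[self.stage])
```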

The Delayed Effect

Tracking the quality cohort's Elo trajectory week by week reveals a characteristic pattern:

| Week | Avg Elo Change from Start | Training Attempts |
|---|---|---|
| 0–2 | +9 to +10 | 115–182/week |
| 3–4 | +19 to +34 | 113–130/week |
| 5–6 | +43 to +48 | 94–119/week |
| 7+ | +51 to +59 | 35–66/week |

Training volume peaks in weeks 1–2, but Elo gains accelerate in weeks 3–6 - even as training tapers off. This is consistent with a delayed transfer effect: the pattern recognition built through puzzle repetition takes time to integrate into actual game play. If you start training and see no rating change after two weeks, that is normal.
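
The trajectory computation itself is simple: for each week, average (across users still observed that week) the difference between their current and starting rating. A sketch, assuming smoothed weekly ratings per user:

```python
from statistics import mean

def weekly_trajectory(weekly_elo: dict[str, list[float]], max_week: int) -> None:
    """weekly_elo maps user id -> [rating at week 0, week 1, ...]."""
    for week in range(1, max_week + 1):
        deltas = [elos[week] - elos[0]
                  for elos in weekly_elo.values() if len(elos) > week]
        if not deltas:
            break
        print(f"week {week}: {mean(deltas):+.1f} Elo from start (n={len(deltas)})")
```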

What This Study Cannot Show

We want to be explicit about the limitations:

- No randomized control group. We did not randomly assign users to "train" vs. "don't train" groups. The correlation between training and Elo gain is consistent with causation but does not prove it.
- Pre-existing improvement trend. Users who seek out a training platform are likely already trying to improve. Their +23 Elo pre-training trend confirms this. Some or all of the post-training gain may be a continuation of existing momentum.
- Selection bias. The quality cohort (105 users) represents users who trained for 7+ days and played 10+ games. The 73% of users who left within a week are not represented. Our results describe what happens for users who commit to training, not what happens for everyone who tries.
- Confounding activities. Users may have simultaneously been reading chess books, watching videos, taking lessons, or analyzing their games. We cannot separate the effect of puzzle training from other improvement activities.
- Small subgroups. The 2000+ bracket has 10 users. High-cycle analyses involve fewer than 50 user-set pairs. These are exploratory, not definitive.

Practical Takeaways

For what they are worth given the limitations above:

  1. Just start. The dose-response data shows the largest benefit is from going from no training to any training. Even light, consistent practice (a few puzzles a day) predicted positive Elo movement.
  2. Be consistent, not intense. 20–30 puzzles a day, done regularly, appears as effective as higher volumes. Find a pace you can sustain.
  3. Repeat your puzzle sets. Users who completed multiple Woodpecker cycles showed larger Elo gains than single-cycle users, and the efficiency data confirms real improvement with repetition.
  4. Review your mistakes. Active engagement with missed puzzles - not just moving on - was associated with the largest accuracy improvements.
  5. Be patient. Expect 3–5 weeks before training visibly affects your rating. Pattern recognition takes time to transfer from puzzles to games.
  6. Lower-rated players stand to gain the most. If you are under 1600, the data is most encouraging for you.

We plan to continue collecting data and may revisit this analysis when we have a larger sample of users with linked accounts, particularly at higher rating brackets. If you train on Disco Chess and link your Lichess account, your (anonymized) data contributes to future studies. If you have questions about the methodology or want to poke holes in the analysis, we welcome it. Good data gets better with scrutiny.