In my years as a senior program manager, I saw a pattern repeat itself over and over again. In the early days of a release, we’d develop increasingly detailed plans and estimates. By the time we were ready to build, the plans looked solid. There were risks, but we had mitigation plans in place.
Then, as the release neared its end, things would get tight, and eventually we’d schedule a meeting to discuss pushing the date out. We’d make tough calls to accept some known quality issues. We’d plan for lots of overtime. And we’d vow that *next time* we’d do a better job.
We recently wrote about that early-optimism, late-crisis pattern, especially the need to map core complexity and probe it early. Here, I’ll focus on how to change our approach to estimation so we get better forecasts and make room to probe the core complexity.
When I was in the middle of that experience, it felt like there were only two options. One was to keep repeating the cycle: build the detailed plan, estimate all the pieces, add them up, apply some buffering, and hope this time we’d get it right.
The other option was the one recommended by some in the agile community: stop trying to estimate so far out and just do rolling planning every few weeks. I’ve seen that work in two or three contexts. But most executive teams still have to make quarterly financial calls, review strategy and plans with the board or investors, commit budget, and decide where to invest time and people across a portfolio of options. “Figuring it out as we go” doesn’t address the bigger questions they need to answer.
There is a third option, and it has become a core part of CAPED™ (Complexity-Aware Planning, Estimation, & Delivery). It solves several problems at once. It is much faster than building a detailed component-based estimate. It produces more accurate forecasts. And it gives teams the air cover they need to do Active Planning instead of trying to map every piece of the puzzle up front and hoping the math holds.
The third option is called Reference Class Forecasting. Kahneman and Tversky helped explain why the usual approach goes wrong. When we estimate by breaking the current project into tasks and imagining how they’ll go, we’re using what’s often called the inside view.
Inside-view estimation goes wrong in predictable ways. It’s subject to optimism bias: teams genuinely believe they can avoid the problems that hit the last project. It amplifies the planning fallacy: the more time we spend building a plan, the more believable it becomes, regardless of whether the underlying data is accurate. And it ignores complexity: some of the most important parts of the work can’t be seen clearly at the start.
So Kahneman and Tversky recommended a different approach: take an outside view. Look at similar efforts from the past. Group them by meaningful similarities. That gives you a reference class. Then look at what actually happened in that class. How long did those efforts take? What range of outcomes resulted? How often did they run long, and by how much?
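The steps above are simple enough to sketch in code. The example below is a minimal illustration with entirely hypothetical data: a reference class of past releases, each with its original estimate and actual duration. It converts each past effort into an overrun ratio, then applies the median and 80th-percentile ratios to a current inside-view estimate. The figures, the nearest-rank percentile choice, and the variable names are all assumptions for illustration, not CAPED™ specifics.

```python
# Illustrative Reference Class Forecasting sketch with hypothetical data.
# The past-project numbers below are made up, not real benchmarks.

import statistics

# Reference class: similar past releases, as (estimated_weeks, actual_weeks).
reference_class = [
    (10, 14), (8, 9), (12, 20), (6, 7), (10, 13),
    (9, 15), (14, 12), (7, 11), (11, 12), (8, 13),
]

# Overrun ratio for each past effort: actual / estimated.
ratios = sorted(actual / est for est, actual in reference_class)

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an ascending list (0 < p <= 100)."""
    k = round(p / 100 * len(sorted_vals)) - 1
    return sorted_vals[max(0, min(len(sorted_vals) - 1, k))]

inside_view_estimate = 12  # weeks, from the current task breakdown

median_ratio = statistics.median(ratios)
p80_ratio = percentile(ratios, 80)

print(f"Median forecast: {inside_view_estimate * median_ratio:.1f} weeks")
print(f"80th-percentile forecast: {inside_view_estimate * p80_ratio:.1f} weeks")
print(f"Ran long in {sum(r > 1 for r in ratios)} of {len(ratios)} past efforts")
```

Notice that the outside view gives you a range, not a single number: a median case for planning and a higher-percentile case for commitments, grounded in how this class of work has actually gone before.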
This turns out to be both faster and more accurate than building a detailed inside-view estimate. You are no longer pretending you can reason your way to certainty from an early task breakdown. You are starting from real outcomes instead.
Reference Class Forecasting works so well that mega-project expert Bent Flyvbjerg uses it on the major projects he advises and writes about it in *How Big Things Get Done*. It’s well worth reading. A lot of what he describes will feel familiar to our audience, especially the need for active planning and small-slice delivery once a project is underway.
Join our free webinar next week to see how this works in practice. We’ll show how Reference Class Forecasting improves forecasts and how that frees teams to use iterative probe cycles in Active Planning and good Agile practices during delivery, avoiding the early-optimism, late-crisis patterns too many of us have been stuck in for too long.