Insights for Leaders | 2 to 3 minute read
Most organisations are reasonably good at making decisions. Very few are good at learning from them. The gap between the two is not a matter of intent: almost every leadership team will tell you they review what worked and what didn't. The gap is a matter of honesty, and of building evaluation into the process before the decision is made, not as an afterthought once the outcome is known.
Done well, evaluation is one of the highest-return activities available to a leadership team. Done poorly, or not at all, it ensures that the same mistakes get made with increasing confidence each time.
The most common form of evaluation in organisations is the post-mortem, a structured review conducted after something has gone wrong. It has value. But it has two significant limitations.
The first is survivorship. Post-mortems happen after failures. The decisions that went well rarely receive the same scrutiny, even when the good outcome owed as much to luck, timing, and factors unrelated to the quality of the decision. Over time, this produces leadership teams that have a detailed understanding of how they fail, and a significantly inflated understanding of how well they decide when things go right.
The second is hindsight bias. Once an outcome is known, it becomes almost impossible to evaluate the decision that produced it on its own terms: the information available at the time, the uncertainty that existed, the options that were genuinely on the table. The tendency is to judge the decision by its outcome, which tells you very little about whether the process was sound.
A good decision can produce a bad outcome. A poor decision can produce a good one. Conflating the two is not just intellectually sloppy; it actively degrades the quality of future decisions.
The most underused form of evaluation happens before the decision is finalised, not after. It involves asking, explicitly: how will we know if this was the right decision? What would we expect to observe, and over what timeframe, if the assumptions underlying this choice are correct? What would signal that they're wrong?
These questions force a precision that most decision-making processes avoid. They require the leadership team to commit to the logic of the decision in a way that makes it testable. And they create a baseline against which actual outcomes can be measured, not against the comfortable standard of "things turned out reasonably well," but against the specific conditions the decision was designed to produce.
This isn't complicated. It requires perhaps thirty minutes of disciplined conversation at the end of a decision process. Yet it is rarely done, because it creates accountability that most organisations would prefer to keep implicit.
In organisations where evaluation is genuinely embedded, decisions are treated as hypotheses. The reasoning behind them is documented, not in exhaustive detail, but clearly enough to be revisited. Outcomes are tracked against the assumptions that justified the decision, not just against financial targets. And when the evidence suggests the original logic was flawed, that finding is treated as valuable information rather than an uncomfortable verdict on the people who decided.
The leadership behaviours that enable this are specific: leaders who model intellectual honesty about their own past decisions, who distinguish clearly between process quality and outcome quality, and who treat being demonstrably wrong about something as evidence of a functioning learning system, not as a career risk.
The organisations that evaluate well don't just avoid repeating mistakes. They build something more valuable over time: a clearer, more calibrated understanding of where their judgement is reliable and where it isn't.
That kind of institutional self-knowledge is rare. It doesn't appear on a balance sheet. But it compounds, quietly, durably, into a decision-making capability that is genuinely difficult for competitors to replicate.
Next in this series: Governance and Decision Quality