At the end of every quarterly OKR review, an executive has approved a set of statements. They have read through team-level goals, asked a few clarifying questions, and signed off. The portfolio looks coherent on paper. What the executive has not done is learn anything new about whether the organisation’s investment is pointed in the right direction. That is not a review. That is administration.
The rubber-stamping pattern is not caused by poor preparation or insufficient rigour. It is caused by a structural misapplication of the instrument itself. Across large organisations in government and corporate Australia, OKRs have been introduced as a team-level goal-writing exercise — a way to get individual units to articulate their plans in a standardised format. Used this way, they produce hundreds of statements that are coherent within each team’s own frame of reference yet largely disconnected from the strategic question the executive actually needs answered. Research by Haufe Talent and Stuttgart University of Applied Sciences (2022) found that even among organisations actively using OKRs, only 60% of employees have a concrete understanding of company strategy. Four in ten people working inside an OKR system are doing so without a clear strategic anchor. The implication is not that teams are writing goals carelessly. It is that the framework has been deployed at the wrong level of the organisation, serving the wrong purpose.
Output-based OKRs produce sign-offs, not governance
The structural distinction is this: OKRs function as a team-level planning mechanism when key results measure output — what the team will build, ship, or complete. They function as a governance mechanism when key results measure outcome — measurable changes in the behaviour of customers, citizens, or stakeholders that indicate the investment is generating value.
The category error runs deep. When OKRs are prescribed from leadership down through the organisation as fixed targets, teams lose the ownership that makes the framework work. The consequence is predictable: goals are written to satisfy the review process rather than to reflect genuine strategic hypotheses. Reviews become a performance — everyone reporting on track until the quarter ends and the gap between reported progress and actual impact becomes visible. This is not a failure of discipline. It is the rational response of capable people to a system that has asked the wrong question of them from the outset.
Used correctly, OKRs give an executive a fundamentally different signal. When every team’s key results are framed as measurable changes in behaviour — in the people the organisation exists to serve — the portfolio becomes legible. An executive reviewing output OKRs is approving a production schedule. An executive reviewing outcome OKRs is reading the strategic position of the organisation in real time. The question in the review room shifts from “what did you build?” to “what changed as a result?” That second question cannot be gamed, and a completed task list does not answer it. It requires the team to have stayed close to the people their work affects and to have tracked whether the investment moved anything that matters. An executive who makes “what changed?” the governing question of their review process has changed the character of every review that follows.
Strategy is the precondition, not the output
This reframe has a precondition. Outcome-based OKRs only become legible at the portfolio level when the organisation has a clear, communicated strategy that teams can orient their goals toward. Without that shared direction, team-level OKRs — even well-written ones — address different problems in different directions, and no amount of review rigour makes the portfolio coherent. A strategy that exists only in the leadership team’s heads does not function as an alignment mechanism. It needs to be articulated well enough that every team can independently determine whether the goals they are proposing support it. The review meeting is too late to begin that conversation.
For an executive operating at portfolio level, this reframe is not a small adjustment. It changes what you look for in a review, what you ask when a team presents its goals, and what you consider evidence of progress. The executive who brings a governance lens to OKRs is not checking whether teams are busy. They are reading whether the organisation’s collective investment is generating the behavioural changes — in customers, in citizens, in the systems it operates within — that the strategy is designed to produce. An executive who reads the portfolio correctly can see, in a single review cycle, which investments are moving something and which are producing activity without consequence.
One question separates a governance review from administration
The shift from output to outcome is not a process change. It does not require new software, a new reporting template, or a reorganisation of teams. It requires a single persistent question asked at the right moment. In your next portfolio review — before any discussion of what teams are building — ask each presenter: “If this goal is achieved, what will be measurably different in the people or systems you exist to serve?” Teams that struggle with it are not failing the test — they are showing you where the governance work begins.
The distinction surfaces something more important than goal quality. It identifies whether the team has a genuine hypothesis about value — whether they have thought through the connection between their work and its consequence in the world. An executive who can surface that distinction quickly has an instrument. One who cannot is left approving statements.
The constraint is not in the framework. It is in the question the review has been designed to answer.
What this means for senior leaders
1. An OKR review that produces approval without insight is an administrative exercise, not a governance one. Executives should assess their current review process against this distinction before the next cycle.
2. Key results that measure output — features shipped, projects completed, deliverables produced — tell an executive what teams did. Key results that measure outcome tell an executive what changed. The second is a governance signal; the first is a production log.
3. A strategy that has not been articulated well enough for every team to write goals against it is not functioning as an alignment mechanism. Scaling OKRs without first establishing that shared strategic direction produces coordination theatre, not organisational coherence.
4. The question “If this goal is achieved, what will be measurably different in the people or systems you exist to serve?” is a diagnostic instrument. It surfaces teams operating from a genuine value hypothesis and teams operating from a production schedule. An executive who asks it consistently will see their portfolio differently within a single review cycle.
References
Haufe Talent & Stuttgart University of Applied Sciences. (2022). OKR benchmark study [In German]. Haufe Group. [Cited via English-language secondary sources]