Every major programme begins with a commissioning decision. Resources are allocated, teams assembled, and a timeline agreed upon. What is rarely agreed — rarely even asked — is whether the structural logic being applied to the programme matches the nature of the challenge it is meant to address.
This is not a process failure. Organisations do not typically have a shared language for distinguishing between categories of challenge. The result is that the same governance model, planning assumptions, oversight cadence, and performance metrics are applied to programmes that are fundamentally distinct. For some, this works. For others, it produces an outcome that is both predictable and invisible until significant investment has already been made.
Two categories of challenge present themselves to every leadership team.
The first challenge
Q: Is the solution ultimately knowable upfront?
If the solution exists, it can be found, and a well-designed plan will find it. Expert analysis, rigorous requirement-setting, and disciplined execution are the right instruments here.
The second challenge
Q: What if the solution isn’t knowable upfront?
Complex problems don’t have upfront solutions. Their solutions emerge only through action — through building, observing, and adjusting in response to real users in their actual context of use. No amount of upfront analysis can establish whether a solution will solve the problem it is intended to address; only real use can.
Most organisations just follow the process
The structural error organisations make is applying the logic of the first category to the second. It is not a mistake born of incompetence. It is a mistake born of a specific organisational habit: follow the process, and assume that following it will create success.
It is a reasonable response to institutional pressure: governance frameworks reward comprehensive planning; investment cases require projected outcomes; oversight bodies expect defined milestones. The result is that programmes which require discovery are structured for delivery — and the cost of this mismatch accumulates quietly, often for months, before it becomes visible.
Where false assumptions surface most often
This mismatch tends to surface in a very particular way.
- The programme produces detailed plans, sophisticated designs, and robust governance artefacts. Progress is reported against milestones.
- Senior stakeholders receive confidence-building updates.
Then, when the solution finally encounters the real environment — real users, real integration points, real operational conditions — substantial rework is often required. Assumptions that seemed sound at the planning stage turn out to have been hypotheses. Months of investment, in some cases more than a year, must be reconsidered.
This mismatch between structural logic and the challenge category has significant implications for executives when commissioning programmes: how, in a traditional waterfall process, can you make budget proposals for solutions that are not knowable upfront?
Simple problem
Follow the process, and the outcome is guaranteed. A great candidate for AI and automation.
Framework: Predictive planning (Waterfall). Change is highly unlikely.
Complicated problem
There are many solutions that are likely to solve the problem. Only an expert can recognise which is most likely to create the outcome needed.
Framework: Adaptive planning (Agile). Fast feedback will give confidence in the expert’s choice of solution. If not, adapt the plan (and the solution) as early as possible.
Complex problem
Only when a solution is put in the hands of real users can we be certain that our idea actually solves the problem.
Framework: Adaptive planning using hypotheses (Agile). Frequent deployment and fast feedback from users will confirm whether the evolving solution is the right one. If not, adapt the solution and its plan.
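The hypothesis-driven loop described above can be sketched in code. This is an illustrative toy, not a prescribed implementation: the names (`Hypothesis`, `adaptive_delivery`) and the stand-in feedback rule are assumptions made for the example — in a real programme, feedback would come from usage data and observation of real users.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A planning assumption treated explicitly as something to be tested."""
    description: str
    validated: bool = False

def user_feedback(hypothesis: Hypothesis) -> bool:
    """Stand-in for real user feedback on a deployed increment.
    Toy rule for illustration: hypotheses mentioning "search" survive
    contact with users; in practice this signal comes from actual use."""
    return "search" in hypothesis.description

def adaptive_delivery(backlog: list[Hypothesis]) -> list[Hypothesis]:
    """Deploy thin slices, observe real use, and keep only what works."""
    validated = []
    for hypothesis in backlog:
        # 1. Build and deploy a thin slice that tests this hypothesis.
        # 2. Observe real users in their actual context of use.
        if user_feedback(hypothesis):
            hypothesis.validated = True
            validated.append(hypothesis)
        # 3. If the hypothesis fails, adapt the plan: drop or reframe it
        #    rather than pushing on with the original design.
    return validated

backlog = [
    Hypothesis("users want faceted search"),
    Hypothesis("users want a weekly email digest"),
]
print([h.description for h in adaptive_delivery(backlog)])
# → ['users want faceted search']
```

The point of the sketch is structural: the plan is a list of hypotheses, and governance tracks which have survived contact with users — not percentage-complete against a fixed design.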
Avoiding the simple/complex trap
The relevant question for executives is not how to improve the programme’s delivery — it is whether the programme’s chosen framework is capable of solving the problem in the first place.
Organisations that answer this question accurately design their programmes accordingly and consistently find that results arrive faster and last longer. Organisations that answer it late discover that the cost of downstream correction is disproportionate to what early diagnosis would have required.
The diagnostic question is not complicated, but it does require honesty. Before a programme is commissioned, executives need to ask: is the solution knowable in advance, or must it be discovered through action?
Asking this question changes what the commissioning process produces. It changes what success looks like at six months, what governance needs to provide, and what performance metrics are actually informative. It does not simplify the programme. What it does is align the programme’s governance, reporting, and executive feedback loops with the problem’s structure.
The pattern is consistent across programmes of this type: the constraint is not in the team's capability or the organisation's commitment. It is in the structural logic applied to the challenge before its nature was honestly assessed.