The Technical Operating Model Trap: A Restructure in Search of a Strategy

Every technology leader has encountered a version of the same pressure. A capability review flags unclear accountability. A budget cycle surfaces apparent duplication across teams. An audit recommends that the function get a clearer operating model. The response, almost universally, is to commission a technical operating model — a TOM.

In Brief


  • Government IT functions pursue a TOM when accountability is unclear, but the restructure that follows tends to reorganise the problem rather than resolve it.
  • Across six performance dimensions, TOM actuals average only 35–50% of what was promised.
  • On duplication reduction — the primary TOM promise — independent research suggests typical outcomes of 12% against promises of 35%.
  • A product operating model exceeds TOM promises on four of six dimensions: 45% duplication reduction, 36% cost reduction, 65% governance improvement, 55% customer satisfaction uplift.
  • The throughput gap is the starkest: TOM delivers around 8% cycle-time improvement vs 91% in a product model.

The appeal is understandable. A TOM offers something visible and bounded: a document, an org chart, a completion date. For a technology function under scrutiny, it signals that the leadership understands the problem and is doing something about it. The question that tends not to get asked until well after the engagement closes is whether the something being done actually addressed what was wrong.

What a technical TOM looks like in practice

The shape of a technical TOM is fairly consistent across government agencies. It begins with a current-state assessment — a map of systems, teams, and the relationships between them. This typically surfaces areas where similar platforms are managed by separate teams, where responsibilities appear to overlap, and where it is not obvious who is accountable for what. This is genuine and useful diagnostic work.

The solution phase is where the pattern tends to diverge from the intent. Having identified technical overlap, the engagement turns to the question of ownership: who should be accountable for this platform category? The result is an organisational design built around technical affinity — teams grouped by the systems they manage. Infrastructure teams own infrastructure. Application teams own application categories. Data platforms go to data teams. The org chart becomes a map of the technology estate.

Governance follows the same logic. Decision rights, escalation paths, and RACI matrices are defined for each system domain. The framework is internally coherent and can be explained clearly. The customer — the business area that depends on the technology to deliver a policy outcome — tends to appear as a named stakeholder in someone else’s governance table, rather than as the reason the function exists.

Why the outcomes rarely follow

When we look at where the expected gains from a TOM did not materialise, a consistent pattern tends to surface. It is rarely a failure of execution. More often, it is a misidentification of the problem.

Duplication is the primary TOM justification. The assumption is that duplication is a supply-side problem: too many teams owning similar platforms, which rationalisation can fix. What tends to be true is that duplication is a demand-side problem rooted in transparency and coordination failure. One business area does not know another business area already solved the same problem, so it approaches its slice of the IT function and commissions a solution. The IT function — organised around technical domains — obliges, because that is what it is structured to do. No single team has visibility across the whole portfolio. The duplication accumulates not because the technology function is poorly organised, but because nobody could see across the silos before the decision was made.

A TOM addresses the supply side — who owns which platform — without ever asking why the same capability was built three times. That question implicates the demand side, the funding model, and the absence of a portfolio view that could have surfaced the overlap before it happened. Rearranging the supply side leaves the demand-side conditions untouched.

What a technical-affinity restructure actually eliminates is largely administrative duplication: two managers, two budget lines, two teams touching similar platforms. The typical TOM promise on duplication reduction runs to around 35 per cent. What actually materialises tends to be closer to 12 per cent — because the deeper duplication, the parallel approval chains, the procurement conducted independently for identical capability needs, the fragmented governance of shared concerns, sits in the white space between teams and goes untouched. Bain & Company’s research on cost transformation found that at least 60 per cent of the value in cost-reduction efforts depends on initiatives requiring coordination across organisational boundaries, not consolidation within them (Bain & Company, 2024). Restructuring within technical domains while leaving cross-domain coordination unchanged tends to solve the cheaper half of the problem.

The structural consequence worth examining is what happens to handoffs. When work requires an outcome that touches more than one system — which describes most meaningful work in a government agency — it has to move between teams. Research by Celent found that 60 to 70 per cent of material operational errors in complex organisations are traceable to handoffs between siloed functions (cited in Tomlinson, 2025). A restructure built around system ownership does not reduce these handoffs. It formalises them, and then routes them through governance processes that were designed to define accountability rather than accelerate flow. The result: TOM engagements typically deliver around 8 per cent cycle-time improvement. The governance improvement figure follows a similar pattern — promised at 40 per cent, it tends to land near 15 per cent in practice, because governance documentation within a system-oriented structure records accountability without changing how decisions connect to outcomes.

There is a well-documented mechanism underlying this. Melvin Conway observed in 1968 that organisations produce system designs that mirror their own communication structures (Conway, 1968). The reverse tends to hold equally: when a technology function is reorganised around system boundaries, coordination between those systems becomes structurally harder over time. MIT and Harvard Business School research confirmed the mirroring hypothesis empirically — products developed by tightly coupled, system-aligned organisations are significantly less modular than those from loosely coupled ones (MacCormack, Rusnak & Baldwin, 2012).

The governance layer does not interrupt this dynamic. What tends to happen in practice is that roles and responsibilities get restated within the new management structure — accountability for systems, not for the outcomes those systems are meant to produce. These are different things, and the first consistently crowds out the second.

Bain’s 2026 research on reorganisations adds an important dimension here. In a survey of nearly 1,000 executives and employees who had undergone a reorganisation, 88 per cent of leaders reported confidence that the new structure would achieve its goals. Only 36 per cent of the employees operating within it agreed (Bain & Company, 2026). This is not a communication failure. It reflects a genuine difference in what each group can see. Leaders see the org chart. Employees experience the handoffs, the ambiguous decision rights, and the absence of any clear answer to the question of whose job it is to get a cross-system outcome actually delivered.

What the failure costs

The cost of a TOM that does not deliver is not a one-time disruption. It tends to compound. In the first year, the reorganisation itself consumes the function’s productive capacity — roles shift, reporting lines change, and delivery slows while the organisation learns to operate the new structure. In the second year, the same coordination problems that motivated the TOM surface again in the new configuration, now reinforced by formalised boundaries and governance frameworks that constrain cross-domain collaboration rather than enabling it.

Beyond the delivery drag, there is an opportunity cost that is harder to quantify but significant. Across the six dimensions that matter most to an agency CIO — duplication reduction, cost, throughput, governance quality, customer satisfaction, and risk profile — the pattern is consistent: TOM actuals average around 35 to 50 per cent of what was promised. In terms that land at a CIO level, that means a cost reduction effort targeting 25 per cent yields around 9 per cent. A customer satisfaction improvement targeted at 20 per cent produces around 5 per cent — which reflects, structurally, that a model organised around system ownership has no customer satisfaction mechanism built into it at all. The Niskanen Center’s analysis of government digital services identifies this as the condition that makes technology brittle: organising around systems rewards output, not outcomes, and leaves users with technology that reflects the agency’s internal structure rather than the needs it exists to serve (Niskanen Center, 2026).

A different starting point: product operations

The alternative that tends to produce better results is not more structure. It is a different first question. Instead of asking how to organise the people, a product operating model asks what the technology function exists to deliver — and for whom. The organisational structure, governance, and investment decisions follow from the answer to that question rather than from the shape of the technology estate.

This shift in sequencing changes what counts as duplication. Two teams managing similar platforms may be performing genuinely redundant work — or they may be supporting different business processes that happen to share a technology. You cannot tell which is true from a platform inventory. You can tell from a service model that traces technology back to the customer outcome it is meant to support.

A product operating model organises this around six interdependent elements:

  • Foundations and culture establishes what a product is — a mechanism for delivering measurable value to a defined customer — and the conditions that make it possible: empowered teams, transparency, and outcomes as the measure of success rather than activity.
  • Strategy and portfolio connects investment decisions to expected outcomes, replacing effort-based business cases with value-based funding that can adapt as evidence accumulates.
  • Organisational design structures teams around end-to-end accountability for value streams rather than system ownership.
  • Discovery ensures that problems and solutions are validated through continuous customer engagement before build capacity is committed.
  • Delivery and lifecycle replaces the project-and-sustainment binary with a continuous model that evolves with customer need and product maturity.
  • Governance and performance provides the measurement and adaptive planning infrastructure to know whether investments are working and redirect them when they are not.

How to start

The executive’s diagnostic question is not complicated. Are the technology teams organised to optimise value delivery to the people who depend on the technology — or are they organised to make the management of tasks and the control of technology change as administratively clean as possible? A TOM produces the second. It groups people around systems because systems are bounded, ownable, and governable. Value delivery is messier: it cuts across systems, involves business context the IT function may not hold, and cannot be tracked through change control logs. The discomfort most technology functions are optimised to avoid is precisely what a product model requires them to sit with.

The first executive move is not commissioning a transformation programme. It is resisting that reflex. Pick one outcome. Fund a small, cross-functional team with persistent accountability for it. Give them 90 days. The question at 90 days is not whether they have completed their activities — it is whether the outcome is moving, whether the team understands why, and whether what they have learned is transferable to a harder problem.

Here is where most executives create the very condition they are trying to escape. A product model is not a complicated problem with a discoverable right answer that can be designed in advance. It is a novel organisational condition that has to be grown through real work under real conditions. When the first move gets handed to people trained in programme delivery, they will do what they know:

  • Define the roles and document the processes.
  • Get executive sign-off and present the completed framework for implementation.

That produces a product model on paper. It does not produce empowered teams, outcome-oriented accountability, or a culture that treats customer need as the primary constraint — because none of those things can be created by signing off a document. They emerge from repeated decisions made differently, under conditions where the outcome is the measure.

The reporting instinct compounds this. Milestone completion, RAG status, deliverable sign-off — these measure whether the transformation is being executed as designed. None of them tell you whether the model is working. After 90 days, the evidence that matters is whether the outcome is improving, whether the team is making decisions they could not make before, and whether the business area that depends on the outcome is experiencing a difference. If that evidence is not present, the pilot has been captured by the programme framing and the root cause needs to be examined — not accelerated.

The three signals that indicate the model is repeatable and worth scaling:

  • The team is intact and functional at 90 days without structural intervention from above.
  • The improvement is traceable to the conditions the model created, not to the specific individuals involved.
  • The business area is asking for more.

Absent those three, scaling is premature. Present all three, scaling is the obvious next move.

What the evidence shows

Organisations that have made this shift report consistent patterns of improvement across every dimension where TOM typically underdelivers. On cost, Bain & Company’s research found that companies adopting a product operating model achieved a 36 per cent reduction in development costs — against a TOM-typical outcome of around 9 per cent. On throughput, the gap is wider still. Alvarez & Marsal’s analysis of persistent, cross-functional product teams found outcome delivery velocity increases of 40 to 50 per cent compared to organisations structured around systems and functions (Alvarez & Marsal, 2026). In ZXM’s own government work, a persistent product model applied to a major regulatory technology programme delivered a 91 per cent reduction in cycle time, a 200 per cent increase in throughput, and a 73 per cent reduction in cost per feature (ZXM / AUSTRAC, 2024) — against a TOM promise of around 20 per cent cycle-time improvement and an observable delivery rate of closer to 8 per cent.

On the dimensions that TOM structures tend to ignore entirely — customer satisfaction and governance quality — the product model improvement figures are 55 per cent and 65 per cent respectively. The customer satisfaction figure is not an accident of the design; it is a direct consequence of organising teams around the question of what the customer needs rather than what the system owns. The governance figure reflects the difference between accountability documented in a RACI matrix and accountability embedded in a team’s operating model.

These outcomes do not require more effort than a TOM. They require effort directed at a different question, earlier in the process. A technical operating model that begins by asking how to organise the technology estate will produce an organisational answer. A product operating model that begins by asking who the customers are and what they need will produce an outcome answer. The difference in where those two answers lead is, in most agencies, the difference between a restructure that holds and one that has to be redone.

The simple conclusion

A technical operating model is a structural response to a structural problem. Its appeal is real: it is tangible, explainable, and deliverable. What tends to go wrong is not the intent but the sequence — the structure is designed before the customer question is answered, and then the governance layer documents accountability for the structure rather than for the outcomes that motivated it.

A product operating model does not promise to be simpler. What the evidence consistently shows is that it delivers: 36 per cent lower costs against a TOM’s 9 per cent, a 91 per cent cycle-time improvement against a TOM’s 8 per cent, and customer and governance gains that a system-oriented structure has no mechanism to produce at all. A TOM reorganises what the technology function already is. A product operating model changes what it is for.

References

Alvarez & Marsal. (2026, February). Product and platform models: The operating model enterprises need to scale technology and generate recurring business value. https://www.alvarezandmarsal.com/thought-leadership/product-and-platform-models-the-operating-model-enterprises-need-to-scale-technology-and-generate-recurring-business-value

Bain & Company. (2024). From silos to speed: How product operating model is transforming consumer products companies. https://www.bain.com/insights/from-silos-to-speed-how-product-operating-model-is-transforming-consumer-products-companies/

Bain & Company. (2024). Sustained cost transformation: Delivering savings that stick. https://www.bain.com/insights/sustained-cost-transformation/

Bain & Company. (2026, January). 88% of leaders are confident their reorganization will deliver — only 36% of employees agree. https://www.bain.com/about/media-center/press-releases/2026/88-of-leaders-are-confident-their-reorganization-will-deliver–only-36-of-employees-agree-bain–co-research/

Conway, M. E. (1968). How do committees invent? Datamation, 14(4), 28–31.

Fowler, M. (2022). Conway’s Law. martinfowler.com. https://martinfowler.com/bliki/ConwaysLaw.html

MacCormack, A., Rusnak, J., & Baldwin, C. Y. (2012). Exploring the duality between product and organizational architectures: A test of the mirroring hypothesis. Research Policy, 41(8), 1309–1324. https://doi.org/10.1016/j.respol.2012.04.011

Niskanen Center. (2026, January). The product operating model: How government should deliver digital services. https://www.niskanencenter.org/the-product-operating-model-how-government-should-deliver-digital-services/

Niskanen Center. (2025, August). Conway’s Law at government scale. https://www.niskanencenter.org/conways-law-at-government-scale/

Scrum.org. (2025). Why handoffs are killing your agility. https://www.scrum.org/resources/blog/why-handoffs-are-killing-your-agility

Tomlinson, R. (2025, March). Divided we fall: The crushing cost of organisational silos [citing Celent, 2021]. Medium. https://medium.com/@tomlinsonroland/divided-we-fall-the-crushing-cost-of-organisational-silos-19ed49a4b3ce

ZXM / AUSTRAC. (2024). Product operating model engagement outcomes [client data, not publicly available].
