The AI Operating Model Decision You Already Made

The AI operating model decision was made when the AI program was approved. Most organisations didn’t know they were making it.

In Brief


  • AI operating model decisions are made when the AI program is approved, not later — and most organisations do not realise the approval was the decision.
  • Enterprise performance only moves in organisations that redesigned the operating model alongside the AI program.
  • Only 34% of organisations are genuinely redesigning core processes for AI; the rest inherit the original design's ceiling (Deloitte, 2026).
  • AI high performers, the roughly 6% of organisations attributing 5% or more of EBIT to AI, are 2.8 times more likely than their peers to have redesigned workflows (McKinsey, 2025).
  • The highest-return diagnostic is the one run before operating model redesign is scoped, and the program team is rarely well-positioned to run it.

Most AI programs are doing what the brief asked of them. Individual tasks are faster. Knowledge work is cheaper to produce. The business cases that justified the investment are being met on schedule. What isn’t moving, in most organisations, is enterprise performance. Throughput, margin, cycle time, customer outcome: the numbers that show up on a board paper look the same after eighteen months of AI deployment as they did before.

The execution is fine. The AI program is doing what it was designed to do. The gap is structural, and it was created at the point the program was approved.

Efficiency stops short of enterprise performance when the operating model is left untouched

The most cited finding from the 2026 enterprise research is also the most easily misread. Deloitte’s State of Generative AI in the Enterprise found that only 34% of surveyed organisations are using AI to deeply transform their businesses by creating new products, services, or core process redesigns. The remaining two-thirds are layering AI onto unredesigned processes (Deloitte, 2026). The instinctive reading is that those two-thirds are simply behind. A more accurate reading is that they approved an AI program without realising the approval was also a decision about the operating model. Specifically, the decision to leave it as it was.

McKinsey’s State of Organizations 2026 research sharpens the point. Nearly 40% of senior leaders identify redefining process flows as the biggest organisational unlock over the next one to two years, and two-thirds of leaders describe their organisations as overly complex and inefficient (McKinsey, 2026). The traditional remedies, including structural redesigns, cost cuts and flatter hierarchies, are producing diminishing returns. Process flow, McKinsey argues, is now the main thing limiting enterprise performance. AI is the lever. The operating model is what AI has to change to produce that result.

The AI program inherits the operating model — and the limits that come with it

When an AI program is approved, the operating model around it is already doing several things at once. It defines who hands work to whom, where decisions get made, what gets measured, and which exceptions slow the process down. The program is then asked to make the existing work faster, and it will succeed at that. What it cannot do, without an explicit decision to change the operating model, is alter the handoffs, decision rights, measures, or exception paths the work travels through. The ceiling on enterprise performance is set by those, not by the speed of the individual task.

This is why the data looks the way it does. The same McKinsey research that traces the productivity gap also identifies what separates the small cohort of organisations getting measurable EBIT impact from AI from those that aren’t. AI high performers, the roughly 6% of organisations attributing 5% or more of EBIT to AI use, are 2.8 times more likely than their peers to have fundamentally redesigned individual workflows (55% versus 20%). McKinsey concluded that, of every factor tested, workflow redesign made one of the strongest contributions to meaningful AI business impact (McKinsey, 2025).

This is uncomfortable reading given how most AI programs were authorised. The investment case rested on efficiency gains. The operating model was assumed to stay the same. The redesign question wasn’t on the approval paper. The program is now hitting a limit that was set, without anyone realising it, the moment it was funded.

The operating model decision is made by default, not deliberation

The constraint persists because the operating model decision is rarely named as a decision. It sits distributed across the program approval, the funding cycle, the vendor scope, and the role of the executive sponsor, and that distribution conceals the choice from the person who would otherwise have made it deliberately. The CIO sees a technology investment. The COO sees a productivity program. The CEO sees a strategic signal to the market. None of them sees, on the paper they sign, the line that reads: the operating model will not be redesigned as part of this work.

That line is there. It is just not written down. And because it is not written down, the question of whether it is the right line to draw is never raised at the level where the answer would change. Capable executives operating in good faith are reliably approving an outcome they wouldn’t have chosen if it had been put to them as a choice. The decision is being made by default.

Naming the decision changes what the executive can do about it

Once the decision is named, several things shift.

The AI investment case and the operating model question cannot be separated without cost. They were always one decision. Separating them allocates the AI spend before the operating model has been read, which guarantees the spend will hit the ceiling the original design imposes.

The diagnostic here means an independent, structured assessment of what the operating model actually allows. The diagnostic that pays the highest return is the one run before operating model redesign is scoped, not after: once the redesign is scoped, the diagnostic question has already been pre-answered by the scope. The window is narrow. It sits between the AI program approval and the operating model redesign, and most organisations do not currently have an event in their governance calendar that occupies it.

And the diagnostic is rarely something the program team is well-positioned to run. The team that built the AI investment case is the team whose work the diagnostic would examine. The team that will redesign the operating model has the same constraint. The question that’s hardest to answer from inside the program is the one that asks whether the program’s premise is the right premise.

The next AI investment paper is an operating model paper in another form

The shift is small in words, large in consequence. The next AI investment paper that comes to the executive is, when read accurately, an operating model paper in another form. Reading it that way changes what the executive asks before approving it. The question isn’t whether the AI use case is sound; it almost certainly is. The question is whether the operating model around the use case will allow the efficiency gain to translate into enterprise performance, and whether the redesign required to make that translation is in scope, out of scope, or invisible.

An executive who reads the AI program as the operating model decision it actually is, and who establishes a diagnostic in the window before that decision is locked, is already ahead of the cohort that will spend the next two years discovering, retrospectively, that they made the same decision without seeing it.

What this means for senior leaders

  1. The AI investment paper currently on your desk is also the operating model paper. Read it as both before you sign it, or you have made the operating model decision by default.
  2. The window for a high-return diagnostic is before operating model redesign is scoped, not after. Once scope is set, the diagnostic question has already been pre-answered by the people who set the scope.
  3. The diagnostic is not work the program team is well-positioned to run. The team whose premises the diagnostic would test is the same team that produced the investment case. An independent view is the asset.
  4. AI high performers, the cohort attributing 5% or more of EBIT to AI, are far more likely than their peers to have redesigned the workflows the AI touches (55% versus 20%). Workflow redesign is not the cost of the program; it is the mechanism by which efficiency becomes enterprise performance.

References

  • Deloitte. (2026). State of Generative AI in the Enterprise Q1 2026. Deloitte Australia.
  • McKinsey & Company. (2025). The State of AI: How organizations are rewiring to capture value. McKinsey & Company.
  • McKinsey & Company. (2026). The State of Organizations 2026. McKinsey & Company.
