Your AI deployments are running ahead of the governance you can defend at board level

The executive paper was approved last quarter. It described an AI deployment that had moved from pilot to production faster than expected, with measurable productivity gains and no material incidents. The technology team was confident. The audit committee asked one question: who is accountable for the decisions the system is now making? The room paused. Eighteen months of deployment had produced a working system and a level of risk that was not transparent to anyone.

In Brief

  • Australian organisations have deployed AI faster than they have built the accountability to govern it, and that gap now sits on the board's plate.
  • Only 22% of Australian organisations report a highly advanced model for agent governance, while two-thirds of directors are already using AI tools.
  • Governance failure is a different category to technology failure: the system can work exactly as designed and still leave no one accountable for the decision it made.
  • Sovereign AI obligations and critical infrastructure principles are being written before most boards have a defensible answer to "who signed off on this?"
  • The next audit committee question is not whether the AI works. It is whether accountability for what it does can be located in a named person.

This is not unusual. It is the typical state of play. Across the Australian organisations that have moved AI from experimentation into production over the last 18 months, the deployment curve has run ahead of the accountability curve. The gap is now visible at board level, not at the technology level, and that distinction is doing more work than most executives realise.

Two-thirds of Australian directors used AI tools in 2025, and only 22% had policies in place to govern that use (Governance Institute of Australia, 2026). The same 22% figure surfaces in Deloitte’s 2026 enterprise survey: only 22% of Australian organisations report a highly advanced model for agent governance (Deloitte Australia, 2026). The numbers describe the same condition from two angles. Adoption has happened. Governance maturity has not kept pace.

The point that tends to be missed is that this is a different category of problem to the question of whether the AI works.

When an AI deployment fails as a piece of technology, the response pattern is well-rehearsed. The system is rolled back, an incident review is run, the model is retrained or replaced, and the technology team owns the remediation. Governance failure does not look like that. The system continues to work exactly as designed. The output continues to be acted on. What is missing is the chain that makes someone, by name, accountable for the decision the system has made on the organisation’s behalf. The audit committee, the regulator and, if the deployment is public-facing, eventually a parliamentary committee are not asking whether the AI is performing. They are asking who signed off on what it is now doing, and the answer cannot be the model.

The Australian regulatory environment has already begun to write that question into law. The Australian Government’s National AI Plan, released in December 2025, sets out nine concrete actions, including the establishment of the Australian Artificial Intelligence Safety Institute and the GovAI hosting service (Department of Industry, Science and Resources, 2025). The Australian Cyber Security Centre, with international partners, has issued principles for the secure integration of AI into operational technology, including an explicit requirement to establish AI governance and assurance frameworks for critical infrastructure (Australian Cyber Security Centre, 2025). The Governance Institute has flagged digital and AI competence as a non-negotiable director skill for 2026 (Governance Institute of Australia, 2026). The signals are converging. The board’s exposure is no longer hypothetical.

What follows from this is straightforward but not easy to act on. If the AI itself cannot hold accountability, because accountability is a property of an actor capable of being held to it, then someone in the organisation already holds that accountability today, whether the governance documents acknowledge it or not. The question is whether the person holding it is the person the board would name if asked. Where there is a gap between the two, the board is in a position it would not consciously have approved. That gap is what the next audit committee question is really testing.

The pattern that surfaces most often in Australian organisations is that accountability has migrated quietly into the technology team, by default rather than by design. The model was built there, the deployment was managed there, and the decisions the model now makes appear, to the rest of the organisation, to be technical artefacts. That positioning was defensible when AI was a tool that supported human judgement. It is harder to defend when AI is making the decision and a human is approving the output without independently reading it. The category of the work has changed. The location of accountability has not yet caught up.

The shift the board can make does not require waiting for the regulator. It requires reading the existing deployments through a different question. Not “is the system performing?” but “who would be named if a decision the system made were challenged tomorrow?” If the answer to the second question is not crisp, if it requires diagrams, multiple sign-off layers, or a referral to “the AI working group”, the gap is locatable, and locatable gaps are the ones boards can close. Closing them does not require new technology. It requires the board to assert, with the same clarity it asserts financial accountability, that decisions made on the organisation’s behalf are owned by named decision-makers regardless of whether the proximate cause is human or model.

The boards that do this in the next twelve months will not be ahead of the regulator. They will be ahead of their own audit committees, which is where the question is going to be asked first.

The deployment was always defensible as technology. The question is whether the decisions it now makes are defensible as governance.

References

Australian Cyber Security Centre. (2025). Principles for the secure integration of artificial intelligence in operational technology. https://www.cyber.gov.au/business-government/secure-design/operational-technology-environments/principles-for-the-secure-integration-of-artificial-intelligence-in-operational-technology

Deloitte Australia. (2026). The state of AI in the enterprise: 2026 AI report. https://www.deloitte.com/au/en/issues/generative-ai/state-of-ai-in-enterprise.html

Department of Industry, Science and Resources. (2025). National AI Plan. https://www.industry.gov.au/publications/national-ai-plan

Governance Institute of Australia. (2026). The 2026 governance agenda: Priorities for directors. https://www.governanceinstitute.com.au/news_media/the-2026-governance-agenda-priorities-for-directors/
