
TECHNICAL STATEMENT 1: DEFINE AN OPERATIONAL MODEL

  • Writer: Bruce Mullan
  • Apr 13
  • 4 min read

Statement One, the first of the 42 Statements, addresses your operating model: how AI technologies are governed, implemented and monitored within your organisation. I believe it is one of the highest-impact Statements of the 42, so I'm glad it's right at the top.


In short, the Standard asks organisations to integrate these requirements into how their AI systems are designed, deployed and maintained:


  • compliance,

  • efficiency,

  • ethical standards,

  • traceability,

  • reproducibility,

  • modularity,

  • security, and

  • governance.


Sounds like a mouthful, and it certainly is. It requires deep consideration of either designing a whole new operating model or adapting an existing one to incorporate the extensive new AI requirements.

Statement One is "recommended" rather than "mandatory", and I can't say for certain why the DTA chose this path. I suspect the flexibility allows organisations the freedom to choose their own adventure in a significantly more complex operating context.


At its most basic level, an operating model is a blueprint defining how to organise your responsibility structures, people, processes, technology, and governance to deliver outcomes and achieve strategic business objectives. It's your "vibe".


A good IT operating model acts silently and effectively, delivering repeatable services and outcomes, while a poor one shows up as a noisy bureaucratic bottleneck: a bureaucracy that slows delivery, burns resources, and creates internal friction (often aided by fragmented silos). You will have seen, heard and felt the latter quite often in your working life, whereas a good one, well, it just works. It's a good vibe.


If you read the press or hear about AI failures, one of the biggest reasons cited is a lack of accountability. No one can figure out who decided what, or when; who owns what; or what data was used. It's no surprise that an organisation with a poor operating model is more likely to have any type of failure, whether AI or not.


Alas, Statement One doesn't explicitly state that you must define your accountability structure, because that's implied under your organisation's operating model umbrella. Well, that's not as useful as I'd like it to be for something so important! But we can figure the rest out from here.


AI Accountability Structures

What sort of governance structures work best in an AI operating model? The Australian National Audit Office review of government AI implementations found a disturbing pattern: organisations with elaborate governance structures (steering committees, working groups, advisory boards and reference panels) had no better outcomes than those with minimal governance. In some cases, they performed worse; the complexity of governance became its own project, consuming resources while AI systems operated without meaningful oversight.


Minimum Viable Governance

To make AI governance actually work, your IT operating model only needs to define three things: how decisions get made, who makes them, and how quickly you can act. First, there are five types of decisions needed to implement effective AI governance:


  1. Go or no-go decisions - Should an AI system proceed to its next phase?

  2. Resource allocation - Which statements get priority, and which don't?

  3. Statement interpretation - How do we interpret a statement in our context?

  4. Escalation resolution - Who makes the call on features that governance prohibits?

  5. Practice adoption - Should all AI systems have the same monitoring dashboard?


Every other governance activity (reporting, reviewing, discussing and analysing) only matters if it enables these five decisions. If it doesn't, it's bureaucracy.
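To make that filter concrete, here's a minimal sketch in Python. The enum names and the helper function are my own framing, not the Standard's: an activity earns its place only if it enables at least one of the five decision types.

```python
from enum import Enum, auto

# The five decision types from the list above; labels are mine.
class DecisionType(Enum):
    GO_NO_GO = auto()
    RESOURCE_ALLOCATION = auto()
    STATEMENT_INTERPRETATION = auto()
    ESCALATION_RESOLUTION = auto()
    PRACTICE_ADOPTION = auto()

def is_bureaucracy(decisions_enabled: set[DecisionType]) -> bool:
    """An activity that enables none of the five decisions is bureaucracy."""
    return len(decisions_enabled) == 0

# A status report that feeds a go/no-go decision passes the test...
print(is_bureaucracy({DecisionType.GO_NO_GO}))  # False
# ...while a review that changes nothing does not.
print(is_bureaucracy(set()))  # True
```

You could run every recurring meeting and report through this question once a quarter; anything returning True is a candidate for removal.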


Establish Clear Ownership

Second, there must be someone accountable for your AI system: someone who owns success and failure, owns daily decisions, and escalates when coming up against boundaries. Someone with executive-level authority. This someone is your AI System Owner.

The AI System Owner also takes on the responsibility of implementing governance for their system. This is where they put on a governance hat; I call this part of the role the (Governance) Implementation Lead. Lastly, to make effective and efficient decisions, I recommend a three-layer decision architecture. Here's how it looks:


Overview of the Three-Layer Decision Architecture

  1. Implementation Lead - Makes operational decisions about the AI system: What fairness metrics to use? How to structure bias testing? When do we showcase?

  2. Stakeholder input - Decides on significant changes such as expansion of user groups, changing of core requirements, or modifying the fundamental algorithm.

  3. Executive committee approval - Limited to strategic decisions such as terminating a project, go live decisions, or proceeding despite major risks.


Unlike traditional hierarchies, which push decisions upwards, this decision model pushes them downward to the lowest competent level. The Implementation Lead interprets statements, allocates team resources, approves implementation approaches, and resolves technical disputes. Single-point accountability with the decision velocity to implement governance from the top down.
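The "push downward" rule can be sketched as a simple routing table. The decision names and layer labels below are my own illustrative choices: only named exceptions escalate, and anything unlisted defaults to the lowest layer.

```python
# Three layers, lowest first, mirroring the architecture above.
LAYERS = ["implementation_lead", "stakeholder_input", "executive_committee"]

# Illustrative escalation exceptions drawn from the examples in the text.
ROUTING = {
    "expand_user_groups":       "stakeholder_input",
    "change_core_requirements": "stakeholder_input",
    "modify_core_algorithm":    "stakeholder_input",
    "terminate_project":        "executive_committee",
    "go_live":                  "executive_committee",
    "proceed_despite_major_risk": "executive_committee",
}

def route(decision: str) -> str:
    # Default to the lowest competent level: the Implementation Lead
    # handles everything not explicitly escalated.
    return ROUTING.get(decision, LAYERS[0])

print(route("choose_fairness_metrics"))  # implementation_lead
print(route("terminate_project"))        # executive_committee
```

The design choice worth noting: escalation is the exception list, not the default path, which is what keeps decision velocity high.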


Bureaucracy test

Your governance structures should feel lightweight to operate but heavy in impact. Remember, it's a "vibe". Decisions should happen in days, not weeks. If you find yourself in meetings where nothing gets decided, endless approval chains, reviews that don't change anything, or burdensome documentation, then stop. Breathe. Act immediately to change that and redesign for simplicity. A minimum viable AI governance model looks something like this:
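One hedged way to make the "days not weeks" rule checkable: a toy latency test over your open decisions. The 7-day threshold and the function names are my own illustrative picks, not anything the Standard prescribes.

```python
from datetime import date, timedelta

# Illustrative threshold: "days not weeks" interpreted as one week.
MAX_DECISION_AGE = timedelta(days=7)

def flag_stale_decisions(open_decisions: dict[str, date],
                         today: date) -> list[str]:
    """Return decisions that have sat undecided longer than the threshold."""
    return [name for name, raised in open_decisions.items()
            if today - raised > MAX_DECISION_AGE]

backlog = {
    "approve_bias_test_plan": date(2025, 4, 1),   # 11 days old: stale
    "go_live_phase_2":        date(2025, 4, 10),  # 2 days old: fine
}
print(flag_stale_decisions(backlog, date(2025, 4, 12)))
# ['approve_bias_test_plan']
```

Anything this flags repeatedly is your bureaucracy test failing in practice: either the decision belongs at a lower layer, or the layer that owns it isn't meeting often enough.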


Defining your operating model is the first of the 42 Statements.


  • If your current IT operating model is already fragmented, bureaucratic or siloed, it's unlikely you'll even get to the next 41 Statements. Adopting AI will be a persistent struggle and a potential compliance catastrophe. AI will innovate faster than you can make decisions. Transitioning to AI offers the chance to reshape your organisation's operating model to find your best starting point for enduring success.


  • If your current IT operating model is already in good shape, you will do well, more power to you. It's the "vibe".


Stay safe, Bruce


ABOUT ME

I write all my own content; you can tell by the odd typo and occasional missing word. I use AI for research. I also teach organisations how to implement the Australian AI Governance Standard and confidently transition to AI systems. To learn about my upcoming public AI Governance workshops visit: Public workshops


To learn more about AI Governance, check out my podcast: Hitchhikers Guide to AI Governance Podcast


Podcast about AI Technical Statements
