THE AGED CARE ASSESSMENT ALGORITHM FAILS AI TECHNICAL STANDARD

  • Writer: Bruce Mullan
  • Apr 1
  • 2 min read

Updated: Apr 17

It's concerning that the new Integrated Assessment Tool (IAT) for determining aged care support levels is at the heart of a breaking ABC news story. According to the clinicians who use it, the new assessment tool is automatically reducing care levels for older Australians.


Unlike in the Robodebt era, the Department of Health, Disability and Ageing was mandated to adopt the Digital Transformation Agency's AI technical standard from December 2025.


Assessment tool fails AI Technical Standard

The purpose of the new aged care system is to calculate the level of funding more accurately from the clinical inputs. Clinical assessors cannot override its algorithm - even if they think the person deserves a higher level of care. The government argues the outcome is a better allocation overall, ensuring that people who missed out under the old system get the care they need under the new one.


If we've learned anything from Robodebt, it's that our old ways of doing things just can't handle this new level of digital complexity. Getting AI governance right from the start shows you're committed to being transparent, taking accountability, and protecting users every step of the way.


So, let's dive in and pull this apart from an AI governance perspective.


Firstly, for the new AI technical standard to apply, the system must meet the definition of an AI system: "An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".


Tick.


Technical Statement 11 - Safety. Has harm been caused? Elderly care recipients have either lost funding or are among the 800 cases now in review awaiting funding decisions. Either way, the elderly lose care they would otherwise have had.


Tick.


Technical Statement 5 - Explainability. Can the system explain its decisions to the clinicians? It doesn't seem so, as the clinicians describe the new system as playing Russian roulette. No explainability.


Tick.


Technical Statement 10 - Human-centred Design. The system does not allow the user to opt out and request a human assessment. No opt-out option.


Tick.


Technical Statement 10 - Human-centred Design. A decision cannot be overridden by a human. No human oversight or control mechanisms.


Tick.


So, there are four failures against the government's own AI technical standard:


1. Decisions that harm elderly people.

2. Decisions that can't be explained.

3. No option for a human to assess instead.

4. No option for a human to override the machine's decision. 
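The four failures above amount to a simple compliance checklist against the standard. As a minimal sketch, here's how that checklist might be represented and tallied in code - the data structure, field names, and requirement wording are my own illustrative assumptions, not part of the actual standard or the IAT:

```python
# Illustrative sketch of the four-failure checklist above.
# Statement numbers and observations mirror the article; the
# StatementCheck structure itself is hypothetical.
from dataclasses import dataclass

@dataclass
class StatementCheck:
    statement: str    # which technical statement is being tested
    requirement: str  # what the standard expects (paraphrased)
    observed: str     # what the IAT reportedly does
    compliant: bool   # pass/fail against the statement

checks = [
    StatementCheck("Technical Statement 11 - Safety",
                   "The system must not cause harm",
                   "Decisions that harm elderly people", False),
    StatementCheck("Technical Statement 5 - Explainability",
                   "Decisions must be explainable to users",
                   "Decisions that can't be explained", False),
    StatementCheck("Technical Statement 10 - Human-centred Design",
                   "Users can opt out and request a human assessment",
                   "No option for a human to assess instead", False),
    StatementCheck("Technical Statement 10 - Human-centred Design",
                   "A human can override the system's decisions",
                   "No option for a human to override the machine", False),
]

# Tally the failures, as the article does.
failures = [c for c in checks if not c.compliant]
print(f"{len(failures)} failures against the AI technical standard:")
for c in failures:
    print(f"- {c.statement}: {c.observed}")
```

The point of writing it down this way is that the standard gives us something mechanical to test against - each statement is a concrete requirement that a system either meets or doesn't.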


This fiasco will likely become another cautionary AI transformation tale. The only difference is that we now have a clear set of standards against which to measure the poor performance of the Department of Health, Disability and Ageing's new assessment tool.



Stay safe, Bruce


ABOUT ME

I write all my own content; you can tell by the odd typo and occasional missing word. I use AI for research. I also teach organisations how to implement the Australian AI Governance Standard and confidently transition to AI systems. To learn about my upcoming public AI Governance workshops, visit: Public workshops


To learn more about AI Governance, check out my podcast: Hitchhikers Guide to AI Governance Podcast



