
THE AGED CARE ASSESSMENT ALGORITHM FIASCO

  • Bruce Mullan
  • 5 days ago
  • 2 min read

It's concerning that the new Integrated Assessment Tool (IAT) for determining aged care support levels is at the heart of a breaking ABC news story. According to the clinicians who use it, the new assessment tool is automatically reducing care levels for older Australians.


Unlike at the time of the Robodebt scandal, the Department of Health, Disability and Ageing was mandated to adopt the Digital Transformation Agency's AI standards from December 2025.


The purpose of the new system is to calculate the level of funding more accurately based on the clinical inputs. Clinical assessors cannot override its algorithm - even if they think the person deserves a higher level of care. The government argues the outcome is a better allocation overall, ensuring people who missed out under the old system get the care they need under the new one.


If we've learned anything from Robodebt, it's that our old ways of doing things just can't handle this new level of digital complexity. Getting AI governance right from the start shows you're committed to being transparent, accountable, and protective of users every step of the way.


So, let's dive in and pull this apart from an AI governance perspective.


Firstly, for the new AI standards to apply, the system must meet the definition of an AI system: "An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".


Tick.


Statement 11 - Safety. Has harm been caused? Elderly care recipients have either lost funding or are among the 800 cases now in review awaiting funding decisions. Either way, the elderly lose care they would otherwise have had.


Tick.


Statement 5 - Explainability. Can the system explain its decisions to the clinicians? It doesn't seem so, as the clinicians describe the new system as playing Russian roulette. No explainability.


Tick.


Statement 10 - Human-centred Design. The system does not allow the user to opt out and request a human assessment. No opt-out option.


Tick.


Statement 10 - Human-centred Design. A decision cannot be overridden by a human. No human oversight and control mechanisms.


Tick.


So, there are four failures against the government's own AI standard:


1. Decisions that harm elderly people.

2. Decisions that can't be explained.

3. No option for a human to assess instead.

4. No option for a human to override the machine's decision. 


This fiasco will likely become another cautionary tale. The only difference is that this time we have a clear set of standards against which to measure the poor performance of the Department of Health, Disability and Ageing's new assessment tool.

