Computer says no: The assessment problem



The Minister spoke in Question Time yesterday, but his answer slid straight past the question. We heard plenty about “input” and “streamlining,” but nothing about the real issue: assessors can’t fix an algorithm-based assessment decision when it’s blatantly wrong.

Late last year, we wrote about the assessment problems emerging in Support at Home. Since then, those issues have stopped looking like teething problems and started forming a worrying pattern.

After decades of watching governments design and redesign assessment systems across disability and aged care, this pattern is familiar. It always starts with promises of fairness and consistency delivered through better tools, tighter rules and smarter algorithms. Human judgement is framed as the problem to be managed out of the system. And each time, when the system gets it wrong, real people wear the consequences.

As we head into 2026, amid endless promises about what AI and technology will deliver, the government is making the same mistake again, this time in Support at Home. The pursuit of fairness is being built on a flawed assumption: that complex human judgement can be replaced by scoring systems and algorithms.

The Integrated Assessment Tool and its funding algorithm are being rolled out as if better questions and locked decisions will automatically produce better outcomes. That belief is not justified.

Take something as ordinary as incontinence. One person may tell an assessor they’re “managing fine.” In reality, they may only be coping because a partner is quietly doing the work, or because embarrassment makes them reluctant to disclose the full picture. Another person with the same condition may openly admit they’re struggling.

To an algorithm, those answers look very different and trigger very different funding outcomes. Clinically, they often aren’t different at all. A competent assessor hears the words but looks past them, asking: what’s actually happening here?

Human judgement matters.

Algorithms don’t see hesitation or embarrassment. They don’t recognise when an answer doesn’t align with the person in front of them. And they don’t recognise when they’re wrong, because once a decision is locked in, there is no meaningful way to challenge it.

When assessment systems discourage overrides, treat professional judgement as a risk and prioritise consistency over accuracy, the system does not become fairer. It breaks. This is the same design logic that underpinned Robodebt: black-box decision-making, constrained human intervention, confidence in “objective” outputs, and mounting harm while early warnings were dismissed.

Support at Home is now on that path.

If someone with clear cognitive or safety risks is classified as low priority and something goes wrong, it will not matter that the algorithm followed the rules. The harm will already have occurred.

Technology should support judgement, not replace it. Assessment was never meant to be automatic, and pretending otherwise will not end well.

The lesson from past assessment failures is clear. Tools and technology become dangerous when they replace professional judgement instead of supporting it. Support at Home must give assessors clear authority to override, escalate and review decisions when risk is evident.

Systems that prioritise consistency over safety and locked outcomes over judgement do not produce fairness. If Support at Home is to avoid repeating earlier failures, it must recognise that informed discretion is not a weakness. It is the safeguard.

The rules were followed. The answer is final. Computer says no.





Roland Naufal

Roland’s three decades of disability experience and insistence on doing things better have earned him a reputation as independent and outspoken. He is known for finding hidden business opportunities and providing insights into the things that matter in disability. Roland worked extensively on disability deinstitutionalisation in the early ’90s and has lectured on the politics and history of disability. From 2012 to 2014, he consulted on NDIS design for the National Disability & Carer Alliance, and he was the winner of the 2002 Harvard Club Disability Fellowship. Roland has held leadership roles in some of Australia’s best-known disability organisations and is now one of Australia’s most knowledgeable NDIS consultants and trainers.
