We need human oversight of machine decisions to stop robo-debt drama
Federal MP Amanda Rishworth raised concerns over the weekend that Australia could be headed for another robo-debt ordeal after the government reportedly confirmed the Australian Taxation Office (ATO) will use data matching to audit childcare rebates.
Government agencies increasingly use automated tools to make or facilitate decisions that affect citizens’ lives, but it’s not always appropriate for important decisions to be made by a computer.
In the European Union, the General Data Protection Regulation (GDPR) prohibits certain types of decisions from being solely automated. It also creates rights for individuals who are affected by automated processing.
We need similar safeguards in Australia for high-stakes automated decisions made by government agencies.
The rise of robotic decisions
The trend toward automation of government processes is accelerating in line with the government’s commitment to digital transformation.
Automated tools are now used to make or facilitate decisions in a range of government agencies, including decisions about welfare, tax, health, visas and veterans’ affairs. Centrelink’s employment income confirmation system, known as “robo-debt”, is a high profile example of what can go wrong with automated decision making.
Automation can improve the consistency and efficiency of government processes. But if there is bias or error in the computer program or data set, a flawed decision-making logic will be applied systematically, meaning large numbers of people could be affected.
Guidelines aren’t enforceable
The government has previously published guidelines on automated government decision making, including Best Practice Principles in 2004, and the Better Practice Guide in 2007. Both reports provide important advice about how to design automated systems to align with the values of public law.
But the recommendations in these reports aren’t enforceable. They also fail to create legal protections for those affected by automated decisions.
In May, there was public consultation about an artificial intelligence (AI) ethics framework for Australia. It highlighted the need for updated ethical principles to apply to new AI technologies. It also recommended a range of tools for improving the design of AI systems, including impact and risk assessments.
But, again, these recommendations will not be enforceable, even if they are included in the final framework. The current draft stops short of restricting the use of AI for certain types of decisions.
A new legal framework is needed
In contrast to Australia's non-binding approach, the legislative controls on data protection and automated decision making in the GDPR are an example of best practice.
Article 22 of the GDPR is of particular interest for Australia. Unless specified exemptions apply, it prohibits the use of solely automated processing for decisions that produce legal or other significant effects for individuals.
To fall outside this prohibition, decisions require meaningful human involvement and oversight. Having a human "rubber stamp" a decision based on automated outputs is insufficient.
Similar protections are needed in Australia, particularly for government decisions that affect individual rights and interests. Such safeguards would limit the types of government processes that can be fully automated.
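To make the "meaningful human involvement" requirement concrete, here is a minimal sketch, in Python, of what such a safeguard might look like as a workflow rule. All names, figures and structure here are hypothetical illustrations, not any actual agency system or the text of Article 22 itself: the key idea is simply that a solely automated assessment can never take effect, and the human reviewer has real discretion to overturn it.

```python
# Hypothetical sketch of a GDPR-style safeguard: an automated
# assessment cannot take effect until a human reviewer with genuine
# discretion confirms it. Names and structure are illustrative only.
from dataclasses import dataclass


@dataclass
class Assessment:
    person_id: str
    alleged_debt: float
    human_reviewed: bool = False
    approved: bool = False


def review(assessment: Assessment, reviewer_approves: bool) -> Assessment:
    """A human decision maker examines the evidence and may overturn
    the automated output -- not merely rubber-stamp it."""
    assessment.human_reviewed = True
    assessment.approved = reviewer_approves
    return assessment


def issue_notice(assessment: Assessment) -> str:
    # The safeguard: a solely automated result can never be issued.
    if not assessment.human_reviewed:
        raise PermissionError("Solely automated decision: human review required")
    if not assessment.approved:
        return f"No debt raised against {assessment.person_id}"
    return f"Debt notice of ${assessment.alleged_debt:.2f} issued to {assessment.person_id}"
```

In this sketch, attempting to issue a notice straight from the automated assessment raises an error; a notice only goes out after a human has reviewed the evidence and agreed that a debt is owed.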
‘Robo-debt’ would require meaningful human involvement under the GDPR
Let’s take a closer look at “robo-debt” to see how a prohibition on solely automated decision making might work.
The robo-debt system uses an automated data-matching and assessment process to raise welfare debts against people who the system flags as having been overpaid. Someone who receives a debt discrepancy notice can respond by giving income evidence to Centrelink. If no information is provided, an algorithm generates a fortnightly income figure by averaging income data from the ATO.
Of course, many welfare recipients have variable income as they are engaged in casual, part-time or seasonal work. It’s not surprising that the reliance on averaged data has led to a high number of reported errors. Receiving incorrect robo-debt notices has contributed to stress, anxiety and depression for many people.
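The averaging problem can be illustrated with a short sketch. The figures below are hypothetical, and the function is an illustration of the averaging logic described above, not Centrelink's actual code: the ATO holds an annual income total, and spreading it evenly across the year's 26 fortnights misrepresents anyone whose earnings were concentrated in part of the year.

```python
# Minimal sketch of the income-averaging problem (hypothetical figures).
# An annual income total is spread evenly across the 26 fortnights of
# the year, even when the person earned it all in a few months.

FORTNIGHTS_PER_YEAR = 26


def averaged_fortnightly_income(annual_income: float) -> float:
    """Spread a yearly income total evenly across every fortnight."""
    return annual_income / FORTNIGHTS_PER_YEAR


# A seasonal worker who earned $13,000 over 10 fortnights of harvest
# work, and nothing at all for the remaining 16 fortnights.
actual = [1300.0] * 10 + [0.0] * 16

averaged = averaged_fortnightly_income(sum(actual))
print(f"Averaged fortnightly income: ${averaged:.2f}")  # $500.00

# In the 16 fortnights with zero earnings, the person may have been
# legitimately entitled to a full welfare payment, yet the averaged
# figure of $500 per fortnight makes every one of them look overpaid.
misrepresented = sum(1 for income in actual if income != averaged)
print(f"Fortnights where the averaged figure is wrong: {misrepresented}")
```

In this hypothetical, the averaged figure of $500 per fortnight is wrong for every single fortnight of the year: too low during the harvest period and too high, by $500, in the fortnights when the person earned nothing and may have been fully entitled to support.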
One former member of Australia’s government review tribunal has described the system as a form of “extortion”.
If Australia had GDPR-type protections, meaningful human involvement would be required before an automated debt notice was sent. Manual review by human decision makers is important to ensure that a welfare debt is in fact owed.
There should also be restrictions on fully automating other high-stakes decisions by government agencies. Decisions about visas and tax debts, for example, ought to be overseen by humans.