Inside Story

Government by algorithm

Automated welfare didn’t end with the robodebt controversy. Here and overseas, governments are turning vital decisions over to computers

Mike Steketee 6 April 2018

“We’ll track you down”: Alan Tudge, federal human services minister at the time of the robo-debt controversy. Mick Tsikas/AAP Image


Remember robodebt, the computer-generated letters that resulted in thousands of people being hounded to repay social security debts they hadn’t incurred? Thanks to some improvements to the system by the Turnbull government, the controversy died down and the media caravan moved on. But that doesn’t mean the problems all went away.

The system’s main elements not only remain intact, they are being expanded as part of a much wider shift to automating welfare — or what might be called government by algorithm. And despite the improvements, robodebt (which goes officially by the more antiseptic title of Online Compliance Intervention) is still making mistakes and causing anguish.

In 2017, no fewer than 1,385,276 people received debt notices from the Department of Human Services. Only 3.7 per cent of them, or 51,230, went through the robodebt system, which compares the income data welfare recipients report with information held elsewhere in government, principally by the Australian Taxation Office. This is a substantial drop from the previous year, perhaps reflecting a more measured approach by the government after the controversy of 2016 and 2017. But the emphasis is on expansion, with the department telling me it plans to increase data-matching, including through robodebt, to more than 600,000 reviews a year. The logic behind this, from the government’s point of view, is clear: data-matching, particularly in its automated form, has made it much more cost-effective to pursue overpayments of welfare benefits.

This is part of a bigger modernisation of the welfare system over a seven-year period, in what the Department of Human Services calls “one of the world’s largest social welfare ICT system transformations.” Given that the department and Centrelink have been using a computer system that began operating in 1983, an upgrade certainly seems justified. This isn’t just the government’s view: the National Welfare Rights Network’s Kate Beaumont said in 2015 that the new system promised to reduce the administrative burden on recipients and result in reductions in overpayments and debt recovery.

The developments here are in line with the automation of welfare delivery and monitoring in other Western nations. In the United States, where most welfare programs are delivered at the state or county level, automated eligibility is now “standard practice in almost every state’s public assistance office,” writes political scientist Virginia Eubanks in her recent book Automating Inequality. More than that, predictive models and algorithms are increasingly being used to target and withhold assistance.

In Britain, a single universal credit is being introduced to replace six separate government benefits. In 2013 the National Audit Office found the scheme to be riddled with major technical problems, prompting the Labour opposition to label it a “titanic-sized IT disaster.” Due to be rolled out nationally by last year, it is at least five years behind schedule.


There’s no doubt that an administration armed with modern computing power and algorithms can process welfare claims more efficiently. But the experience with robodebt and with automated programs overseas provides some cautionary tales. Those same tools can be used for culling the welfare rolls, justifiably or not. According to Eubanks, automated eligibility systems, ranking algorithms and predictive risk models are being integrated into human and social services in the United States “at a breathtaking pace, with little or no political discussion about their impacts.”

In 1973, nearly half of the Americans living below the poverty line received benefits from the Aid to Families with Dependent Children program, or AFDC. A decade later, after the widespread introduction of automated welfare management systems, the figure had dropped to 30 per cent. Today, fewer than one in ten Americans below the poverty line benefit from its replacement program, Temporary Assistance to Needy Families, or TANF.

Automation is not solely responsible for this dramatic fall. From 1996, president Bill Clinton’s quest to “end welfare as we know it” imposed time limits and an array of strict conditions on benefits. Because welfare fraud looms large in the popular imagination, it can provide political cover for drastic measures.

Depending on how it is defined, rates of welfare fraud in the United States have been calculated to be as high as 10 per cent for improper payments, including fraud, and as low as 0.8 per cent based on the proportion of allegations that result in criminal convictions. Either way, fraud goes nowhere near providing an explanation for falls in welfare numbers of the magnitude that occurred in the United States. (In Australia, according to an analysis of more than $200 billion in Centrelink payments, the savings from detected fraud amount to less than one-fifth of one per cent.)

But all welfare systems have an inbuilt tension between helping those in need and minimising disincentives to work. That makes the design of the system, and the motivation behind the design, critical. In 2006, the government of Indiana, led by Republican governor Mitch Daniels, a long-time critic of AFDC and TANF, sought expressions of interest in outsourcing and automating the administration of TANF and two other schemes, the food stamps program and Medicaid. The riding instructions could not have been clearer: welfare dependence had to be cut, and financial incentives would be given for reducing eligibility. Daniels described the state’s welfare system as the worst in the United States, “irretrievably broken,” wasteful and fraudulent.

The goals were achieved in spectacular fashion. In two years, a million applications across the three programs were rejected, a 54 per cent increase on the previous three years. When the contract with IBM for the new system was signed in 2006, 38 per cent of poor families with children received benefits under TANF; by 2014, despite the worst economic downturn since the Great Depression, the figure was down to 8 per cent. The campaign went far beyond any notion of tough love to become a brutal attack on the poor that reinforced the trend towards increasing inequality — one of the factors in the rise of political populism in the United States.

Critical to the new system in Indiana — as it was to the initial rollout of robodebt in Australia — was reducing the scope for human discretion. No government employee dealt with a case from beginning to end: when people called for help, they always spoke to a different person — if they were lucky enough to get through. A lawyer told Eubanks that 95 per cent of the Medicaid applications he handled involved errors made during processing, resulting in eligibility mistakenly being denied. Any deviation from the rigid application process, however minor, was interpreted as a “failure to cooperate” and used to deny eligibility. Previously this had been a punishment of last resort for those who refused to participate in assessing eligibility. Now, in Eubanks’s words, “failure to cooperate” became “a chainsaw that clear-cut the welfare rolls, no matter the collateral damage.”

She cites the case of Omega Young, who missed an appointment in 2008 to authorise her continued access to Medicaid because she was in hospital with terminal cancer. Although she rang to say she couldn’t make it and gave the reason why, her medical benefits and her food stamps were cut off for “failure to cooperate.” Months later, with her medical bills reaching $10,000, she won an appeal that restored her benefits. The decision came the day after she died.

Eventually, after public controversy put pressure on IBM, the company produced a 362-page document outlining how it would fix problems such as “inaccurate and incomplete data gathering” and “incorrect communications to clients.” That catalogue of failures led the Indiana government to cancel the contract and abandon the plan.

In 2016, Allegheny County in Pennsylvania introduced a “predictive risk” computer model aimed at forecasting where child abuse and neglect were most likely to occur. It uses 132 variables to rate the risk of children being mistreated, including the length of time parents spend on public benefits, past involvement with the child welfare system, the age of the mother, whether the child was born to a single parent, mental health issues and periods in jail.
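To see in outline how such a model turns a family’s history into a number, consider a toy version: a few weighted variables, stand-ins for the real 132, combined into a probability. The variables, weights and structure below are invented for illustration and do not reflect Allegheny County’s actual model.

```python
import math

# Invented weights for a handful of stand-in variables; the real model
# draws on 132 variables with coefficients fitted to historical case data.
WEIGHTS = {
    "years_on_public_benefits": 0.30,
    "prior_child_welfare_referrals": 0.45,
    "single_parent_household": 0.20,
    "parent_jail_episodes": 0.40,
}
INTERCEPT = -2.0  # invented baseline log-odds

def risk_score(case: dict) -> float:
    """Combine weighted variables into a 0-1 score via a logistic model."""
    z = INTERCEPT + sum(w * case.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# A hypothetical case, scored on the invented weights above.
print(round(risk_score({"years_on_public_benefits": 3,
                        "prior_child_welfare_referrals": 2,
                        "single_parent_household": 1}), 2))
```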

When the model was applied to historical data in New Zealand, where it was first developed, it was found to predict with “fair, approaching good” accuracy whether a finding of mistreatment of children would be established by the age of five. Tested against similar data in Allegheny County, it scored a predictive rate of 76 per cent. If that sounds impressive, Eubanks points out that the remaining 24 per cent error rate, applied to the 15,139 reports of abuse and neglect made in Allegheny in 2016, would have produced 3633 incorrect predictions. In the first nine months of the new model’s operation, more reports than previously were identified for investigation and those rated as a higher risk were more likely to be substantiated. But, Eubanks adds, of the higher-risk reports that triggered a mandatory investigation, 28 per cent were overridden by a manager and dismissed and only 51 per cent of the remainder were substantiated.
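The arithmetic behind Eubanks’s point is worth spelling out: a 76 per cent predictive rate leaves roughly one call in four wrong. A two-line sketch, using the figures from the article:

```python
reports = 15_139    # abuse and neglect reports, Allegheny County, 2016
accuracy = 0.76     # the model's predictive rate against historical data

print(round(reports * (1 - accuracy)))  # -> 3633 incorrect predictions
```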

The goals set for the Allegheny system are limited and it is used to support human decision-making rather than replace it. Five other cities and states, including New York City and Los Angeles, have since introduced similar systems in what Eubanks describes as “a nationwide algorithmic experiment in child welfare.” The question is whether other jurisdictions will be as careful about placing clear boundaries around their operation. The New Zealand government stopped trials of the system in 2015 after a new minister, Anne Tolley, took over the social development portfolio. A briefing document on the project leaked to the media showed that she had written in the margins, “Not on my watch! These are children not lab rats.”

Australia, too, is looking at the possibilities of predictive analytics. According to the head of enterprise architecture at the Department of Human Services, Garrett McDonald, such a system could aim to minimise overpayments and thereby prevent debts occurring. At IBM’s Think 2018 conference in Las Vegas last month, he quoted the example of people on benefits being required to estimate their income over the next twelve months, with those who underestimated receiving excess payments from the government that needed to be recovered.

“So what we’re looking at,” McDonald said, “is how do we deploy predictive analytics so we can take a look at an individual’s circumstances and say ‘what do you think the probability is that you may end up with an inadvertent overpayment and how can we engage with you proactively throughout the year to help true that up, so that you don’t reach the end of the year and have an overpayment that we need to recover?’” If it sounds positive, it also suggests an extra level of prying into “an individual’s circumstances.”
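What such a check might look like is easy to sketch: compare the income a recipient has actually reported so far against their annual estimate, pro-rated, and flag anyone trending well above it for a proactive contact. The thresholds, parameters and structure below are assumptions for illustration, not the department’s system:

```python
def likely_overpayment(annual_estimate: float,
                       fortnightly_actuals: list[float],
                       tolerance: float = 0.10) -> bool:
    """Flag a recipient whose reported income is running ahead of the
    estimate they gave, pro-rated over the 26 fortnights in a year.
    The 10 per cent tolerance is an invented parameter."""
    expected_so_far = annual_estimate * len(fortnightly_actuals) / 26
    return sum(fortnightly_actuals) > expected_so_far * (1 + tolerance)

# Someone who estimated $13,000 a year but is earning $700 a fortnight
# would be flagged for a mid-year check-in rather than a year-end debt.
print(likely_overpayment(13_000, [700] * 10))  # -> True
```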

Privacy is a real concern for the targets of these schemes. An electronic “coordinated entry system” for homeless people introduced in Los Angeles has been widely lauded for its greater efficiency. It combines data from a disparate array of homeless services and matches it to available resources, reducing overlap and double dipping. Assessment for the scheme includes an intrusive survey that asks questions about experiences of sexual assault and family violence, mental health problems, suicide attempts, drug taking, unprotected sex and prostitution; algorithms use this information to determine those with the greatest need for accommodation. As Eubanks observes, this creates a dilemma for the homeless: “Admitting risky or even illegal behaviour… can snag you a higher ranking on the priority list for permanent supportive housing. But it can also open you up to law enforcement scrutiny.”
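Stripped of the survey detail, the allocation logic is a ranking problem: score each person’s assessed vulnerability, then hand the scarce housing places to those at the top of the list. A minimal sketch, with invented survey fields and point values:

```python
# Invented survey fields and points; the real assessment instrument is
# far more detailed and its scoring is not public in this form.
POINTS = {"mental_health_issues": 3, "suicide_attempt": 4,
          "family_violence": 3, "survival_sex_or_prostitution": 2}

def vulnerability_score(survey: dict) -> int:
    """Add up points for each risk factor the person disclosed."""
    return sum(p for field, p in POINTS.items() if survey.get(field))

def allocate_housing(people: list[dict], places: int) -> list[dict]:
    """Give the available places to the highest-scoring applicants."""
    return sorted(people, key=vulnerability_score, reverse=True)[:places]
```

The dilemma Eubanks describes falls straight out of this structure: every risky disclosure adds points, so candour is rewarded by the algorithm and exposed to anyone else with access to the data.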


The Turnbull government has budgeted to save $3.7 billion in the four years to 2019–20 “primarily due to measures to enhance the integrity of social welfare payments, including expanding and extending data-matching activities with the Australian Taxation Office.” It is data-matching that lies at the heart of the robodebt scheme: where wage or salary figures supplied by employers to the ATO appear to be higher than those reported by people who are or have been on benefits, the computerised system notifies them by letter about the discrepancy and asks them to check their employment income. If the discrepancy is deemed to have gone unchallenged, demands for repayment follow, including through debt collection agencies.
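In essence the matching step is a join and a comparison: line up what a recipient reported with what their employers told the ATO, and flag anyone whose ATO figure is meaningfully higher. A minimal sketch, with invented record layouts and an invented tolerance, since the department’s actual matching rules are not public in this form:

```python
# Invented record layouts, keyed by Centrelink reference number (CRN).
declared = {"CRN123": 14_000}    # income the recipient reported to Centrelink
ato_data = {"CRN123": 21_000}    # employer-reported income held by the ATO

TOLERANCE = 1_000  # invented threshold before a discrepancy letter goes out

for crn, declared_income in declared.items():
    gap = ato_data.get(crn, 0) - declared_income
    if gap > TOLERANCE:
        print(f"{crn}: ${gap:,} discrepancy -> letter asking the recipient "
              "to confirm their employment income")
```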

In the way the system operated in 2016 and 2017, this process occurred irrespective of whether people received the initial letter — letters often went to old addresses — or whether recipients were able to clear the multiple hurdles in their way. In 2016, thirty-six million calls to the department went unanswered, according to the Community and Public Sector Union, and those that were answered often involved long waits.

To date, debt recovery has focused on people on unemployment benefits and youth allowances, which make up $1.5 billion of the total expected savings of $3.7 billion over four years. On 1 July last year, the automated system was extended to include shares, bank interest and other non-employment income. A system that was used to recover debts mainly from those on benefits such as Newstart and the parenting payment now covers pensioners and other retirees more likely to have other assets. Age pensions account for $1.1 billion of the anticipated $3.7 billion in savings, parenting payments $700 million and disability support pensions $400 million.

In the context of almost $80 billion a year spent on these programs, these may not seem large amounts. For some groups, though — such as the unemployed, who receive very meagre benefits to start with — the savings targets represent a significant proportion of outlays. The $1.5 billion targeted over four years works out at roughly $375 million a year, or almost 4 per cent of the $10 billion paid annually in unemployment and sickness benefits. This may well be a realistic goal from the department’s point of view. The introduction of robodebt meant that the Department of Human Services planned a sudden expansion: a system that had sought to recover debts from 20,000 people each year would pursue an estimated 783,000 debts in 2016–17. It appears that this target, provided to an inquiry by the Commonwealth ombudsman, was not achieved; the department did not respond to a request for the actual figure, instead repeating that it was increasing data-matching reviews to more than 600,000 a year.

Conservative governments have tended to be particularly heavy-handed when it comes to pursuing welfare recipients. In 2016, then human services minister Alan Tudge said to those who allegedly owed money to Centrelink, “We’ll find you, we’ll track you down and you will have to repay those debts and you may end up in prison.” It is not an attitude likely to encourage people to cooperate with the government, particularly when overpayments are often inadvertent and frequently the result of government errors. Reducing the scope for humans to exercise discretion made the system much harsher.

The Commonwealth ombudsman reported that the letters notifying Centrelink customers of alleged income discrepancies were “unclear and deficient in many respects,” omitting crucial information such as the helpline telephone number, the fact that help from a human was (theoretically) available, and the fact that an extension of time could be provided to supply information.

Robodebt shifted the onus of proof wholly to the individual to check the accuracy of the information in the letter. A major reason for errors in the debt calculations was that the income figures the ATO passed on to the department from employers were on an annual basis, whereas the department calculates benefits on fortnightly income. If it did not receive contrary information, the department’s policy was to average the ATO figure over the year, meaning the debts calculated were often too high because many people had interrupted periods of work and were entitled to full benefits when unemployed.
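A worked example shows how much damage the averaging can do. Take someone who earned $26,000 over thirteen fortnights of full-time work, then spent the rest of the year unemployed on full benefits. The benefit rate and income test below are simplified assumptions for illustration, not the actual Newstart rules:

```python
FULL_RATE = 550.0    # assumed fortnightly benefit, for illustration
FREE_AREA = 100.0    # assumed income allowed before the benefit reduces
TAPER = 0.5          # assumed 50c reduction per dollar above the free area

def entitlement(fortnightly_income: float) -> float:
    """Simplified fortnightly entitlement under a basic income test."""
    return max(0.0, FULL_RATE - max(0.0, fortnightly_income - FREE_AREA) * TAPER)

# Reality: $2,000 a fortnight for 13 fortnights, then no income for 13.
actual = [2_000.0] * 13 + [0.0] * 13
# Robodebt-style averaging: $26,000 spread evenly, $1,000 every fortnight.
averaged = [26_000.0 / 26] * 26

true_total = sum(entitlement(i) for i in actual)        # $7,150
averaged_total = sum(entitlement(i) for i in averaged)  # $2,600
print(f"alleged debt created by averaging: ${true_total - averaged_total:,.0f}")
```

Under these assumed rates the person was correctly paid $7,150 while out of work, yet averaging concludes they were entitled to only $2,600 and manufactures a $4,550 debt that never existed.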

If people challenged the department’s demands — which required particular persistence — debts were often reduced to zero or a fraction of the initial assessment. One of the cases in the ombudsman’s report concerned a woman who received a debt notice for $5875. When she supplied bank statements and an employment contract showing she had stopped working for an employer whom the department assumed had employed her for the whole year, this was ignored and she received a letter from a debt collection agency demanding immediate payment of the full amount.

When she contacted Centrelink again, she was told that she needed to provide payslips and a separation certificate. When she said she had been unable to obtain them from her employer because the business had changed hands multiple times, and that she had provided bank statements instead, she was told there was nothing further that could be done. She eventually succeeded in having her case referred for manual reassessment and the debt was reduced to zero. She joined the more than 10,000 people — not counting those who gave up the fight — whose debt was reduced to zero in the fourteen months to September last year.

The experiences with robodebt have a parallel with the welfare system in Indiana, where “failure to cooperate” was used on the slightest pretext to deny or cut off assistance. In Australia, a lack of response to a Centrelink letter was assumed to mean a failure to cooperate, even though it was often the result of the letter being sent to an old address or of people being unable to get through on the telephone.


The robodebt controversy has at least caused the government to temper its hardline approach. Centrelink delayed sending out letters over Christmas and the new year to spare people anxiety over the holiday period. It accepted and has at least partly implemented the ombudsman’s recommendations, including by providing access to a dedicated helpline (no helpline number was provided in the initial letters), delaying action to recover debts when people seek a review, allowing people to use bank statements rather than payslips to verify their income, offering more assistance to vulnerable people and no longer automatically charging a 10 per cent debt recovery fee.

As well, Centrelink has recruited 1000 people on contract to bring total staff dealing with customers to about 2500. It now sends out registered letters, which are returned if people have changed addresses. (The Australian Council of Social Service estimated that more than 6500 people first heard about their alleged debt when they were contacted by a debt collector.)

The National Social Security Rights Network (previously the National Welfare Rights Network), the peak body for community legal centres with social security practices, has received fewer requests for help since the initial controversy. Joni Gear, the Network’s legal project officer, says this is partly because people have become more familiar with the system and partly because the department has put considerable effort into improving its communication with people affected.

But she adds that the basic process remains the same, and human checking for errors still doesn’t occur at the initial stage. Debts are still being wrongly assessed and the onus of proof remains with the person accused of having been overpaid. “We have a lot of issues with this type of system, particularly for people who are still social security recipients. It’s a huge burden to have to comply with this process and quite stressful, thinking potentially you have a substantial debt and to have to prove you aren’t being overpaid.”

Gear says the improvements to the system still leave many social security recipients stranded because they are not familiar with an online process. “We need to make sure that face-to-face customer service still exists for these people and that they don’t fall through the cracks.”

What the government didn’t do is adopt the majority recommendation of a Senate committee — that is, of its Labor and Greens members, with Coalition senators dissenting — that the robodebt system be suspended until fundamental issues of procedural fairness were addressed. But the Department of Human Services is hoping that “George” will relieve some of the pressure on its staff. George, a robot, is “starting to explore solving some significant challenges that we have in our face-to-face servicing,” according to the department’s chief information officer Gary Sterrenberg. “One of these is violence. George is able to detect violence in a crowd and that gives us signals to be able to help our staff avoid those situations.”

So people whose frustrations boil over because of the lack of empathy shown by the department’s computers can be brought to heel with the help of another computer. A perfect example of government by algorithm.

“Automated decision-making shatters the social safety net, criminalises the poor, intensifies discrimination and compromises our deepest national values,” writes Virginia Eubanks. Perhaps it doesn’t need to be that way, but it will require the wonders of modern technology to be deployed in a much more sensitive way than they have been to date. ●