Inside Story

Current affairs & culture from Australia and beyond

Face to face with the future

18 October 2019

Questions need to be asked about the federal government’s embrace of facial recognition technology

“The very point of facial recognition technology is to identify a specific person at a particular place and time. It necessarily interferes with people’s ability to control what others know about them.” David McNew/AFP/Getty Images


Facial recognition technology is fast becoming a basic tool of government, helping patrol borders, police streets across the country, assess eligibility for public programs and even monitor school attendance. Now, the federal government has proposed a more ambitious national facial recognition system — a leap into the future that needs to be debated thoroughly before irreversible decisions are made.

The government’s Identity-matching Services Bill 2019 is currently being reviewed by parliament’s Joint Committee on Intelligence and Security. If passed in its present form, it will empower the home affairs department to create a national database of people’s names, photographs and other identifying information, and to develop a facial recognition system that can match images against the database. The department will also establish a “hub” so that governments across Australia, and in some cases private entities, can exchange “identity-matching services.”

So far, Australian governments haven’t been terribly open about their use of facial recognition. When Queensland’s police minister, Mark Ryan, was asked how the state’s police were using the technology, he merely said that “there is some capability that the police may or may not have, which I can’t get into for obvious security reasons.”

In this respect, the national legislation seems like a step forward. It places facial recognition technology on a statutory footing, something other countries, such as Britain, have not done. But its vague language makes it difficult to know exactly how the new system will work. To take one example, the government claims that the system will identify people only on a “one-to-many” basis: a single individual’s image is compared against a large database to find a match. But nothing in the bill rules out more radical, “many-to-many” matching, in which large groups of people are identified from CCTV footage in real time. The government assures us that this capability is not supported by its current systems and is prohibited by the intergovernmental agreement underpinning the bill. But those constraints are contingent and legally unenforceable.

The bill also gives home affairs minister Peter Dutton broad discretion to expand the system’s scope. He can define new categories of identifying information to be shared and matched over the system, and can even create wholly new “identity-matching services” for use by government and private entities under certain conditions. This uncertainty about the legislation’s scope is obviously dangerous. It also hampers democratic debate. It is difficult to determine whether the new system is justified without a clear understanding of how it will operate.

Facial recognition technology is vulnerable to “function creep” and abuse. The Queensland government set up a facial recognition system during the 2018 Commonwealth Games to identify “sixteen high-priority targets.” But when none of those targets had been identified by the halfway point of the Games, the government “opened up” the system to general policing. It is particularly important that other parts of government, such as parliament and independent integrity bodies like the ombudsman, are able to check the executive’s use of this technology. Ordinary people may never realise they have been identified by a facial recognition system, and so will be unable to ensure that governments respect legal limits.

The bill lacks robust safeguards against these threats. It requires that the hub only be used to identify unknown individuals “in the course of identity or community protection activities,” but doesn’t require governments to get a warrant to this effect, or even to have any reasonable belief that the relevant person has committed an offence. It obliges the minister to report annually to parliament on the system’s operation, but not on crucial matters like security breaches or misuse. Finally, as the Law Council of Australia has noted, the Australian Information and Privacy Commissioner is made responsible for auditing the system but has been given no additional funding to do so.

The government’s embrace of facial recognition is troubling for several other reasons. First, the very point of facial recognition technology is to identify a specific person at a particular place and time. It necessarily interferes with people’s ability to control what others know about them.

Imagine if the government proposed to replace CCTV cameras with physical checkpoints, at which police officers demanded to fingerprint passers-by. Such a shocking incursion on privacy wouldn’t withstand a moment’s scrutiny. Yet facial recognition is even more potent in this regard than fingerprints and other traditional biometrics. Because it is almost impossible to avoid exposing your face to cameras when you move around in public, facial recognition can be used to identify almost anyone. And it can be done from afar, without a person’s knowledge or cooperation.

Second, if the government records and aggregates the results of facial recognition, it can quickly paint a very detailed picture of a person’s life. It might learn very little from identifying a person in a busy street on an isolated occasion, but it can learn much more if it spots a person in the same place every month, around the time when a local environmental group meets nearby.

Third, facial recognition technology makes mistakes. A system trialled in London matched forty-two people on the street to police watchlists, but only eight of the matches were correct — an error rate of 81 per cent. Facial recognition may thus cause government to interfere in a person’s affairs or deny him or her a benefit without justification.

That risk is magnified by the phenomenon of automation bias: the tendency for people to be overly reliant on and uncritical of automated systems. And it may also be unfairly distributed: facial recognition systems tend to be bad at identifying women and people of colour, thus exposing those groups to a higher risk of unjust intrusion.

Fourth, facial recognition technology relies on large databases of personal information that are vulnerable to security breaches or misuse. And, fifth, facial recognition may use personal information for a purpose other than that for which it was originally collected. Our new system is a case in point. It will rely principally on driver’s licence information collected from the states and territories, which people may never have suspected would be used in this way.

These ethical costs can be understood in several different ways. Imagine that a person realises the government has identified them attending a protest. They may have a subjective feeling of violation or disrespect — what legal scholar Daniel Solove calls a “dignitary harm.” But the identification also has a harmful structural effect: it renders the person vulnerable to interference in their affairs. The government could, for example, use this information to deny them benefits. And the identification might have other undesirable effects, such as deterring the person from attending protests in future. That’s bad for the individual, because it intrudes on their autonomy, and bad for society at large, because it discourages the exercise of democratic freedoms of value to us all.

One thing is clear: the ethical costs of the new system will depend significantly on its precise operation, which at this point remains uncertain.

The final question about the new system concerns its ostensible benefits. The government assures us that it is “a critical component of efforts to protect Australians from identity crime and improve the delivery of government services.” But as researchers at the University of New South Wales have argued, it isn’t clear that the new system is necessary, or even particularly well adapted, to achieve those benefits. How will the new system prevent someone from using another’s bank card information to withdraw money, for example, or to buy something online, one of the most common forms of identity crime? To enable informed debate on these issues, the home affairs department should release the findings of its recent independent review of Australia’s framework for combating identity crime.

Ultimately, we must decide as a community whether the costs and benefits of the government’s proposal are reasonably balanced. We might conclude that the government should be able to use facial recognition in some ways but not in others. We might be happy for the government to match those suspected of serious crimes against a database of convicted offenders, but not to match large groups of people, in real time, against a dragnet of almost every adult in Australia. The government still has important questions to answer before we can make those difficult ethical judgements. •
