Inside Story

Will we finally look clearly at facial recognition technology?

Revelations about Clearview AI’s harvesting of online images challenge us all to think carefully about this technology’s impacts

Ellen Broad, 24 January 2020

Throughout history, societies have constrained technologies when it’s uncertain whether they will do more harm than good. Gabriel Pevide/iStockphoto


Last weekend another dystopian-sounding facial recognition application hit the headlines. This time, it was a little-known start-up, Clearview AI, which is providing identity-matching software to law enforcement agencies in the United States.

Stories about how facial recognition is being used by law enforcement aren’t that surprising these days. But the Clearview AI revelations, published by the New York Times, made the tech industry sit up. Here was a company that, even in a world of increasingly invasive facial recognition applications, had crossed a line. It scraped the open web, collected billions of photos of people, and built an app enabling users to match their own pictures of a person with the photos in that vast database, with links to pages on the web where those photos appeared.

This kind of application — breathtaking in scale, deeply invasive in implementation — has long been technically possible; it just wasn’t something technology companies were keen to do (or at least, to be seen as doing).
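For readers curious about what “technically possible” means here: identity matching of this kind is typically assembled from two standard pieces — a model that converts each face into a numerical “embedding,” and a nearest-neighbour search over a database of stored embeddings. The Python sketch below is purely illustrative of that general technique, not a description of Clearview’s actual system; the embed_face function is a hypothetical placeholder standing in for a trained face-recognition model.

```python
import numpy as np

def embed_face(image: bytes) -> np.ndarray:
    # Hypothetical placeholder: a real system would run a trained
    # face-recognition model here. We derive a fake 128-dimensional
    # vector from the image bytes purely for demonstration.
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    return rng.standard_normal(128)

def build_index(images: list[bytes]) -> np.ndarray:
    # Precompute a unit-normalised embedding for every collected photo.
    vectors = np.stack([embed_face(img) for img in images])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def match(query: bytes, index: np.ndarray, urls: list[str], top_k: int = 5):
    # Compare the query face against every stored face by cosine
    # similarity and return the source URLs of the closest matches.
    q = embed_face(query)
    q = q / np.linalg.norm(q)
    scores = index @ q
    best = np.argsort(scores)[::-1][:top_k]
    return [(urls[i], float(scores[i])) for i in best]
```

The point of the sketch is how little is involved: once the photos have been scraped and embedded, matching a new face against billions of stored ones is a routine search problem. The barrier was never technical.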

Until recently, conversations about facial recognition technology haven’t usually gone much further than whether we should or shouldn’t ban it. There has been no middle ground. Supporters are cast as champions of law and order, whatever that takes; opponents as radical leftists with a disregard for public safety, or Luddites opposed to technological progress. The many different choices made in designing and deploying the various tools and methods that fall under the umbrella of “facial recognition” — some of them sensible, others careless, some downright ugly — tend to get lost along the way.

Many things are technically possible. That doesn’t make them safe, ethical or useful. It is technically possible to build a three-wheeled car; it just might keel over if you go round a bend at more than forty kilometres per hour. It’s technically possible to manipulate the software measuring a car’s exhaust emissions so that readings are artificially lowered, but that doesn’t mean it’s legally or socially permissible.

Technologies are not monolithic. The design of every product rests on a range of choices and trade-offs. Some products are well designed and conscious of their social and ecological footprints. Other products pose threats to physical safety, discriminate against people, or are designed to cheat. We need to think carefully about how we want technology to be applied — how we want it to be manifested in the world. Facial recognition is no different.

Clearview AI’s facial recognition application wasn’t just bad because it scraped billions of images of people without their knowledge or consent. If the details of the New York Times’s investigation are true, it went a lot further than that. It built software capable of monitoring whom its users — mostly law enforcement agencies — were searching for. It manipulated image search results, removing some matches. And images uploaded by police were stored on Clearview’s own servers, with little verification of data security.

Are these things we want? Are these practices okay?

Clearview AI is just the latest in a long line of stories about buggy, inaccurate, invasive and outright offensive implementations of facial recognition. Face-detection settings on cameras that only work on certain faces. Image-tagging software making racist comparisons. Identity-matching databases used to investigate crime consistently misidentifying members of already marginalised groups. Software engineers matching women’s faces with adult videos online, to help men check if their girlfriends had ever acted in porn.

Last week European Union regulators indicated they’re considering a ban on facial recognition technology for up to five years — with some exceptions — while they figure out the technology’s impact and the regulatory issues that need to be tackled. Google and Facebook have already expressed cautious support for such a ban.

Some cities have already started curtailing facial recognition: in San Francisco, the city’s Board of Supervisors voted in 2019 to ban local law enforcement from using the technology. In New York State, the education department demanded that a school district stop using the technology in public schools.

Speaking to the New York Times, one investor in Clearview AI, David Scalzo, was doubtful about the power of any prohibition. Technology can’t be banned, he said. “It might lead to a dystopian future or something, but you can’t ban it.”

It’s true that a technology, once discovered, can’t be undiscovered (though some have been forgotten). But throughout history, societies have temporarily banned the development of technologies, or certain applications of them, when it’s unclear whether they will do more harm than good: think nuclear power, or gene editing. Sometimes temporary bans become permanent ones. Sometimes they’re lifted once we’ve used the breathing space to figure out the rules of engagement.

And yes, it’s true that bans can be broken. But technologies don’t break bans — people do. People who do not respect or recognise the concerns of the societies they live in.

Technologies do not lead us into a dystopian future: we decide the future we want. •