Inside Story

AI through the looking glass

Could artificial intelligence make us less human?

Kurt Johnson • Books • 11 November 2024 • 1270 words

Statistics at any resolution can’t emulate human emotion, reason or instinct, argues philosopher Shannon Vallor (above). University of Edinburgh


The famous Turing Test, devised to assess whether a computer program possesses intelligence, exemplifies the flawed thinking at the heart of artificial intelligence. It involves a human subject, a computer program, and a judge who poses questions and decides which written response comes from the human and which from the program. If the judge can’t tell them apart, the program is deemed intelligent.

Two flaws are immediately obvious. Some humans are easier to fool than others, so hanging a claim as grandiose as machine intelligence on something as unreliable as human subjectivity is hazardous. A deeper issue is the test’s closed loop: as the ultimate arbiter, humans bring their own world to the task of judging. It’s all too easy, all too human, to gaze into the AI, see our reflection and then judge it to be intelligent.

Enter philosopher Shannon Vallor’s The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, which makes a compelling case that entrusting AI with responsibilities traditionally discharged by human intelligence poses a dire threat to society’s capacities, and at the very moment we face climate change and other existential risks. The problem lies in our confusion: mistaking the articulate for the intelligent, and the world through the looking glass for real life.

But first, the uncontroversial. AI as it exists today is not genuinely intelligent by any robust definition. It can’t comprehend, analyse or reason. It hallucinates and fails even to solve basic mathematical problems. Confronted by the complexity of the physical world, it stumbles. Driverless cars, for example, supposedly just around the corner for years, remain a danger on the streets.

How AI emulates intelligence without possessing it is important to understand. Generative AI relies on statistical models trained on immense datasets. Its text-generating capability, for example, uses an initial prompt to predict, one word at a time, the likeliest continuation of a sequence. The output simulates fluency without the model itself understanding any of its meaning. No machine could be more perfectly designed to exploit the Turing Test’s blind spot: to trick a human judge into presuming intelligence when the only mind in play is the judge’s. Only this time we’re all the judges.
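To see the mechanism in miniature, consider the sketch below: a toy bigram counter, nothing like a production model in scale or sophistication, but built on the same predict-append-repeat loop. The corpus and all names are invented for illustration.

```python
# A toy bigram "language model": count which word follows which in a tiny
# corpus, then generate text by repeatedly appending the likeliest next word.
# Real generative AI uses neural networks over vast datasets, but the loop
# is the same in principle: predict, append, repeat.

from collections import Counter

corpus = "the cat sat on the mat and the cat slept".split()

# Tally each word's observed successors (a crude stand-in for training).
follows: dict[str, Counter] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        options = follows.get(words[-1])
        if not options:  # no continuation ever observed: stop
            break
        # Greedily pick the statistically likeliest next word. There is no
        # comprehension here, only frequency.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the cat"
```

The output is grammatical-looking yet meaningless, which is exactly the point: statistical likelihood produces fluency, not understanding.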

Of course, models can be trained to perform a far wider range of behaviours than text generation. For Vallor the root problem is not the existence of such generative AI but its misapplication. Consider the job-candidate vetting software used by HR departments and recruiters to wade through the flood of applications. This software can detect applications from women even after demographic details have been scrubbed, by finding subtle stand-ins such as a particular school or particular names. Statistically this is valuable information, and it can provide empirical evidence to help counter prejudice. It becomes a problem when the software, having removed the human, automates a historically biased process. The model behaves this way because its training data records how humans have behaved. Once automated, the prejudice is locked in.
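A toy sketch makes the proxy problem concrete. Everything below is invented for illustration (the feature names, the 90 per cent correlation), but it shows how a scrubbed attribute leaks back in through a correlated stand-in:

```python
# Invented data, hypothetical feature names: a demonstration that removing a
# demographic column does not remove the demographic signal when another
# feature correlates with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)  # the attribute we "scrub" from the inputs

# A stand-in feature (say, attendance at a historically single-sex school)
# that agrees with gender about 90 per cent of the time.
flip = rng.random(n) < 0.1
school = (gender ^ flip).astype(int)

years_experience = rng.normal(5.0, 2.0, n)  # innocuous, uncorrelated feature

# Demographics scrubbed: the model never sees `gender` as an input...
X = np.column_stack([school, years_experience])

# ...yet a simple classifier recovers it from the proxy alone.
clf = LogisticRegression().fit(X, gender)
print(f"gender recovered with ~{clf.score(X, gender):.0%} accuracy")  # ~90%
```

Scrubbing the column removes the label, not the pattern; any model trained on historical outcomes will rediscover the stand-in.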

Central here is AI’s opacity. These models are so complex, some relying on trillions of parameters, that it is impossible for humans to parse how any given statistical likelihood was calculated. Most problems only become obvious once the automation has been operating for some time. Unique cases, and people who deviate from the norm, are liable to be mislabelled, because no amount of training data can capture the full spectrum of humanity.

For Vallor the philosophical mistake is simple: statistics at any resolution can’t emulate human emotion, reason or instinct. She contrasts her belief in moral evolution and the uniqueness of humans with a view prominent among tech evangelists: that humans are just very complex machines, simulable given enough computing horsepower.

When the Turing Test was devised in 1950, it was intended less as a rigorous definition of machine intelligence than as a distant horizon for engineers to strive towards. A wave of techno-utopianism was cresting, carried along by the hope that technology might relieve humanity of its biases and prejudices and lead us into an objective, purely rational future. It was an idealistic vision, if a little chilling. What has actually happened is that our potent statistical systems, rather than creating a sanitised objective reality, have inherited our flaws and become a mirror image of humanity.

Vallor traces the intellectual streams of Silicon Valley through to today’s so-called long-termists. Growing out of effective altruism, itself an expansion of the contentious ethical calculus of utilitarianism, long-termists profess to fear humanity’s possible extinction or “immiseration” at the hands of AI. They have begun to demand immense sums of money to make sure AI systems don’t overreach. That many are also part of the lobby opposing regulation of the sector betrays this as a disingenuous cash grab. What’s more, it represents a diversion of resources, Vallor argues, in money and in attention, from the far more urgent problem of climate change.

A particularly fascinating aspect of The AI Mirror is its treatment of science fiction. Many blockbuster visions of AI present a future where machines will inherit the worst aspects of humanity. For Vallor, the screen’s black mirror reflects the nerd rage, lust for power and icy technocracy of today’s Silicon Valley. Portraying AI as a bogeyman that will inevitably enslave humankind is not just grim, it’s dull. There is nothing intrinsic to intelligence that makes it strive for dominance, Vallor says. Instead, she advocates a more expansive imagination where AI can have positive human traits, like humour.

Back in the real world, Vallor suggests that AI should augment rather than automate human decision-making. She offers enticing possibilities, like using AI to restore forgotten languages or to match a distressed patient with an appropriate therapist. Augmentation rests on the belief that humanity possesses unique, unautomatable faculties: our moral intuition, our empathy. These faculties resist data capture, never making it into the statistical models, and so do not exist in generative AI.

Vallor is also right to point out that just as the ubiquity of map apps has eroded our sense of direction, so too will delegating moral and ethical decisions to AI erode our ability to navigate them ourselves. The decline has already started. She gives a chilling example from Britain, where AI has been used to judge the likelihood that a prisoner will reoffend. Even when a human judge is present, they often defer to the software’s verdict.

One problem with Vallor’s analysis is an overreliance on the philosophical lens (and the extended mirror metaphor) to organise her ideas. The present state of AI is not the product of any intellectual tradition; philosophy arrived afterwards to justify the tech. Other commentators have argued more persuasively that AI is not utilitarianism gone mad but the logical product of companies’ harvesting of data in the interest of maximising efficiency, a practice that began long before modern computing. These commentaries hark back to Frederick Taylor’s use of scientific management to break down tasks, develop metrics and optimise efficiency. Even then, automation was the final goal, always as a means of maximising profit.

Vallor’s mirror is a productive way of thinking about AI. The world through the AI looking glass is a warped reflection we should be wary of staring at for too long, lest we mistake it for reality. •

The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking
By Shannon Vallor | Oxford University Press | $55.95 | 272 pages