In my day job, I run a lab dedicated to research and development in AI for law enforcement and community safety. We’re preoccupied with building what’s euphemistically described as “technology for social good.” Yes, it’s a trite description, but it does at least hint at a flip side. And that flip side, the misuse of AI technology, is growing fast.
In this category go a slew of harms that range from riding roughshod over intellectual property (stealing copyrighted material to train algorithms) through to the illegal and abhorrent (creating child abuse material). Beyond these extreme examples, a whole heap of downright dodginess is emerging in the name of AI “innovation.”
In response, a patchwork quilt of rules is being constructed across the world that tries to graft ethical and legal constraints onto the technology. But the financial and political forces at play are so powerful that we are seriously entertaining the idea that regulation and safety should play second fiddle to getting the technology out the door and into the hands of everyone. Don’t worry — it’s not that bad, in fact it will probably be good for us… Cigarette anyone?
It’s always refreshing when the AI behemoth gets a poke, and Emily Bender and Alex Hanna invite us to get real with their well-researched and entertaining book, The AI Con. They’re highly qualified to do so. Hanna, formerly part of Google’s “Ethical AI” team, is a sociologist whose work examines data-driven socio-technical inequality. Bender is a linguist and incisive AI critic who, among many scholarly attacks on the way the technology is sold to us, has labelled large language models (the technology behind systems such as ChatGPT) “stochastic parrots.” We know where we’re going with a book subtitled “How to Fight Big Tech’s Hype and Create the Future We Want,” and Bender and Hanna don’t hold back. They are, in their words, scaling the ever-rising AI “bullshit mountain.”
As someone who works in AI, it’d be incongruous if I said all AI is bullshit. I certainly don’t say that, and neither do Bender and Hanna. They acknowledge early in the book that properly conceived, constructed and legitimately helpful AI systems do exist. But a dominant, obfuscating cycle of AI “hype and harm” has also been at play. This includes breathless hyperbole from the “AI boosters,” including (of course) prominent tech leaders. Meanwhile, various harms are increasingly rained down on those affected by AI-infused decision-making, the training of AI systems and the generally inappropriate use of the technology.
Although plenty of well-meaning people work in the AI industry, this book shows we are having the wool pulled over our eyes when the tech is sold to us as the solution to all manner of problems. Equally, Bender and Hanna argue that the claims of some prominent “doomers” that AI may drive us out of existence are just another means to talk up the tech and distract us from the AI harms of today. These pronouncements, they say, are designed to convince us that the industry has got our back by somehow “aligning” AI development with human values. Bender and Hanna interrogate this idea in depth — and their analysis is biting.
Fundamentally, a lot of the hype around AI is facilitated by the fact that artificial intelligence is such a slippery concept. In fact, many AI practitioners give up trying to define it, and I’ve heard plenty of them say things along the lines of “the definition doesn’t really matter.” But of course it matters. If people can’t agree on a concrete definition of a thing, that thing is an easy vehicle for charlatanic claims.
Bender and Hanna call the moniker AI out for what it really is — a marketing term. In the selling, it can be made out to be any number of things. It would be much better, they argue, to use labels describing the specific purpose of a particular algorithm or system. And usually this is something that would be best couched as automation.
For example, if you’re developing software that classifies images, call it an automated image classifier. If your system translates audio to text, call it automated transcription. And, if you’re building a chatbot that can “converse” with a human or synthesise writing, perhaps follow Bender and Hanna’s glorious takedown of the idea that this is somehow intelligence at work and call it a “text extruding machine.”
That label deftly summarises the fact that ChatGPT and its ilk are really a kind of lexical sausage factory, churning out one word after another based on the probability that the next word fits well with those that came before it. If you train the text extruder on a collection big enough for it to do a good job of calculating realistic word probabilities, the results look very human-like. (Come to think of it, we probably should have named our research lab “Automated Statistical Data Analysis, Processing and Loosely Related Technologies for Law Enforcement and Community Safety.” But we went with the zeitgeist and stuck with AI in the title.)
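For the curious, the sausage-factory idea can be sketched in a few lines of code. This is a toy word-level bigram model of my own devising, not anything from the book and nothing like a real large language model (which predicts over tokens with a huge neural network), but it shows the same basic move: pick the next word from the statistics of what followed the current word in the training text.

```python
# Toy "text extruder": a word-level bigram model that always emits the
# most frequent successor of the current word in its training corpus.
# Purely illustrative; real LLMs are vastly more sophisticated.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it and how often."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def extrude(follows: dict, start: str, length: int) -> str:
    """Churn out words, each chosen as the likeliest successor of the last."""
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break  # dead end: this word was never followed by anything
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat sat on the rug the cat slept"
model = train_bigrams(corpus)
print(extrude(model, "the", 4))  # → "the cat sat on the"
```

Feed it enough text and the output starts to look fluent; at no point does anything resembling understanding enter the picture, which is exactly Bender and Hanna's point.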
The human-looking outputs of today’s chatbots (only one type of “AI”) can be so realistic that some people start talking of these systems as if they are human. Bender and Hanna acknowledge that humans are naturally “anthropomorphising creatures,” but they caution strongly against doing this with AI because it means falling into the trap of equating being human with banal computational reductionism. (Next time you hear someone say they asked for ChatGPT’s “opinion,” keep this in mind.)
For example, despite the term hallucination being used widely to describe chatbots producing non-factual results (usually with an air of confidence), the term only appears in The AI Con as one to be avoided because it trivialises human mental illness and incorrectly implies that a stochastic parrot can somehow “perceive” something that’s not there (as they say, it can’t “perceive” anything).
Of course, the most loaded anthropomorphising language of AI hype has it that the algorithms are “thinking” or “reasoning.” This is blatant co-opting of terms that have been used forever to describe higher-level human capabilities. That’s nothing new in the marketing world of course: there are dishwashers said to be intelligent because they detect how much grime is on plates. A technical marvel indeed.
But, you say, today’s AI is much more than an efficient dishwasher. It’s going to make us more productive, isn’t it? It will do better science, revolutionise healthcare and education… the list goes on. Bender and Hanna give many examples that question those possibilities, as well as dissecting the agendas behind such spruiking and offering humanising alternatives. Are they modern Luddites? I suspect they’d wear such a title with honour: they certainly explain how those who use the term as a putdown don’t understand its historical context and the clear analogies to the potential automation-driven harms of today.
The AI Con gives just enough explanation of how the technology works to avoid getting bogged down, but also situates it in the human story. What we call AI today runs a lot faster and produces much better outputs than it did in the past. From a computer science perspective, some pretty big technological breakthroughs have been made over the last sixty years or so in pursuit of the “intelligent machine.” Some of this progress has been courtesy of some very smart algorithms, particularly new types of “neural networks” that draw inspiration from our own neurobiology. And rapid advancements in computer hardware mean machines can process all that data we hoard more efficiently, providing grist to the AI mill. No one could claim that the progress in the domain has been less than remarkable from a technical and engineering standpoint.
Bender and Hanna explore the socio-technical trajectory of AI (yes, it was ill-defined and hype-driven from the start) and illuminate the disturbing links between the pursuit of the next vaguely defined evolution of the technology — artificial general intelligence (the thing that might destroy us all?) — and racism, ableism and eugenics. If you think they’re drawing a long bow, be prepared for some serious food for thought.
Okay, let’s leave AI’s dubious provenance aside for a minute. If we are now at a point where it offers helpful use-cases, does the much-touted theory of a job-taking apocalypse hold water? We are using AI all over the place right now. (Remember, the term can mean many things.) Our lab develops image classifiers and other data analysis tools to reduce online harms, for example. But the idea that we should see AI not as a tool to assist with specific tasks but as a replacement for whole swathes of human jobs does take hype to a new level.
For one thing, even if it were possible, it would be downright dangerous and unethical to automate many professions because of the stakes involved and the need for human accountability (think police, doctors, soldiers). What of other employment types? Won’t the inexorable rise of chatbots and image generators mean we don’t need as many humans using their brains? Yes, there is evidence that many industries are swallowing the hype and rolling out text generators and the like en masse. But here’s Bender and Hanna’s take: in almost all cases AI won’t replace jobs, it will make jobs “shittier,” with plenty of our time shifted to “babysitting” the machines.
They make the point that the whole raison d’être of many professions is subverted by a rush to attempting to automate them. University educators, for example, are hired precisely “to educate, to do the slow, painstaking work of teaching students to engage in critical thinking, to assess their thinking, and to provide guidance.” It’s a theme they return to a number of times: that it’s up to us to accept, or reject, the idea that AI will put us out of work. And I like to think that the threat posed by large-scale technological mimicry of humans is not that the technology will become so “intelligent” that it will destroy us, but that we will become so stupid we roll over.
A couple of decades ago, someone extrapolated from the growth of Elvis Presley impersonators between 1977 and 2000 to conclude that 2043 would be the year everyone on Earth would be able to convincingly rock out as The King. I suspect a similar result could be projected by examining the recent growth rate of self-anointed online AI futurists.
The debate about where we are headed is heated and polarised. And, like all hype storms, it’s very, very noisy. The AI Con provides a much-needed intellectual counterpoint to the relentless characterising of AI as human-equivalent (or beyond), and Bender and Hanna ground us in current realities. Thankfully, in addition to wielding the cane on AI hype, they provide pathways to resistance — including my favourite: “ridicule as praxis.” They urge a world where we see things as they are, don’t accept shady dehumanisations in the name of false promises and give the collective finger to those who do. •
The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
By Emily M. Bender and Alex Hanna | Vintage | $36.99 | 288 pages