Inside Story

Ghosts in the machine

A computer scientist takes on artificial-intelligence boosters. But does he dig deep enough?

Ellen Broad | Books | 5 August 2021

Larson warns that overconfidence in machines diminishes our appreciation of human intelligence. Federal Theatre Project Collection/Library of Congress

The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do
By Erik J. Larson | Harvard University Press | $53.99 | 320 pages


It seems like another era now, but only a few years ago many people thought that one of the biggest threats to humankind was takeover by superintelligent artificial intelligence, or AI. Elon Musk repeatedly expressed fears that AI would make us redundant (he still does). Stephen Hawking predicted AI would eventually bring about the end of the human race. The Bank of England predicted that nearly half of all jobs in Britain could be replaced by robots capable of “thinking, as well as doing.”

Computer scientist and entrepreneur Erik J. Larson disagreed. Back in 2015, as fears of superintelligent AI reached fever pitch, he argued in an essay for the Atlantic that the hype was overblown and could ultimately do real harm. Recent advances in machine learning don’t portend the arrival of intelligent computing power, Larson warned; overconfidence in the intelligence of machines simply diminishes our collective sense of the value of our own, human intelligence.

Now Larson has expanded his arguments into a book, The Myth of Artificial Intelligence, explaining why superintelligent AI — capable of eclipsing the full range of capabilities of the human mind, however those capabilities are defined — is still decades away, if not entirely out of reach. In a detailed, wide-ranging excavation of AI’s history and culture, and the limitations of current machine learning, he argues that there’s basically “no good scientific reason” to believe the myth.

Into this elegant, engaging read Larson weaves references from Greek mythology, art, philosophy and literature (Milan Kundera, Mary Shelley, Edgar Allan Poe and Nietzsche all make appearances) alongside some of the central histories and mythologies of AI itself: the 1956 Dartmouth Summer Research Project, at which the term “artificial intelligence” was coined; Alan Turing’s imitation game, which made a computer’s capacity to hold meaningful, indistinguishable conversations with humans a benchmark in the quest to achieve general intelligence; and the development of IBM’s Watson, Google DeepMind’s AlphaGo, Ex Machina and the Singularity. Men who have promoted the AI myth and men who have questioned it over the past century are given full voice.

Larson has a background in natural language processing — a branch of computer science concerned with enabling machines to interpret text and speech — and so the book focuses on the relationships between general machine intelligence and the complexities of human language. The chapters on inference and language, methodically breaking down purported breakthroughs in machine translation and communication, are among The Myth of Artificial Intelligence’s strongest. Larson walks us through why phrases like “the box is in the pen,” which MIT researcher Yehoshua Bar-Hillel flagged in the 1960s as the kind of sentence to confound machine translation, still stymie Google Translate today. Translated into French, the “pen” in question becomes a stylo — a writing instrument — despite the fact that the sentence makes clear the pen must be big enough to hold the box. Humans’ lived understanding of the world allows us to more readily place words in context and make meaning of them, says Larson. A box is bigger than a biro, and so the “pen” must be an enclos — an enclosure large enough to hold it.

Larson focuses on language understanding (rather than, say, robotics) because it so aptly illustrates AI’s “narrowness” problem: that a system trained to interpret and translate language in one context fails miserably when that context suddenly changes. He argues that there can be no leap from “narrow” to “general” machine intelligence using any current (or retired) computing methods, and the sooner people stop buying into the hype the better.

General intelligence would only be possible, says Larson, were machines able to master the art of “abduction” (not the kidnapping kind): a term he uses to encompass human traits as varied as common sense, guesswork and intuition. Abduction would allow machines to move from observations of some fact or situation to a more generalisable rule or hypothesis that could explain it: a kind of detective work or guesswork, akin to that of Sherlock Holmes. We humans create new and interesting hypotheses all the time, and then set about establishing for ourselves which ones are valid.

Abduction, sometimes called abductive inference or abductive reasoning, is a focus of a slice of the AI community concerned with developing — or critiquing the lack of — sense-making or intuiting methods for intelligent machines. Every machine operating today, whether promoted by its creators as possessing intelligence or not, relies on deductive or inductive methods (often both): ingesting data about the past to make narrower and often untestable hypotheses about a situation presented to it.
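A rough sketch may help make the distinction concrete. The toy functions below are my own illustration, not anything from Larson’s book: deduce and induce work forward from given rules and past observations in the way today’s systems do, while abduce merely generates candidate explanations for an observation and leaves the hard part, choosing among them, unsolved.

# Toy contrast between deduction, induction and abduction (illustrative only).

def deduce(rule: dict[str, str], case: str) -> str:
    """Deduction: apply a known rule to a known case; the conclusion is certain."""
    # e.g. rule = {"raven": "black"}, case = "raven"  ->  "black"
    return rule[case]

def induce(observed_colours: list[str]) -> str:
    """Induction: generalise from many observed cases; the conclusion is only probable."""
    # e.g. observed_colours = ["black", "black", "black"] for three observed ravens
    if all(colour == "black" for colour in observed_colours):
        return "all ravens are (probably) black"
    return "no general rule supported"

def abduce(observation: str, rules: dict[str, str]) -> list[str]:
    """Abduction: guess which hypotheses would explain a surprising observation."""
    # rules map candidate causes to the effects they would produce
    return [cause for cause, effect in rules.items() if effect == observation]

rules = {
    "it rained overnight": "the lawn is wet",
    "the sprinkler ran": "the lawn is wet",
}
print(abduce("the lawn is wet", rules))
# ['it rained overnight', 'the sprinkler ran'] -- listing candidates is easy here
# because the rules are handed to us; ranking and choosing among them is not

The gap Larson points to sits in that last step: in this toy the hypotheses are already spelled out, whereas real abduction has to conjure plausible explanations that were never in the data, then judge which one best fits the situation.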

If Larson is pondering more explicitly philosophical questions about whether reason and common sense are truly the heart of human intelligence, or whether language is the high benchmark against which to measure intelligence, he doesn’t explore them here. He is primarily concerned with the what of AI (he describes the kind of intelligence AI practitioners are aiming for) and how this might be achieved (he argues it won’t be with current methods, but might be with greater focus on methods for abduction). Why is another, mind-bending question, one that perhaps throws the whole endeavour into doubt.

While Larson does emphasise the messiness of the reality that machines struggle to deal with, he leaves out some of the messiest issues facing his own sub-field of natural language processing. His chapter on “Machine Learning and Big Data,” for example, makes no mention of how automated translation tends to reproduce societal biases learned from the data it is trained with.

Google Translate’s mistranslation of “she is a doctor,” for example, arises in the same way as the pen mistranslation. In both cases, the system’s translation is based on statistical trends it has learned from enormous corpuses of text, without any real understanding of the context within which those words are presented. “She” becomes a “he” because the system has learned that male doctors occur more frequently in text than female doctors. The “pen” becomes a stylo not simply because pen is a homonym and linguistically tricky but also because the system is reaching for the most statistically likely translation of the word. In both cases the effect is an error, the challenge is divining context, and the fix will involve technical adjustments.
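The shared mechanism can be shown with a deliberately crude sketch. The word pairs and frequency counts below are invented for illustration, and nothing here resembles Google Translate’s actual (neural) pipeline; the point is only what happens when frequency stands in for meaning and the lookup never consults the rest of the sentence.

from collections import Counter

# Toy alignment counts standing in for what a statistical translator learns from
# its training corpus: how often each English word was paired with each French word.
# (Invented numbers, for illustration only.)
translation_counts = {
    "pen": Counter({"stylo": 9_400, "enclos": 310}),           # writing pen vs. animal pen
    "doctor": Counter({"docteur": 8_700, "docteure": 1_200}),  # masculine vs. feminine form
}

def translate_word(word: str) -> str:
    """Return the most frequently seen translation, ignoring sentence context."""
    return translation_counts[word].most_common(1)[0][0]

print(translate_word("pen"))     # 'stylo'   -- even when the sentence implies an enclosure
print(translate_word("doctor"))  # 'docteur' -- even when the sentence says 'she'

On this design, the pen error and the doctor error are the same error, made for the same reason.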

But what of other translation errors? At the conclusion of The Myth of Artificial Intelligence Larson makes a brief further reference to “problematic bias,” citing the notorious mislabelling of dark-skinned people as gorillas in Google Photos as an example, characterising it as one of the issues that has “become trendy” for AI thinkers to worry about. (Google “fixed” the error by blocking the image category “gorilla” in its Photos app.) This is an all-too-brief reference to a theme that is inseparable from the book’s central thesis.

The Myth of Artificial Intelligence convinces the reader that the creation of intelligent AI systems is being frustrated by the fact that the methods we use to build them don’t sufficiently account for messy complexity. Without equal attention being given to the complexities humans introduce into large language datasets, or to the decisions we make when training and tweaking systems based on those datasets, the issue becomes almost entirely one of having the right tools. Left out of this analysis is the question of whether we have the right materials to work with (in the data we feed into AI systems), or whether we even possess the skills to develop these new tools, or manage their deployment in the world.

Larson’s omission of any real discussion of social biases being absorbed by and enacted by machines is odd because The Myth of Artificial Intelligence is dedicated to persuading readers that current machine learning methods can’t achieve general intelligence, and uses natural language processing extensively and authoritatively to illustrate its point. It would only help his case to acknowledge that even the most powerful language models today produce racist and inaccurate text, or that the enormous corpuses of text they are trained with are laden with their own, enduring errors. Yes, these are challenges of human origin. But they still create machines producing errors, machines not performing as they’re supposed to, machines producing unintended harmful effects. And these, like it or not, are engineering problems that engineers must grapple with.


If indeed Larson is right — if we are reaching a dead end in what’s possible with current AI methods — perhaps the way forward isn’t simply to look at new methods but to choose a different path. Beyond reasoning and common sense, other ways of thinking about knowledge and intelligence — more relational, embedded ways of perceiving the world — might be more relevant to how we think about AI applications for the future. We could draw on more than just language as the foundation of intelligence by acknowledging the importance of other senses, like touch and smell and taste, in interpreting and learning from context. How might these approaches inspire revolutionary AI systems?

At one point in The Myth of Artificial Intelligence, Larson uses Czech playwright Karel Čapek’s 1921 play, R.U.R., to illustrate how science fiction’s images of robots hell-bent on destroying the human race have shaped our fears and expectations of superintelligent machines. In Larson’s retelling, these robots, engineered for optimal efficiency and supposedly without feelings or morals, get “disgruntled somehow anyway,” sparking a revolution that wipes out nearly the entire human race. (Only one man, the engineer of the robots, remains.)

It’s true that the robots in R.U.R. get “disgruntled.” But their creators never intended them to be wholly mindless automatons. In Čapek’s imagination they were made of something like flesh and blood, indistinguishable from humans. To reduce factory accidents, they were altered so as to feel pain; to learn about the world, they were shown the factory library. As the robots rebel in the play’s penultimate act, their human creators ponder how their own actions led to the uprising. Did engineering the robots to feel like humans lead them to become aware of the injustice of their position? Did the designers focus too much on producing as many robots as possible, failing to think about the consequences of scale? Should they have dared to create technology like this at all?

Separating the human too much from the machine can make it hard to properly interrogate the myth. The Myth of Artificial Intelligence is a clever, engaging book that looks closely at the machines we fear could one day destroy us all, and at how our current tools won’t create this future. It just doesn’t dwell deeply enough on why we, as their creators, might think superintelligent machines are possible, or how our actions might contribute to the impact our creations have on the world. •