The launch of ChatGPT has sent the internet into a fresh spiral of awe and dismay about the quickening march of machine learning’s capabilities. Newly installed as CEO of Twitter, Elon Musk tweeted, “ChatGPT is scary good. We are not far from dangerously strong AI.” Striking a more alarmed tone was Paul Kedrosky, a venture capitalist and tech commentator, who described ChatGPT as a “pocket nuclear bomb.”
Amid these competing visions of dystopia and utopia, ChatGPT continues to generate a lot of buzz, tweets and hot takes.
It is indeed impressive. Type in almost any prompt and it will immediately return a coherent textual response, from a short factual answer to long-form essays, stories and poems.
But it is not new. It is an iterative improvement on the previous three versions of GPT, or Generative Pre-trained Transformer. This machine-learning model, created by OpenAI in 2018, significantly advanced natural language processing — the ability of computers to “understand” human languages. An even more powerful GPT is due for release in 2023.
When it comes down to it, though, ChatGPT behaves like a computer program, not a human. Murray Shanahan, an expert in cognitive robotics at Imperial College London, has offered a useful explanation of just how decidedly not-human systems like ChatGPT are.
Take the question “Who was the first person to walk on the moon?” ChatGPT is able to respond with “Neil Armstrong.”
As Professor Shanahan points out, in this example the question really being asked of ChatGPT is “given the statistical distribution of words in the vast public corpus of (English) text, what words are most likely to follow the sequence ‘who was the first person to walk on the moon.’”
As a matter of probability and statistics, ChatGPT determines the answer to be “Neil Armstrong.” It isn’t referring to Neil Armstrong himself, but to a combination of the textual symbols it has mathematically determined are most likely to follow the textual symbols in the prompt. ChatGPT has no knowledge of the space race, the moon landing, or even the moon for that matter.
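To make Shanahan’s point concrete, here is a deliberately toy sketch in Python of the kind of calculation involved. The frequency table is invented for illustration, and the real system uses a neural network trained on terabytes of text rather than a lookup table, but the principle holds: the “answer” is simply whichever continuation the statistics favour, produced without any knowledge of the moon.

```python
# A toy illustration of next-word prediction. The counts below are invented:
# imagine they record how often each candidate word followed the phrase
# "the first person to walk on the moon was" in some corpus of text.
continuation_counts = {
    "Neil": 9214,
    "Buzz": 1007,
    "a": 512,
    "an": 238,
}

def most_likely_next_word(counts):
    """Pick the word with the highest relative frequency (probability)."""
    total = sum(counts.values())
    probabilities = {word: n / total for word, n in counts.items()}
    return max(probabilities, key=probabilities.get)

prompt = "Who was the first person to walk on the moon?"
print(prompt, "->", most_likely_next_word(continuation_counts))
# Prints: Who was the first person to walk on the moon? -> Neil
```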
Herein lies the trick. ChatGPT functions by reducing text to probabilistic patterns of symbols and completely disregards the need for understanding. There is a profound brutalism in this approach, and an inherent deceit in the output it yields, which feigns comprehension.
Not surprisingly, technologies like ChatGPT have been criticised for parroting text with no underlying sense of its meaning. Yet the results are impressive and continually improving.
Ironically, by completely disregarding meaning, context and understanding, OpenAI has built a form of artificial intelligence that demonstrates these very attributes incredibly convincingly. Does it even matter that ChatGPT has no idea what it is talking about, when it seems so plausible?
So how should we think about a technology like ChatGPT — a technology that is “stupid” in its internal operations but seemingly approaching comprehension in its output? A good place to start is to think of it in terms of what it actually is: a model.
As one of my favourite professors used to remind me, “All models are wrong, but some are useful.” (The aphorism is credited to statistician George Box.) ChatGPT is built on a model of human language that draws on a forty-five-terabyte dataset of text taken largely from Wikipedia, books and certain Reddit pages. It uses this model to predict the best responses to generate. Though its source material is humungous, as a model of the way language is used in the world it is still limited and, as the aphorism goes, “wrong.”
This is not to play down the technical achievements of those who have worked on the GPTs. I am merely pointing out that language can’t be reduced to a static dataset of forty-five terabytes. Language lives and evolves through interactions people have every minute of every day. It exists in a state of constant flux, in all manner of places — including places beyond the reach of the internet.
So if we accept that the model underpinning ChatGPT is wrong, in what sense is it useful?
Leading AI commentators Arvind Narayanan and Sayash Kapoor pin the utility of ChatGPT to instances where truth is not essential: where the user can readily check the output for correctness — when debugging code, for example, or translating — or where truth is simply irrelevant, as in writing fiction. It’s a view broadly shared by OpenAI’s co-founder and chief executive, Sam Altman.
But that perspective overlooks a glaring example of where ChatGPT will be misused: where inaccuracy and mistruth are the intention.
We need to think of the impact of ChatGPT as a technology deployed — and for that matter developed — during our post-truth age. In an environment defined by increasing distrust in institutions and each other, it is naive to overlook ChatGPT’s potential to generate language that serves as a vehicle for anything from inaccuracies to conspiracy theories.
Directing ChatGPT towards nefarious purposes turned out to be easy. Without too much effort I bypassed ChatGPT’s much-vaunted safety functions to generate a newspaper article alleging that Victorian opposition leader Matthew Guy has a criminal history, is implicated in matters relating to Hunter Biden’s laptop, and has been clandestinely plotting with Joe Biden to invade New Zealand and seize its strategic position and natural resources.
While I had to stretch the conspiratorial limits of my imagination, ChatGPT obliged immediately with a coherent piece of text stitching it all together.
As Abeba Birhane and Deborah Raji from the Mozilla Foundation have observed, technologies like ChatGPT have a long history of perpetuating bigotry and occasioning real-world harm. And yet billions of dollars and lashings of human ingenuity continue to be directed to developing them. Surely we need to be asking why?
The prospect of technologies like ChatGPT swamping the internet with conspiracies is certainly a worst-case scenario. But we need to face the possibility and reassert the role of language as a carrier of meaning and the primary medium for constructing our shared reality. To do otherwise is to risk succumbing to the flattened simulations of the world projected by technology systems.
To test the limitations of the world as captured and regurgitated by ChatGPT, I was interested to find out how far its mimicry extended. How would it go describing a place dear to my heart, a place that would be far from the minds and experiences of the North American programmers who set the parameters of its dataset?
I spent a few years living in Darwin and have fond memories of it as a unique place that needs to be experienced to be known. Amid Canberra’s cold start to summer, I have been dreaming of the stifling heat of this time of year in Darwin — the gathering storm clouds, the disappointment when they dissipate without bringing rain, and the evening walks my partner and I would take by the beach in Nightcliff, seeking any coastal breeze to bring relief from the heavy, expectant atmosphere of the tropics during the build-up.
So I asked ChatGPT to write a short story about a trip to Nightcliff beach in December. For additional flourish, I requested it in the style of Tim Winton.
In a matter of seconds, ChatGPT started to generate my story. The mimicry of Tim Winton was evident, though nothing like reading his actual work. But its ignorance of Darwin in December was comical: it went on to describe a generic beach scene in the depths of a northern hemisphere winter.
The story was replete with trite descriptions of cold weather, dark-grey choppy seas and a gritty protagonist confronting the elements (as any caricature of a Tim Winton protagonist would). At one point, the main character “wrapped his coat tightly around him and shivered in the biting wind.” Without regard for crocodiles or lethal jellyfish, he dives in for a bracing swim, “feeling the power of the water all around him.” He even spots a seal!
Platforms like ChatGPT are remarkable achievements in mathematics and machine learning, but they are not intelligent and not capable of knowing the world in the ways we can and do. Yet they maintain a grip on our attention and stoke our fears.
We are right to be concerned. It is past time to scrutinise why these technologies are being built, what functions we should direct them towards and which regulations we should subject them to. But we should not lose sight of their limitations, which serve as a valuable reminder of the gift of language and its extraordinary capacity to help us make sense of the world and share it with others. •