The AI chatbot ChatGPT set the record for the quickest uptake of any app ever. It gained a million users in just five days, and a hundred million in its first two months, growing four times faster than TikTok and fifteen times faster than Instagram.
Users, and I include myself in this group, were enamoured of a tool that could quickly answer homework questions, compose a passable poem for a Valentine’s card, and accurately summarise a scientific paper. To many it seemed that our AI overlords were about to appear.
Companies rushed to launch AI tools to rival ChatGPT: Alpaca, BlenderBot, Claude, Einstein, Gopher, Jurassic, LLaMA, Megatron-Turing, NeMo, OPT, PaLM, Sparrow, WuDao, XLNet and YaLM, to name just fifteen in an alphabet soup of possibilities.
Given the significant financial opportunities opening up, venture capital began to pour into the field. Microsoft has invested over US$10 billion in OpenAI, the company behind ChatGPT. Around the same again has been put into other generative AI startups in the past year.
OpenAI is now one of the fastest-growing companies ever. Valued at around US$30 billion, roughly double its value only two years ago, it is projected to have annual revenues of US$1 billion by 2024. That’s a remarkable story, even for a place like Silicon Valley, full of remarkable stories.
But the opportunities go beyond OpenAI. A CSIRO Data61 forecast has predicted that AI will add A$22.17 trillion to the global economy by 2030. In Australia alone, it could significantly increase the size of the economy, adding A$315 billion to our annual GDP within five years. A lot is at stake.
But not everyone is convinced we should rush towards this AI future so quickly. Among them are the authors of the open letter published last week by the Future of Life Institute in Cambridge, Massachusetts. This call for caution has already attracted more than 50,000 signatories, including tech gurus like Elon Musk and Steve Wozniak, the historian Yuval Noah Harari, the chief executives and founders of companies like Stability AI, Ripple and Pinterest, and many senior AI researchers.
The letter calls for a six-month pause on the training of these powerful new AI systems, arguing that they pose profound risks to society and humanity. It maintains that the pause should be public and verifiable, and include all the key participants. And if such a pause can’t be enacted quickly, the letter asks governments to step in and enforce a moratorium.
An article about the open letter in Time magazine goes even further. Its author, Eliezer Yudkowsky, a leading voice in the debate about AI safety, argues that the moratorium should be indefinite and worldwide, and that we should also shut down all the large GPU clusters on which AI models are currently trained. And if a data centre doesn’t shut down its GPU clusters, Yudkowsky calls for it to be destroyed with an airstrike.
You might rightly think it all sounds very dramatic and worrying. And at this point, I should probably put my cards on the table. I was asked to sign the letter but declined.
Why? There’s no hope in hell that companies are going to stop working on AI models voluntarily. There’s too much money at stake. And there’s also no hope in hell that countries are going to impose a moratorium to prevent companies from working on AI models. There’s no historical precedent for such geopolitical coordination.
The letter’s call for action is thus hopelessly unrealistic. And the reasons it gives for this pause are hopelessly misguided. We are not on the cusp of building artificial general intelligence, or AGI, the machine intelligence that would match or exceed human intelligence and threaten human society. Contrary to the letter’s claims, our current AI models are not going to “outnumber, outsmart, obsolete and replace us” any time soon.
In fact, it is their lack of intelligence that should worry us. They will often, for example, produce untruths and do very stupid things. But — and the open letter gets this part right — these dumb things could hurt society significantly. AI chatbots are, for example, excellent weapons of mass persuasion. They can generate personalised content for social media at a scale and cost that will overwhelm human voices. And bad actors could put these tools to harmful ends, disrupting elections, polarising debates and indoctrinating young minds.
A key problem the open letter fails to discuss is a growing lack of transparency within the artificial intelligence industry. Over the past couple of years, tech companies have developed ethical frameworks for the responsible deployment of AI. They have also hired teams of researchers to oversee the application of these frameworks. But commercial pressure appears to be changing all this.
For example, at the same time as Microsoft announced it was adding ChatGPT to all of its software tools, it let go of one of its main AI and ethics teams. Surely, with more AI going into its products, Microsoft needs more, not fewer, people worrying about ethics?
The decision is even more surprising given Microsoft’s previous and very public AI failure. Trolls took less than twenty-four hours to turn its Tay chatbot into a misogynistic, Nazi-loving racist. Microsoft is, I fear, at risk of repeating such mistakes.
Transparency might be a “core principle” of Microsoft’s responsible AI framework, but the company has revealed that it had been secretly using GPT-4, OpenAI’s newest large language model, within Bing search for several months. Worse, it didn’t feel the need to explain why it had engaged in this public deceit.
Other tech companies also appear to be throwing caution to the wind. Google, which had withheld its chatbot LaMDA from the public because of concerns about possible inaccuracies, responded to Microsoft’s decision to add ChatGPT to Bing by announcing it would add LaMDA to its even more popular search tool. This proved an expensive decision: a simple mistake in the first demo of the tool wiped US$100 billion off the market capitalisation of Google’s parent company, Alphabet.
Even more recently, OpenAI released a technical report on GPT-4 that contained no details of the model’s architecture or its training data, despite OpenAI’s core “mission” being the responsible development and deployment of AGI. OpenAI was unashamed about the secrecy, citing the competitive landscape first and safety only second. Secrecy is not, however, good for safety. AI researchers can’t understand the risks and capabilities of GPT-4 if they don’t know how it works or what data it was trained on. The only open part of OpenAI now appears to be the name.
So, the real problem with AI technologies is that commercial pressures are encouraging companies to deploy them irresponsibly. Here’s my three-point plan to correct this.
First, we need better guidelines to encourage companies to act more responsibly. Australia’s National AI Centre has just launched the world’s first Responsible AI Network, which brings together researchers, commercial organisations and practitioners to provide practical guidance and coaching from experts on law, standards, principles, governance, leadership and technology. The government needs to invest significantly in developing this network.
But guidelines will only take us so far. Regulation is also essential to ensure that AI is used responsibly. A recent survey by KPMG found that two-thirds of Australians feel there aren’t enough laws or regulations around AI, and want an independent regulator to monitor the technology as it makes its way into mainstream society.
We can look to other industries for models of how to regulate AI. In high-impact areas like aviation and pharmaceuticals, for example, government bodies have been given significant powers to oversee new technologies. We can also look to Europe, where the forthcoming AI Act places a significant focus on risk. But whatever form AI regulation takes, it is urgently needed.
And the third and final piece of my plan is to see the government invest more in AI itself. Compared with our competitors, we have funded the sector inadequately. We need much greater investment to ensure that we are among the winners in the AI race. This will bring great economic prosperity to Australia. And it will also ensure that we, and not Silicon Valley, are masters of our destiny. •