The most significant consumer innovation of the last decade was announced on 9 January 2007. Despite uneven health, Apple chief executive Steve Jobs took to the stage at the Macworld Conference in San Francisco and unveiled the iPhone. Ten years later, a billion of them had been sold. Today, many think touchscreen smartphones are as necessary as underwear and more important than socks. Yet when Jobs launched his revolutionary phone, many believed it would fail. His counterpart at Microsoft, Steve Ballmer, laughed at the device, calling it “not a very good email machine.”
The critics were wrong, and wrong in a major way. As industry insiders, they all paid the price for their poor predictions: their products would exit the industry, replaced by the newcomer Apple, of course, but also by Samsung and Huawei. What turns out to be a successful innovation might not seem that way at first. There is a reason for that: innovation is new to the world. If it were obvious, someone would already have done it.
Technology forecasts can also be wrong in the other direction. In 2001, after years of stealth development, inventor Dean Kamen unveiled the Segway: a personal transporter with a wheel on either side of a platform and a stick and handlebar jutting from its centre. At a time when computer-controlled gyroscopes were rare, it seemed like magic. Improbably, the Segway balanced itself and its occupant upright. The rider simply leaned forward to accelerate and backward to stop. It seemed like something from the future. It seemed like something you wanted to try.
Many others heralded the Segway as a revolution. Steve Jobs said it was “as big a deal as the PC.” John Doerr, the famous venture capitalist behind Netscape and Amazon, believed it would be bigger than the internet. It was hard to find many early detractors. Alas, a decade and a half later, you might see a Segway used by a traffic cop or a group of tourists being led around a city. Otherwise, it is a discarded technological concept. Why didn’t the Segway work out? There were some safety issues, but those haven’t prevented the police from adopting them. One theory is that riders simply stuck out too much, drawing attention in an unwelcome way.
The point is that our forecasts — optimistic or pessimistic — for individual technologies can often be way off base. Marc Andreessen, Netscape founder and venture capitalist, compares his performance with that of Warren Buffett, the world-famous proponent of “value investing”: “Basically, he’s betting against change. We’re betting for change. When he makes a mistake, it’s because something changes that he didn’t expect. When we make a mistake, it’s because something doesn’t change that we thought would.”
We started this discussion of technological prospects with the iPhone and Segway precisely because of this question of far-reaching impact. The iPhone established a dominant design for smartphones. Thanks to people having the internet in their pocket, we got Uber, Airbnb and Spotify. We got Facebook, Instagram, LinkedIn and Twitter to inform, engage and infuriate us. Developing economies skipped over bank accounts to mobile banking, such as Kenya’s ubiquitous M-Pesa service. If the doubters had been right, we would have had none of these things. With the Segway, by contrast, urban transportation did not change. Billions of people might have switched to a travel technology that eased congestion and cut emissions, but it never happened.
Are there still big breakthroughs to be made? On this, economists disagree. There are technological optimists who believe that big breakthrough innovations lie in our future, and pessimists who believe that future innovations will not match those of the past. How can we evaluate their arguments?
The tech optimists
In early 2014, owners of Tesla’s Model S electric vehicles received a recall notice from the US National Highway Traffic Safety Administration related to a problem that could cause a fire. What car owners usually have to do in these cases is return the car to a dealer to be fixed. This is costly for everyone involved. This time it was different. The problem could be fixed by updating the software in the car, and the update could be pushed to almost 30,000 vehicles overnight because Teslas are connected by default to the internet. No muss, no fuss.
The fact that this could now be done for so many products with embedded software illustrates why Andreessen proclaimed that “software is eating the world.” Put simply, real things were no longer fixed in their capabilities. Because of software, they could be enhanced without having to physically rebuild them.
The tech optimists are not optimistic simply because they know that the universe has more to reveal. They are optimistic because they believe that we are still living in a time of accelerating technological change. Andreessen argues that the benefits of computing technologies and the digitisation revolution are ongoing because they are based on software — something that scales easily. More than half the world’s population came online in just the past decade, and the world is not yet fully connected. Moreover, the value of that network grows roughly with the square of the number of people on it — an effect known as Metcalfe’s law.
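The arithmetic behind that claim is simple to sketch. Here, purely as an illustration in Python (our example, not Andreessen’s), is why the number of possible connections, and hence potential value, grows so much faster than the number of users:

```python
# Illustrative only: under Metcalfe's law, a network's potential value is
# often proxied by the number of distinct pairs of users who could connect.

def potential_connections(n_users: int) -> int:
    """Distinct pairs in a network of n_users people: n * (n - 1) / 2."""
    return n_users * (n_users - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} users -> {potential_connections(n):>12,} possible connections")
```

A tenfold increase in users yields roughly a hundredfold increase in possible connections, which is the sense in which value rises disproportionately.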
From the perspective of an innovator in software, that means the customer base is still growing rapidly. What is more, with greater numbers of users, distributed infrastructure — known commonly as “the cloud” — becomes cheaper to use, even aside from the reductions in the cost of hardware in data centres. In 2000, it might have cost a start-up $150,000 a month to host a basic internet application. Today, in the cloud, it costs less than $150. Those gains translate into increased profitability and lower risk for every single software entrepreneur.
Tech optimists point to multiple trends. Since the 1960s, Moore’s law has seen processing power double roughly every eighteen to twenty-four months. As a consequence, microprocessors in 2018 had eight million times as many transistors as the best microprocessor of 1971. Worldwide data storage is now around a zettabyte: 10 to the power of 21 bytes. Each minute, 300 hours of video are uploaded to YouTube. The next mobile telephony standard, 5G, will operate at many times the speed of the previous generation of wireless technology.
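As a rough, back-of-the-envelope check on that transistor figure, the sketch below assumes the 1971 baseline is Intel’s 4004, with about 2,300 transistors, and a doubling every two years; both assumptions are ours, for illustration only.

```python
# Back-of-the-envelope Moore's law arithmetic. Assumptions (illustrative only):
# the 1971 chip is Intel's 4004 with ~2,300 transistors, and transistor counts
# double roughly once every two years.

transistors_1971 = 2_300
doublings = (2018 - 1971) / 2          # about 23.5 doublings in 47 years
estimate_2018 = transistors_1971 * 2 ** doublings

print(f"Estimated 2018 transistor count: {estimate_2018:,.0f}")
print(f"Multiple of the 1971 chip: {estimate_2018 / transistors_1971:,.0f}x")
```

Compounding at that rate produces a multiple of roughly ten million, the same order of magnitude as the eight-million figure above.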
Technologies are sometimes used in unexpected ways. Graphics processing units (developed for hardcore gamers) were used to train neural networks designed to emulate the learning functions of the brain. These new developments in what is called machine learning have led to a renaissance in artificial intelligence research.
Around five years ago, using deep learning methods pioneered by several Canadian university professors, computers’ ability to understand speech and recognise images took a leap forward. These new methods mimicked aspects of brain function and allowed multiple levels of sorting and classification. The result effectively allowed computers to pick up nuance and associations that even humans would miss. In October 2016, Microsoft engineers announced that their speech recognition software had attained the same level of accuracy as human transcribers when it came to recognising speech in the “Switchboard Corpus,” a set of conversations used to benchmark transcribers. In a controlled environment, machine voice recognition is now more likely to comprehend what we’re saying than the average human is. Meanwhile, facial recognition algorithms used by Baidu, Tencent-BestImage, Google and DeepID3 have an accuracy level above 99.5 per cent, compared with humans’ rate of 97.6 per cent.
The best way to explain what has happened is to focus on what the new artificial intelligence techniques do best: prediction. Machines can now take a large amount of data (numbers, images, sound files, or videos) and review it for relationships that allow them to forecast with a high degree of accuracy. Image recognition, for example, is basically a prediction activity: “Here is a picture. What is your best guess at what someone would call this?”
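To make that “best guess” framing concrete, here is a minimal sketch of image recognition as prediction. It uses scikit-learn’s bundled handwritten-digit images and a simple classifier rather than deep learning; the library and the details are our choices, not a description of any system mentioned above.

```python
# A minimal sketch of image recognition as prediction: learn a mapping from
# pixels to the label a human would give, then guess the label of unseen images.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 8x8 pixel images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5_000)  # a simple classifier, not a neural network
model.fit(X_train, y_train)                 # learn from labelled examples

print("Best guess for one unseen image:", model.predict(X_test[:1])[0])
print("Share of unseen images guessed correctly:", round(model.score(X_test, y_test), 3))
```

The machine is not told what a “seven” is; it simply learns, from labelled examples, to predict the label a person would assign.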
Although these technologies still make mistakes, they can outperform humans in real-world contexts. In 2011, IBM’s Watson computer played the quiz show Jeopardy! against two champions of the game, Ken Jennings and Brad Rutter. Watson won. IBM’s next major human-versus-machine contest came in 2018, when the company showed off its Project Debater. The computer was able to engage at a reasonably coherent level with a human counterpart on the topic of whether government should subsidise space exploration.
Learning machines don’t just have to rely on their own experience. Indian online retailer Myntra recently deployed an algorithm that designed new clothing images by modifying and combining popular patterns. One of those computer-designed t-shirts, featuring blocks of olive, blue and yellow, is now a bestseller. Artificial intelligence is arguably the next general-purpose technology: a technology so foundational that myriad other innovations grow on its base. We have seen this happen with the steam engine, electric power, plastics, computers, and the internet. The optimists believe that artificial intelligence could have the same potential.
To see how technology might drive science, remember that Galileo’s research — which showed convincingly that Earth revolved around the sun — was based on a technological advance in the form of a telescope that could magnify distant objects thirty times. A few decades later, the creation of a microscope that could magnify tiny things 300 times enabled Robert Hooke to document the existence of cells. These massive breakthroughs in astronomy and biology would have been impossible without advances in glass production and precision manufacturing.
Today, it’s easy to point to similar advances. The use of gene editing could revolutionise medical science. Strong and light materials such as graphene could change manufacturing. These are radical technologies that could bring about decades of further innovation.
The tech pessimists
Others take an altogether dimmer view of our prospects. They worry that we have already picked the low-hanging fruit over the past two centuries, and that the outlook for the next century is bleaker. Their argument is not based on some oracle-like insight into the future but instead on the inescapable economic law of diminishing returns.
In economics, the figure that looms largest on this side of the argument is Robert Gordon. His concern revolves around just how great the relatively recent past has been. Prior to 1870, economic growth occurred at a trickle. But after 1870, the major innovations at the heart of the Industrial Revolution began to work their way fully through society. It wasn’t just that steam power made factories more efficient; our knowledge of science also brought us to a point where new technologies were shaping the environment around us.
In the century following 1870, most people in the United States and Western Europe (and a handful of other places) went from carrying water to having it delivered to their houses at the turn of a tap, instantly and in a form safe enough to drink. Washing machines saved time and made our clothes last longer. Indoor toilets took sewage far away from houses at the push of a lever or yank of a chain. Energy could be easily delivered to people’s houses. Information was brought in by the radio, telephone and television. Cars provided freedom and reshaped the urban form. A reasonable person might suppose that society will never again see such radical changes. The interesting thing is that we can see this in the data on economic growth, which measures how innovations have translated into productivity improvements.
Growth has its ups and downs. Smooth out the temporary recessions and upswings, though, and the century until 1973 was an era of steady progress that suddenly petered out. Initially, many economists saw the slowdown as an aberration. Nobel laureate Robert Solow, who pioneered the field of economic growth, said in 1987 that “you can see the computer age everywhere but in the productivity statistics.” Maybe it was a mismeasurement because computers were assisting services whose productivity was notoriously hard to measure? The economic historian Paul David reminded us that when electricity was introduced, it took decades for it to show up in measures of productivity. Maybe once firms worked out how to use computers effectively, the productivity gains would become apparent?
Many advanced nations did experience a surge in productivity growth in the late 1990s. Yet its rate then slowed in the twenty-first century. For workers, things are even worse because of a decoupling of wages from productivity. Even where firms are getting more output for a given level of inputs, they are not sharing most of those gains with employees.
Consequently, a generation of adults has not experienced the fruits of productivity improvements. They are as well educated as their immediate forebears, they are more lightly taxed, and the businesses that employ them have the benefits of more integrated global financial markets.
The problem comes down to something economists call “diminishing returns.” When England continued to put more land under farming during the nineteenth century, as David Ricardo noted, the productivity of additional acres fell. Take any fixed resource and there is only so much you can extract from it. In the twentieth century, Solow observed that this held for other types of capital such as machines. It also applied to workers. The only way out was technological progress, which allowed society to get more out of the same inputs.
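To see the pattern Ricardo had in mind, here is a minimal numerical sketch, assuming a purely illustrative production function in which output rises with the square root of acres farmed:

```python
# Illustrative only: a concave production function, output = 100 * sqrt(acres),
# in which each additional acre adds less output than the one before it.

def output(acres: float) -> float:
    return 100 * acres ** 0.5

previous = 0.0
for acres in range(1, 6):
    total = output(acres)
    print(f"acre {acres}: total output {total:6.1f}, added by this acre {total - previous:5.1f}")
    previous = total
```

Each additional acre adds less than the one before; in this framing, technological progress shifts the whole function upward so that the same acres yield more.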
So long as the growth in knowledge we had achieved in the past continued into the future, there was nothing to worry about. Yet here is where the tech optimists and tech pessimists part company. The optimists, as we have noted, anticipate rapid technological progress. The pessimists are not so sure. If progress really is continuing apace, they ask, why have this generation’s inventions not transformed our lives in the way the great twentieth-century innovations did? Do the twenty-first century’s inventions really compare with air conditioning, airplanes and automobiles (to take just one letter of the alphabet)?
To tech pessimists such as Gordon and Tyler Cowen, the answer comes from merely looking at how technological changes from the 1870s to the 1970s transformed the way we live. Electricity transformed work, shifting people from agriculture to the cities. In the cities, that shift combined with running water, sewerage systems, and efficient heating and cooling techniques to allow for a comfortable and productive urban life. Electrical appliances reshaped household economics, freeing women to join the paid labour force. Transport by road and air was transformed, facilitating unprecedented interregional trade and travel. All this added up to dramatic improvements in productivity. Since 1973 there have been useful inventions, to be sure. But they are yet to deliver an equivalent surge in productivity.
What has the pessimists worried is that researchers and scientists are finding it harder to unearth new ideas. Research by Northwestern University’s Ben Jones shows that Nobel laureates are getting older. To be more precise, over the past century the age at which someone does research that will win them a Nobel prize has been rising. The same is true of work that leads to a patent. In addition, more knowledge breakthroughs are being made by teams rather than individuals. This points to more specialisation in knowledge production, with fewer instances in which an individual comprehends developments at the frontier of multiple disciplines. Because this raises the cost of innovating, Jones calls it the increasing “burden of knowledge.”
As technology advances, it becomes tougher to find the next new thing. Take semiconductors. As we have noted, Moore’s law has seen a steady doubling of the density of computer chips every eighteen to twenty-four months. That doubling continued up until the mid-2000s, but, significantly, achieving recent increases costs about eighteen times as much as similar proportionate increases did in the 1970s. The same pattern exists in agriculture and medical research. What was once easy has become hard. It suggests that just to maintain even the slower productivity growth we now have, innovators must run faster and faster.
Uncertain prospects
The tech optimists and the tech pessimists both have a point. The optimists note that there is still potential for new knowledge, and can point to exciting possibilities that are attracting significant scientific and engineering resources. The pessimists’ colder calculations remind us how exceptional past growth was and point to the logical implication that those ideas that gave the biggest boosts to productivity were likely ones we have already exploited. Historians such as Joel Mokyr have looked at all this discussion and remind us that we have been here before. In every decade, one can find optimists and pessimists. And, at least as far as continuing technological change is concerned, the optimists have usually been on the right side of history.
What does this all mean, however, for the creation price — that is, the price that must be paid to reward innovators and entrepreneurs for their efforts? The answer lies in the cost of innovation. Where the tech optimists and tech pessimists fundamentally differ is in how costly it will be to innovate in the future. If there are technological opportunities just waiting to be exploited, as the optimists claim, then the creation price can be set relatively low. On the other hand, if the cost of innovation is rising, as the pessimists claim, then the creation price will be higher, and growing over time. More resources will have to be dedicated to innovative activities to maintain historical growth rates. In that situation, we will have to ask if it is a price worth paying.
Forecasting the future is like driving through fog. We need to accept that the creation price is uncertain. It could be high, low or somewhere in between. It will likely be different for different technological opportunities and directions. But at the same time, everyone faces this uncertainty. No one has a special insight into the future. That includes entrepreneurs. And given that uncertainty, the best way to get more equality and more innovation is to reduce the costs those entrepreneurs face today.
Planning for flexibility
Which brings us to equity. Here, the goal ought to be a set of institutions that provide a safety net, both for entrepreneurs who fall short of the stars and for those left behind when the rocket takes off. It pays to think about such institutions as a form of insurance, providing greater resilience in the face of a changing world. If you’re giving advice to a teenager, now is the time to tell him or her about the value of being flexible. Education isn’t just an investment; it’s about providing more life options.
To achieve this in the education system, we propose making teacher effectiveness the core focus of schooling, improving the quality of vocational training, and encouraging MOOCs (massive open online courses). And it makes enormous sense to use the talents of the 51 per cent of the population who are women by encouraging technologies that make jobs more family-friendly, and reforming laws that end up biasing the labour market against women. Gender equity is worthwhile not just because it will boost productivity but also because — as Canadian prime minister Justin Trudeau might say — it’s 2019.
As economist Sendhil Mullainathan puts it, “The safest prediction is that reality will outstrip our expectations. So, let us craft our policies not just for what we expect but for what will surely surprise us.” The task is to shape a future that looks more like Star Trek than Terminator.
Uncertainty need not be scary. The story of human history — particularly in recent centuries — is of how we have employed our shared ingenuity to improve lives. Longevity has risen. Whole diseases have been eliminated. The typical job is more fulfilling and less painful. Entertainment is more abundant, and much of it is of higher quality (try spending a week watching television from a generation ago). Food standards have risen, and cars are safer than ever. Life is far from perfect, but there is a good deal to celebrate. •