
We overestimate the impact of innovation in the short term but…

My Times column on Amara’s Law:

 

Alongside a great many foolish things that have been said about the future, only one really clever thing stands out. It was a “law” coined by Roy Amara, a Stanford University computer scientist and long-time head of the Institute for the Future. He said that we tend to overestimate the impact of a new technology in the short run, but we underestimate it in the long run. Quite when he said it and in what context is not clear, but colleagues suggest he had been articulating it since some time in the 1960s or 1970s.

Along comes an invention or a discovery and soon we are wildly excited about the imminent possibilities that it opens up for flying to the stars or tuning our children’s piano-playing genes. Then about ten years go by and nothing much seems to happen. Soon the “whatever happened to . . .” cynics are starting to say the whole thing was hype and we’ve been duped. Which turns out to be just the inflexion point at which the technology becomes ubiquitous and disruptive.

Amara’s Law implies that between the early disappointment and the later underestimate there must be a moment when we get it about right; I reckon these days it is 15 years down the line. We expect too much of an innovation in the first ten years and too little in the first 20, but get it about right at 15. Think about the internet. In his 1984 novel Neuromancer, William Gibson foresaw a world of “cyberspace” in which every computer in the world was linked, with profound effects on society. This looked a bit overrated 15 years later, when the dotcom bubble burst. The Nobel prize-winning economist Paul Krugman wrote in 1998 that “by 2005 or so, it will become clear that the internet’s impact on the economy has been no greater than the fax machine’s”. He went on: “As the rate of technological change in computing slows, the number of jobs for IT specialists will decelerate, then actually turn down; ten years from now, the phrase information economy will sound silly.”

Amara’s Law has a habit of trapping people into such foolhardy remarks after the initial hype subsides, but just before the second wave.

Much the same cycle happened with the human genome project, which released a first draft sequence in 2000, with simultaneous press conferences at the White House and 10 Downing Street. “It is now conceivable that our children’s children will know the term cancer only as a constellation of stars,” said Bill Clinton. It was “a breakthrough that takes humankind across a frontier and into a new era”, said Tony Blair. I cringed at some of that. Sure enough, ten years later, as genomics delivered relatively little of medical use, there were plenty of critics saying it was all hype. However, as gene therapy and gene editing start to tackle cancer, chronic diseases and even ageing, the tide is turning.

Going farther back, the development of electricity in the 19th century seemed to promise so much: light bulbs, dynamos, turbines and motors were all perfected by 1885, but it was not until the early decades of the 20th century that electricity began to transform not just lighting but factories as well.

A century earlier, the first steam locomotive was developed by Richard Trevithick in 1802. The poet, physician and inventor Erasmus Darwin had written a stanza full of hyperbole in 1791: “Soon shall thy arm, unconquer’d steam! Afar/Drag the slow barge, or drive the rapid car;/Or on wide-waving wings expanded bear/The flying-chariot through the fields of air.” Yet, by the time George Stephenson started assembling his Blücher engine at Killingworth colliery in 1814, the technology was generally considered a busted flush: neat idea, not really practical. Interest in steam locomotion had waned. Then Stephenson’s locomotives unleashed us all into a world of unprecedented speed.

The Amara hype cycle is unfolding today with respect to machine learning. Artificial intelligence has been heralded as imminent for a couple of decades. The neural networks and parallel processors that are today enabling computers to learn from deep draughts of data have been around in rudimentary form since the 1960s. However, those who rushed into the field expecting to found great enterprises generally ended up disappointed as the “AI winter” closed down their hopes; they slunk back into philosophy departments of universities.

Today that looks like changing. Thanks to the graphics processing unit, a kind of chip whose use for training neural networks was pioneered by Andrew Ng, together with new kinds of algorithms perfected by Geoffrey Hinton and a new cornucopia of data, deep-learning programs seem to be on the brink of something special. The success of London-based DeepMind’s AlphaGo, a program that learned to win the immensely complicated game of Go largely by playing against itself, and humiliated world champions in front of huge television audiences, suggests that something big is afoot.

Self-driving cars are in the early stages of an Amara hype cycle. I am repeatedly being told that lorry drivers and Uber cabbies will soon all be redundant. I would almost guarantee that ten years from now there will be a rash of reports about how the reality has failed to match the forecasts, that there are more jobs for drivers than ever and that the self-driving car may be a lot farther away than we thought. I will venture that ten years after that such pessimism will look foolish as autonomous vehicles suddenly start popping up everywhere.

Forecasting technological change is almost impossibly hard and nobody — yes, nobody — is an expert at it. The only sensible course is to be wary of the initial hype but wary too of the later scepticism.

 

Footnote:

 

Given that this concept is often misattributed, eg to Arthur C Clarke, it may be useful to note what Paul Saffo told me about the origin of Roy Amara’s observation:

 

Roy was my boss for well over a decade beginning in 1984, and a close friend until he passed away in the early 2000s. He first articulated the idea long before I met him, so I expect the first instance is buried in a report from some time in the 1960s or maybe the 1970s…I am certain I have it written down in several places in my research journals from the mid-1980s as well, but that of course doesn’t qualify as publication.

Just as Gordon Moore didn’t name Moore’s law, Roy is not the person who named the observed expectation/diffusion lag “Amara’s Law.” He was a restrained and modest man and always avoided tooting his own horn. He was always a bit uncomfortable having the phenomenon named as “his” law, which for me is a sign of true intellectual integrity. There is nothing more tedious than people naming laws after themselves.

Also like Gordon Moore, Roy considered it more an observation (and thus a heuristic) than an actual law. He didn’t feel any ownership of it, as he saw it as an obvious observed phenomenon. Moreover, the observation pops up in vaguer forms all the way back to the early 1900s. In terms of modern history, the idea is a fixture in diffusion studies going back at least as far as the first edition of Ev Rogers’s “Diffusion of Innovations” in the early 1960s. Ev taught at Stanford in the mid-70s and then was at USC in the ’80s and ’90s. He of course is the person who coined the brilliant but much-abused term “early adopters” and thus he wrote and spoke at length about the expectation/diffusion lag in his own way.

Roy’s unique contribution is that he articulated it crisply and clearly and explicitly tied the phenomenon to the domain of business forecasting. 

 

By Matt Ridley | Tagged:  rational-optimist  the-times