
Why Can't Things Get Better Faster (or Slower)?

The surprising regularity of technological progress

My latest Mind and Matter column in the Wall Street Journal:


In 1965, the computer expert Gordon Moore published his famous little graph showing that the number of "components per integrated function" on a silicon chip, a measure of computing power, seemed to be doubling every year and a half. He had only five data points, but Moore's Law has since settled into an almost iron rule of innovation. Why is it so regular?

[Figure: Moore's Law, 1965]

The technology guru Ray Kurzweil recently pointed out that a version of Moore's Law has been true since the early years of the 20th century. That is to say, before the integrated circuit even existed, the four previous technologies (electromechanical, relay, vacuum tube and transistor) had all improved along the very same trajectory: the computing power that $1,000 buys has doubled every two years for a century.

A similar graph can be plotted for the number of radio communications fitting into the electromagnetic spectrum. Ever since Guglielmo Marconi's first transmission in 1895, the number of possible simultaneous wireless communications has doubled every 30 months. (This is now known as Cooper's Law, after the inventor Martin Cooper, who demonstrated the first hand-held cellphone.)

Both graphs are roughly exponential, meaning that they curve rapidly upward (or appear as straight lines if plotted on a logarithmic scale). Why don't they move in lurches followed by stagnations? And why can't we cheat these laws by jumping ahead?
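
The arithmetic behind these curves is simple compound doubling. Here is a minimal sketch (in Python; the doubling periods are the ones quoted above, everything else is illustrative) of how much growth each law implies and why the curves straighten out on a logarithmic scale:

    # A minimal sketch of the arithmetic behind these laws: steady doubling
    # over a fixed period is exponential growth, which plots as a straight
    # line on a logarithmic scale. The doubling periods are the ones quoted
    # above (two years for Moore/Kurzweil, 30 months for Cooper).
    import math

    def growth_factor(years, doubling_period_years):
        """Total improvement after `years` of doubling every `doubling_period_years`."""
        return 2 ** (years / doubling_period_years)

    # A century of the computing power $1,000 buys, doubling every two years:
    moore_century = growth_factor(100, 2.0)        # 2**50, about 1.1e15-fold

    # Simultaneous wireless communications from Marconi (1895) to 2014,
    # doubling every 30 months (2.5 years):
    cooper_span = growth_factor(2014 - 1895, 2.5)  # roughly 2.1e14-fold

    print(f"Moore/Kurzweil, 100 years: {moore_century:.3g}x")
    print(f"Cooper, 1895-2014:         {cooper_span:.3g}x")

    # On a log scale the curve is a straight line: each year adds a constant
    # 1/doubling_period to log2(capacity), so the slope never changes.
    for year in range(0, 11, 2):
        print(year, round(math.log2(growth_factor(year, 2.0)), 2))

Run over a century, the two-year doubling alone multiplies the computing power a fixed $1,000 buys roughly a million billion times over.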

What's remarkable about the extension of these regularities back in time is that they now appear to have marched imperturbably through the upheavals of the 20th century without breaking step. How is it possible that the Great Depression did not slow down technological progress? Why didn't the great infusion of technology spending during World War II accelerate it?

There's no certain answer. The inevitable, inexorable and incremental march of technological improvement remains baffling, as does the steady march of world economic growth at 2% to 5% a year ever since the 1940s, far steadier than the progress of any individual country.

A glimmer of explanation can be found in Reed's Law, named after the computer scientist David P. Reed. It states that the utility of a large network increases exponentially with the size of the network. That is to say, it grows faster than the number of participants, and faster even than the number of possible pairs of participants (the quantity that Metcalfe's Law tracks).
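
To see the difference in scaling, here is a rough illustration (a sketch using the textbook forms of these laws, not anything from Reed's or Metcalfe's own papers) of how a network's possible connections grow with the number of participants n:

    # A rough comparison of network-value scalings for n participants:
    #   Sarnoff-style:  value ~ n             (participants reached)
    #   Metcalfe's Law: value ~ n*(n-1)/2     (possible pairs)
    #   Reed's Law:     value ~ 2**n - n - 1  (possible subgroups of two or more)
    # Illustrative textbook forms only; exact constants vary by source.

    def sarnoff(n):
        return n

    def metcalfe(n):
        return n * (n - 1) // 2

    def reed(n):
        return 2 ** n - n - 1

    for n in (10, 20, 40, 80):
        print(f"n={n:>2}  participants={sarnoff(n):>2}  "
              f"pairs={metcalfe(n):>4}  subgroups={reed(n):.3g}")

Even at a few dozen participants, the number of possible subgroups dwarfs the number of possible pairs, which is the sense in which Reed-style utility grows exponentially.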

The Silicon Valley investor Steve Jurvetson thinks this may explain the exponential shape of Moore's and Cooper's laws, so long as you substitute "ideas" for participants. In other words, technology is driving its own progress by steadily expanding its own capacity to bring ideas together. The implication is that, short of arresting half the planet's people, we could not stop the march of technology even if we wanted to.

This is mainly reassuring, because bad policies can't prevent improvement, but also depressing, because good policies can't accelerate improvement. The policies and breakthroughs to which we attach such importance are all but irrelevant on the global scale, though they can, of course, result in a country missing out on the benefits or losing its natural "share" of technology or growth to another country (ask the North Koreans).

A few years ago Mr. Jurvetson added to this menagerie of laws by coining Rose's Law of quantum computing, after a Canadian executive in the field. (Such computers focus on probabilities and exploit the idea that subatomic particles can exist in multiple states at the same time.) The law's prediction of an annual doubling in quantum computing capacity, at speeds too scary to contemplate, has come to pass. If quantum computers are actually doing anything practical in their incomprehensible brains, they will soon make their conventional cousins look primitive.
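
Jurvetson's Rose's Law charts track the number of qubits roughly doubling each year; since a register of q qubits spans a state space of 2 to the power q, an annual doubling of qubits compounds into growth far steeper than Moore's. A back-of-envelope sketch of that compounding (the starting qubit count here is an arbitrary assumption for illustration, not a figure from the column):

    # Back-of-envelope sketch of why Rose's Law compounds so fast, assuming it
    # tracks qubit count (as Jurvetson's charts do) and that the state space of
    # q qubits scales as 2**q. The starting point (4 qubits in "year 0") is an
    # arbitrary illustrative assumption, not a figure from the column.
    qubits = 4
    for year in range(6):
        state_space = 2 ** qubits    # dimensions of the quantum state space
        print(f"year {year}: {qubits:>3} qubits -> state space of 2**{qubits} = {state_space:.3g}")
        qubits *= 2                  # Rose's Law: qubit count doubles each year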