Tuesday, July 13, 2010

Rogoff as Singularitarian

A reader emailed me this fascinating column in the Guardian by the well-known economist Kenneth Rogoff:
What will be the big driver of global growth in the next 10 years? Here's betting that this decade will be one in which artificial intelligence hits escape velocity and starts to have an economic impact on a par with the emergence of India and China.

Admittedly, my perspective is heavily coloured by events in the world of chess, a game I once played at a professional level and still follow from a distance. Though special, computer chess nevertheless offers both a window into silicon evolution and a barometer of how people might adapt to it.
He provides some fascinating color on how things have evolved in the chess world after computers became better than people:
A little bit of history might help. In 1996 and 1997, world chess champion Garry Kasparov played a pair of matches against an IBM computer named Deep Blue. At the time, Kasparov dominated world chess, in the same way that Tiger Woods – at least until recently – has dominated golf. In the 1996 match, Deep Blue stunned the champion by beating him in the first game. But Kasparov quickly adjusted to exploit the computer's weakness in long-term strategic planning, where his judgment and intuition seemed to trump the computer's mechanical counting.

Unfortunately, the supremely confident Kasparov did not take Deep Blue seriously enough in the 1997 rematch. Deep Blue shocked the champion, winning the match 3.5 to 2.5. Many commentators have labelled Deep Blue's triumph one of the most important events of the 20th century.
With ever more powerful processors, silicon chess players developed the ability to calculate so far ahead that the distinction between short-term tactical calculations and long-term strategic planning became blurred. At the same time, computer programs began to exploit huge databases of games between grandmasters (the highest title in chess), using results from the human games to extrapolate what moves have the highest chances of success. Soon, it became clear that even the best human chess players would have little chance to do better than an occasional draw.

Today, chess programs have become so good that even grandmasters sometimes struggle to understand the logic behind some of their moves. In chess magazines, one often sees comments from top players such as "My silicon friend says I should have moved my king instead of my queen, but I still think I played the best 'human' move."

It gets worse. Many commercially available computer programs can be set to mimic the styles of top grandmasters to an extent that is almost uncanny. Indeed, chess programs now come very close to passing the late British mathematician Alan Turing's ultimate test of artificial intelligence: can a human conversing with the machine tell it is not human?

I sure can't. Ironically, as computer-aided cheating increasingly pervades chess tournaments (with accusations reaching the highest levels), the main detection device requires using another computer. Only a machine can consistently tell what another computer would do in a given position. Perhaps if Turing were alive today, he would define artificial intelligence as the inability of a computer to tell whether another machine is human!
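The screening idea described in the quote above — having a machine check how often a player's moves agree with an engine's preferred move — can be sketched in a few lines. Everything here is illustrative: the position identifiers, moves, and the flagging threshold are made up, and real tournament screening uses far more careful statistics (weighting by position difficulty, comparing against multiple engines, and so on).

```python
# Hypothetical sketch of engine-match cheat screening: compare a player's
# moves against an engine's top choice in each position and flag
# suspiciously high agreement. All data below is illustrative.

def engine_match_rate(player_moves, engine_choices):
    """Fraction of positions where the player's move equals the engine's top choice."""
    if not player_moves:
        return 0.0
    matches = sum(1 for pos, move in player_moves
                  if engine_choices.get(pos) == move)
    return matches / len(player_moves)

# Toy data: (position_id, move_played) pairs, and the engine's top move
# for each position.
game = [("p1", "Nf3"), ("p2", "e5"), ("p3", "Qh5"), ("p4", "O-O")]
engine = {"p1": "Nf3", "p2": "e5", "p3": "d4", "p4": "O-O"}

rate = engine_match_rate(game, engine)
print(f"engine-match rate: {rate:.0%}")  # 3 of 4 moves match the engine
if rate > 0.9:  # threshold is illustrative, not a real standard
    print("flag for review")
```

The point of the sketch is Rogoff's own observation: the reference standard here is another computer, since only an engine can say consistently what an engine would play.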
Ultimately, he's an optimist about what all this means:
In 50 years, computers might be doing everything from driving taxis to performing routine surgery. Sooner than that, artificial intelligence will transform higher learning, potentially making a world-class university education broadly affordable even in poor developing countries. And, of course, there are more mundane but crucial uses of artificial intelligence everywhere, from managing the electronics and lighting in our homes to running "smart grids" for water and electricity, helping monitor these and other systems to reduce waste.

In short, I do not share the view of many that, after the internet and the personal computer, it will be a long wait until the next paradigm-shifting innovation. Artificial intelligence will provide the boost that keeps the teens rolling. So, despite a rough start from the financial crisis (which will still slow global growth this year and next), there is no reason why the new decade has to be an economic flop. Barring another round of deep financial crises, it won't be – as long as politicians do not stand in the way of the new paradigm of trade, technology, and artificial intelligence.
I'm not so sure about this, myself. During an extended global deleveraging episode, one of the main economic symptoms is likely to be lack of aggregate demand. In that situation, technologies that improve productivity and cause layoffs may not be much help (even if they would be growth producing during more normal times).

Admin note: Blogging is likely to be a bit sketchy this week, as I move on Thursday.


Burk Braun said...

Hi, Stuart-

My read is that Rogoff is another, perhaps among the most influential, deficit terrorist trying to deflect our attention from his terrible advice/models in the here-and-now.

The Keynesians have eaten his lunch in the current crisis, intellectually speaking, so what is left but to scare everyone about the imminent doom of deficits, root for countries like Britain, Japan, and the US to shoot themselves in the foot with austerity ... and extend faith-based hope in a paean to private enterprise, today in the form of AI.

America needs jobs, and America has a lot of work that needs doing. We need to put the two together directly.

Burk Braun said...

The macroeconomic case is laid out in this blog in detail, courtesy of Goldman Sachs, ironically enough.

Per said...

If artificial intelligence is defined as mimicking human logic and human conversation, then there is no problem achieving that. But if artificial intelligence is also defined as mimicking human feelings, intuition, and big-picture thinking, then there isn't even an embryo of such artificial intelligence today.

Mike Aucott said...

Good luck with your move Stuart! The Ithaca area is a great part of the world.

The singularity idea is still too deep for me, and I can't help thinking, as the previous commenter states, that in some areas of life AI has barely scratched the surface. And the idea that there's a new golden age waiting when a threshold in processing capability is exceeded seems like wishful thinking. It's likely that the evolution of technology is subject to a Darwinian sort of natural selection just like the evolution of other systems. Therefore there seems no reason why a world dominated by AI machines would be any more just or rational than the present world. But, we can hope and try to make it so.

Michael Cain said...

I'm not so sure about this, myself.

I'm even less sure than you are. It appears to me that the driving issue in US political economics for at least the next decade (or longer) will be jobs. Rogoff's AIs would be enormously disruptive over time, as they displace human workers. Employment is currently the primary method we use to distribute the goods and services we produce. The problem has been a staple of science fiction for decades, but we may have to confront it for real: how to do that distribution when employment is available to only a limited subset of the population.

JoulesBurn said...

A different sort of game:

A video game called FoldIt, created by University of Washington scientists, shows that people can be more effective than supercomputers -- in some cases -- when it comes to the difficult scientific challenge of folding virtual protein molecules for maximum internal energy.

(probably a reporting mistake -- the goal is to minimize internal energy)