Wednesday, May 26, 2010

Planning for Moore's Law

It's important to realize that whereas Moore's law was originally an empirical description of the emergent behavior of the semiconductor industry, it has long since become a consciously planned effort.  The entire semiconductor industry coordinates its efforts to move to ever smaller "feature size" (the size of the individual logic elements on a chip).  The coordination is accomplished via the International Technology Roadmap for Semiconductors, in which international teams and working groups collaborate to lay out a fifteen-year roadmap for the industry, which gets revised or updated every year.

The most recent roadmap is the 2009 version, and I spent a little time with it this morning.  I would guess that most scientists or technologists from outside the semiconductor industry would have a reaction similar to mine: this thing is a product of some alien civilization that has figured out how to make central planning work.  The idea of all the major competing players in an industry collaborating to figure out all the R&D activities required to accomplish the next 15 years of progress is just extraordinary.  But there it is.  It's full of detailed schedules for when the industry will introduce different feature sizes and what challenges they have to overcome to do that.

Here's the key table from the executive summary:

[Table not reproduced: the ITRS projections, year by year, of feature sizes (including printed gate length) and maximum wafer size.]
The general idea is that the size of things in nanometers keeps getting smaller.  If we take the "Printed Gate Length" going from 47nm to 8nm in 15 years, that means that a given unit of area will hold about 35 times as many logic elements.  On top of that, the maximum wafer size (the wafer being the unit of manufacture in silicon fabrication facilities) is planned to increase from the current 300mm to 450mm, which is another factor of 2.25 in area, for an overall increase in logic per wafer of about a factor of 75 over 15 years.  That corresponds to a doubling time of about two and a half years.  So the industry is apparently slowing down a little, but still has every intention of keeping going for a long time.
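
To make the arithmetic explicit, here it is as a quick back-of-envelope sketch (in Python; the 47nm, 8nm, 300mm, and 450mm figures come from the roadmap table above, the rest is just algebra):

    import math

    # Logic density grows as the square of the linear shrink:
    # printed gate length goes from 47nm to 8nm over the 15-year roadmap.
    density_gain = (47 / 8) ** 2            # ~34.5x more logic per unit area

    # Wafer area grows as the square of the diameter: 300mm -> 450mm.
    wafer_gain = (450 / 300) ** 2           # 2.25x more area per wafer

    total_gain = density_gain * wafer_gain  # ~78x more logic per wafer

    # Doubling time implied by spreading that gain over 15 years.
    doubling_time = 15 / math.log2(total_gain)  # ~2.4 years

    print(f"{density_gain:.0f}x density, {total_gain:.0f}x per wafer, "
          f"doubling every {doubling_time:.1f} years")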

I don't remotely have the competence to assess the scale of the technical challenges the industry faces in accomplishing this.  Still, when a large, extremely competent, and well-funded industry has a track record of improving performance at a steady rate over forty years, and has extremely detailed plans for how to keep doing the same thing for the next 15 years, I think one would be a little foolish to bet against them.

So to put it in round numbers, over the next two decades there is likely to be something like a hundredfold increase in the amount of available computation capacity.  Therefore, anything that can be done with computation will get significantly better, cheaper, and lighter.  Robots and machines will become faster and more skillful.  Mobile phones/computers will be lighter and have better screens.  Cameras will be smaller and much more pervasive, and the software for processing their images will be much more powerful.  Games and movies will have much better and more realistic animation.  Video-conferencing will be much higher quality.

In short, as the physical environment continues to slowly degrade, the virtual environment is likely to get much better.  I imagine that will lead people to spend more and more time in the virtual environment.

9 comments:

  1. Great post - thanks!

    But if the underlying physical environment degrades, would we have the resources to run a virtual environment at all?

  2. Madhu:

    Well, my general rap on that is that as long as the degradation is slowish, then yes, we will. Or at least we could, assuming we don't respond really stupidly (admittedly, that last is not a rock-solid assumption).

    Climate change is likely, as far as anyone knows at present, to proceed relatively slowly.

    I also believe that the post-peak decline in oil supply is likely to be relatively slow, certainly in the beginning. So society has to get a few percent more efficient each year, and that's doable.

    So, to boil the scenario down to a single stereotype, the defining image of the problems of the early decades of the twenty-first century might be an unemployable young guy conserving oil by living in his parents' basement playing video games all day.

  3. May this progress, including the increases in video-conferencing capabilities, rapidly percolate to the many workplaces where people still must drive to work each day just to (mostly) sit at a computer terminal.

  4. Speaking of alien civilizations, Moore's Law may explain where all the advanced alien civilizations are, in response to Fermi's famous question.

    If we encounter an alien civilization that is only a hundred years ahead of us, the disparity will be much worse than the one between Cortez and Montezuma. A hundred years' worth of Moore's Law (conservatively, fifty doublings at two years each, a factor of 2^50, which is about a quadrillion) will give the aliens a computer technology a quadrillion times more powerful than ours. Such a technology would be, as Arthur C. Clarke said, "indistinguishable from magic."

    Let's continue Moore's Law for a thousand years, or ten thousand, or a million, or...

    Well, at some point, it becomes indistinguishable from God. They won't be using silicon by then, of course, but their computerized advanced intelligence will have helped them unlock other approaches (quantum-level computing, computing in other dimensions, or things that we are literally incapable of comprehending).

    This is the sort of thing that can happen once you take a bite out of the fruit of the tree of knowledge.

  5. MisterMoose:

    Yeah, that's more or less exactly what Kurzweil thinks (he has a chapter of The Singularity is Near titled something like "Singularity as Transcendence").

    I am reminded of Isaiah 14:12-16:

    12 How you have fallen from heaven,
    O morning star, son of the dawn!
    You have been cast down to the earth,
    you who once laid low the nations!

    13 You said in your heart,
    "I will ascend to heaven;
    I will raise my throne
    above the stars of God;
    I will sit enthroned on the mount of assembly,
    on the utmost heights of the sacred mountain.

    14 I will ascend above the tops of the clouds;
    I will make myself like the Most High."

    15 But you are brought down to the grave,
    to the depths of the pit.

    16 Those who see you stare at you,
    they ponder your fate:
    "Is this the man who shook the earth
    and made kingdoms tremble,

  6. While the number of gates is anticipated to scale significantly, the growth in clock speed is less promising. From the system drivers document: "Modern MPU platforms have stabilized maximum power dissipation at approximately 120W due to package cost, reliability, and cooling cost issues. ... the updated MPU clock frequency model ... is projected to increase by a factor of at most 1.25× per technology generation, despite aggressive development and deployment of low-power design techniques".
    This can be seen in the plateau of CPU clock speeds since 2007.

    Smaller gates mean more processing, not necessarily faster processing. Given the challenges of taking full advantage of multiple cores and parallel processing paths, the speed increases that have enabled significant new applications will become harder to come by (lower processing return on advancement :) ).
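
    A quick sketch of that arithmetic (in Python; assuming a new technology generation every two years, with density doubling per generation and clock speed growing by the roadmap's at-most-1.25× per generation):

        # Transistor budget vs. clock speed over five two-year generations.
        transistors, clock = 1.0, 1.0
        for _ in range(5):
            transistors *= 2.0  # density roughly doubles each generation
            clock *= 1.25       # clock grows by at most 1.25x each generation
        print(transistors, round(clock, 2))  # 32.0 vs 3.05

    Over a decade, then, the transistor budget grows roughly ten times as much as the clock, and the difference has to be made up (if at all) by parallelism.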

  7. Bert:

    Absolutely - those of us doing software are acutely aware that single-threaded applications stopped getting much faster a few years back, but we keep getting more and more cores to play with, even on very low-end platforms. Personally, I'm designing stuff that runs on numerous cores now, and figuring I have to allow for hundreds of cores in the lifetime of the architecture.

    In a way, though, it reinforces my overall point, as it will create an additional income gradient within the software industry: if you can handle multi-threaded code/design well, you'll be worth more than if you can't.
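
    To give a flavor of what designing for numerous cores looks like, here's a minimal sketch of dividing work across them (in Python; process_chunk is a made-up stand-in for whatever the real per-chunk work is):

        from concurrent.futures import ProcessPoolExecutor
        import os

        def process_chunk(chunk):
            # Hypothetical stand-in for the real per-chunk computation.
            return sum(x * x for x in chunk)

        if __name__ == "__main__":
            data = list(range(1_000_000))
            cores = os.cpu_count() or 1
            size = -(-len(data) // cores)  # ceiling division: one chunk per core
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            # The pool runs the chunks in parallel, one worker per core.
            with ProcessPoolExecutor(max_workers=cores) as pool:
                total = sum(pool.map(process_chunk, chunks))
            print(total)

    The attraction of structuring things this way is that the same code keeps getting faster as core counts go up, which is the direction the hardware is heading.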

  8. @Stuart: There's no Moore's Law for software, but the changes are happening for it, too.

    Both MS and Apple have made major strides in supplying parallelization tools to their software developers (internal and 3rd-party). Intel, AMD, the graphics chip companies, ARM, … all of the processor companies are exploiting the lower power per transistor that comes with size reduction to make multi-processor designs, and the software types are making good strides in exploiting them.

    My laptop could time-slice among keeping KCSM.Org playing, my browser, and monitors for new mail, new news, … all running. Or, as it's really doing, it can give each function one of the four threads available in my processor. There ARE some older programs (Excel) and operating systems (much older Win or Mac) that can't divvy up their work, where you stare at the hourglass.

    But the major bottlenecks are the low-hanging fruit. Adobe is still behind the curve on the Macs, but graphics programs generally use multiple processors very efficiently; monster stat problems are no longer monster; even the huge branch & cut optimizations I do on my portfolios can have their algorithms tweaked to divide the search effort.

    The future still looks pretty bright.

  9. Stuart - what does Moore's Law imply for the Gini coefficient?
