I'm a pretty technical and informed person when it comes to information technology. I'm not an engineer (any more), but compared to most business people working in the industry, I know a fair bit. But I had no idea that there is a massive revolution going on in the computing industry. The fundamental paradigm that has powered the industry for the last 20 years has changed. It amazes me I didn't know this. And I'm guessing that many techno-savvy people don't either.

The Past

Since the mid-'80s, the computer industry has been built on the fact that the speed of computers doubles roughly every 18 months. This is widely described as "Moore's Law," but Moore's Law is actually slightly different: it states that the density of transistors on a chip doubles every 18 months or so. For nearly 20 years, microprocessor companies could squeeze more transistors into their chips and run them twice as fast - doubling the frequency the chips run at. That doubling of frequency translated directly into a doubling of performance. (Frequency roughly tracks how many instructions the chip can execute per second.) Thus, year after year, the speed of a single processor grew by about 52%. This predictable increase in processing power has driven the growth of the computer industry. (Think about how awesome your iPhone is. That wasn't possible 4 years ago.)
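
To put rough numbers on that compounding (my own back-of-the-envelope, not figures from the chipmakers): a doubling every 18 months works out to roughly 59% a year, and the widely quoted 52%-a-year figure - which, as I understand it, comes from measured single-processor benchmarks - corresponds to a doubling time closer to 20 months. Either way, a decade of that kind of growth multiplies performance by many tens of times.

    import math

    # Back-of-the-envelope conversions between a doubling period and an
    # annual growth rate. Illustrative arithmetic only, not measured data.

    def annual_growth(doubling_months):
        """Annual growth rate implied by doubling every `doubling_months` months."""
        return 2 ** (12.0 / doubling_months) - 1

    def months_to_double(annual_rate):
        """Doubling period (in months) implied by a given annual growth rate."""
        return 12.0 * math.log(2) / math.log(1 + annual_rate)

    print(annual_growth(18))        # ~0.59 -> doubling every 18 months is ~59%/year
    print(months_to_double(0.52))   # ~19.9 -> 52%/year doubles roughly every 20 months
    print((1 + 0.52) ** 10)         # ~66   -> a decade of 52%/year is about a 66x speedup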

The Change

People have been predicting the demise of Moore's Law for years, but even with existing technology projections, Moore's Law seems to have at least a few cycles left. That said, the familiar implication of Moore's Law - that single-processor performance doubles every 18 months - has already broken down.

This is a huge, huge point, and it suggests a fundamental shift in the computer industry. But before I discuss the implications, let me explain what is going on.

For 20 years, Intel kept doubling the frequency of its chips, thereby doubling performance. But in 2004, Intel hit "the power wall." (Source) The power consumption of a chip is directly related to the frequency it runs at, so every time you double the frequency (other things being equal), you double the power required to run the chip. This gets costly in terms of straight electricity, but you also have to spend a lot on air conditioning to carry the excess heat away. The further you push the chips, the more these power costs swamp the benefit of the extra performance. And on top of that, it becomes physically difficult to cool the chips even if you're willing to pay for it. So single-processor performance stalled.
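
The textbook relationship behind this is the dynamic-power equation: power scales with the chip's capacitance, the square of the supply voltage, and the clock frequency. For years, shrinking transistors let designers lower the voltage while raising the frequency, which kept power in check; once voltage scaling stalled, every extra bit of frequency cost power roughly linearly - and in practice worse, because hitting higher frequencies usually means nudging the voltage up too. A rough sketch (the numbers are invented purely for illustration):

    # Simplified CMOS dynamic-power model: P ~ activity * C * V^2 * f.
    # All constants below are made up for illustration.

    def dynamic_power_watts(capacitance_farads, volts, hertz, activity=0.5):
        """Classic dynamic-power estimate: P = a * C * V^2 * f."""
        return activity * capacitance_farads * volts ** 2 * hertz

    base    = dynamic_power_watts(1e-9, 1.2, 2e9)  # a hypothetical 2 GHz part
    same_v  = dynamic_power_watts(1e-9, 1.2, 4e9)  # double the frequency, same voltage
    extra_v = dynamic_power_watts(1e-9, 1.4, 4e9)  # double the frequency, bump the voltage

    print(same_v / base)   # 2.0  -> double the frequency, double the power
    print(extra_v / base)  # ~2.7 -> worse once higher frequency demands higher voltage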

But Moore's Law keeps trucking, so what can you do with the extra transistors? Well, instead of building a processor that is twice as fast, just build two of them. When transistor density doubles again, build 4. Then 8. And so on. You can already see this taking place in dual-core and now quad-core processors.

The Hidden Revolution

Anyone who's been paying attention to computers knows that we've moved to a multi-core world. And maybe in the back of our minds we've wondered about the implications. Is 2 really twice as good as 1? Well, I can imagine one core running my antivirus scan while the other lets me surf the net and do email. But what happens when we get to 4? And 8? 16? ... 128 cores? What in the world would my laptop do with 128 cores? And would it really be 128 times as fast as 1 core?

Here's the hidden revolution: the answer is, basically, "no." Or, at least, a 128-core computer is not going to be 128 times as fast as a single-core computer without some serious changes in the industry. That's the revolution.

In the past, if you wrote a program and never touched it, then seven generations later (about 7 x 18 months = 10.5 years, and 2^7 = 128) your program would run 128 times faster. (This ignores I/O, which is obviously a crucial caveat.) With many-core, to take advantage of the 128 cores you would have to entirely rewrite your code to do parallel processing - and that assumes it is even possible to parallelize your code. If your program is fundamentally serial, then seven microprocessor generations down the road it might not run any faster.
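
To make the rewrite concrete, here's a toy sketch (the workload and numbers are made up) of what that shift looks like in Python: the serial version uses one core no matter how many the machine has, while the parallel version spreads the same work across every available core - which only helps because the pieces of work are independent.

    # Toy serial-vs-parallel rewrite; the workload is invented for illustration.
    from multiprocessing import Pool, cpu_count

    def crunch(n):
        """Stand-in for a CPU-heavy, independent piece of work."""
        total = 0
        for i in range(n):
            total += i * i
        return total

    def run_serial(jobs):
        # One core does everything, no matter how many cores the machine has.
        return [crunch(n) for n in jobs]

    def run_parallel(jobs):
        # The same work, spread across all available cores.
        with Pool(processes=cpu_count()) as pool:
            return pool.map(crunch, jobs)

    if __name__ == "__main__":
        jobs = [2000000] * 32
        assert run_serial(jobs) == run_parallel(jobs)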

What This Means

There is a fundamental shift that needs to take place in the computing industry, from serial programming to parallel programming. This is a very non-trivial change. Some companies, like Intel, are betting that it is too big a change, and their goal is to hide the complexity and paradigm shift from users beneath smarter processors, compilers, and operating systems. My gut feeling, admittedly knowing little, is that this is a short-term solution at best. If the future of computing is parallel, then the future of programming will be parallel too, and all new students coming out of college or grad school will be well versed in parallel programming models.

Another implication: for many applications, speed may simply not improve very much. It is not obvious that all programs (or algorithms) are parallelizable. (In fact, many probably aren't.) So if you have a set of tasks that require serial processing, the doubling of performance every 18 months will no longer apply.
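
There's a classic formula for this cap, Amdahl's Law: if some fraction of a program is inherently serial, that fraction limits the overall speedup no matter how many cores you add. A quick illustration (the parallel fractions below are just examples):

    # Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
    # the work that can be parallelized and n is the number of cores.

    def amdahl_speedup(parallel_fraction, cores):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    for p in (0.50, 0.90, 0.99):
        print(p, round(amdahl_speedup(p, 128), 1))
    # p = 0.50 ->  ~2.0x on 128 cores
    # p = 0.90 ->  ~9.3x
    # p = 0.99 -> ~56.4x - still nowhere near 128x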

Conclusion

This is just a taste of the issues involved. (And I apologize for any mistakes or simplifications I've made - my knowledge is days old.) But the implications are huge.
