Multi-core CPUs: the end of Moore’s law

While Moore’s law may still be technically valid, for all intents and purposes it is no longer true. Yes, the number of transistors that can be inexpensively placed on a chip continues to double every two years or so (as Moore postulated back in 1965), but the performance gains this traditionally delivered ceased years ago. Instead of increasing clock speeds, which allow software to automatically run faster, chip manufacturers have been increasing the number of cores, or CPUs, they place on a single chip. For years, most home PCs have shipped with at least two, and recently even four, processors.

Unfortunately, all these extra CPUs haven’t done much to speed up computers, or offered much in the way of appreciable benefits to users. Nor are they likely to in the foreseeable future. The ugly truth about multiple cores is that software has to be written to take advantage of all the additional processors. Instead of doing everything in a linear fashion (with one step following the completion of another), programmers need to ensure their applications are designed to do many tasks simultaneously. This has proven to be a fiendishly difficult proposition.

It is a dilemma similar to large engineering projects, where simply throwing more people at the problem doesn’t get the work done faster. To utilize the extra personnel, you have to create more bureaucracy and process to manage and coordinate all the workers, and make sure that everyone arrives at the same place at the appropriate time. In a computer game, for example, you can’t just put every character in a separate process running on a different CPU. What if one processor is a little speedier than another, so that one game character moves faster than another? Somehow the developer has to ensure that all the elements of the game stay synchronized, even if they are running on different threads, across multiple cores.
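To make the game example concrete, here is a minimal sketch, in Python with invented names (the article itself contains no code), of one common way to keep per-character threads in lockstep: a barrier forces every thread to finish the current frame before any thread starts the next one, so a thread on a faster core cannot race ahead.

```python
import threading

# Hypothetical sketch: three "game characters", each updated on its own thread.
NUM_CHARACTERS = 3
FRAMES = 5
frame_barrier = threading.Barrier(NUM_CHARACTERS)
positions = [0] * NUM_CHARACTERS

def update_character(index):
    for _ in range(FRAMES):
        positions[index] += 1   # simulate one frame of movement
        frame_barrier.wait()    # block until every character finishes this frame

threads = [threading.Thread(target=update_character, args=(i,))
           for i in range(NUM_CHARACTERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(positions)  # every character has advanced exactly FRAMES steps: [5, 5, 5]
```

The barrier is exactly the kind of coordination overhead the analogy describes: it guarantees correctness, but every thread now spends time waiting on the slowest one.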
Developers know how to solve these synchronization problems, and programming tools (and computer languages) have been making it easier to do so. However, there is no getting around the fact that creating a properly multi-threaded application that scales across many cores is a significantly more complex task than writing traditional linear code. Computer scientists are constantly searching for a holy grail that will finally make multi-core exploitation second nature for every developer, but I fear this is an objective that may never be attainable, at least not within software technologies as we know them. It is simply not human nature to be able to think of many things all at the same time; it is almost as if we need some extra-dimensionality from the world of science fiction to make multi-threading anything more than a side-show.

OK, that is an exaggeration. There are applications which are multi-threaded and capable of exploiting multiple processors, but they require far more specialized expertise, and a greater investment in time, than applications which run in a linear fashion. Just look at how the latest operating systems from Apple and Microsoft (Snow Leopard and Windows 7, respectively) have many components which are not multi-threaded. As a case in point, many (but not all) of the codecs these operating systems rely on for ripping and encoding audio and video are single-threaded, and unable to take advantage of multiple CPUs. This is why editing and burning home movies can still take hours, with almost no noticeable improvement when moving from a two-core to a four-core system. If Apple and Microsoft find it difficult to build every component of their flagship operating systems to utilize multiple cores, what hope does the rest of the development community have? One could argue that not all applications need to be any faster, which is true enough.
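To illustrate why some codecs parallelize and others stay single-threaded, here is a hedged sketch in Python (with invented stand-in functions, not any real codec API): when frames can be processed independently, the work maps cleanly across workers; when each frame depends on the previous frame's output, as in inter-frame video compression, the loop is inherently serial.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for per-frame encoding work; a real codec does far more.
def encode_frame(frame):
    return frame * 2

def encode_independent(frames, workers=4):
    # No inter-frame dependencies, so frames can be mapped across workers.
    # (For CPU-bound Python, processes rather than threads would be needed
    # to sidestep the GIL; the map-and-merge structure is the point here.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_frame, frames))

def encode_dependent(frames):
    # Each result feeds into the next one, so this cannot be parallelized
    # the same way: the loop body must run in order.
    out, previous = [], 0
    for frame in frames:
        previous = encode_frame(frame) + previous  # depends on prior output
        out.append(previous)
    return out

print(encode_independent([1, 2, 3, 4]))  # [2, 4, 6, 8]
print(encode_dependent([1, 2, 3, 4]))    # [2, 6, 12, 20]
```

The second function is the shape of the single-threaded codecs the article describes: adding cores does nothing for it, which is why a four-core machine burns the same movie no faster than a two-core one.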
However, there are still plenty of tasks which could greatly benefit from speed gains (e.g. audio/video encoding, or games with masses of objects and large worlds). It is just unfortunate that actually exploiting the numerous cores available in modern CPUs may never be a viable option for many developers. The obvious consequence of this breakdown in Moore’s law (i.e. that PCs are not getting faster, even though the number of transistors is still increasing) is that there are fewer advantages to upgrading to newer systems. If you aren’t going to see any appreciable performance gain with four cores over what you already experience on a two-core machine, what would be the point of upgrading?


Copyright © 2009 IDG Communications, Inc.
