New chip techniques are needed for the new computing workloads

Opinion
Dec 17, 2018 | 5 mins
Artificial Intelligence, Computers and Peripherals, Data Center

With complex new computing workloads becoming the norm, and Moore's Law approaching its limit, it's time to rethink how we create computer processors.


Over the next two to three years, we will see an explosion of new complex processors that not only handle the general-purpose computing we commonly see today (scalar and vector/graphics processing), but also perform a significant amount of matrix and spatial data analysis for workloads such as augmented reality/virtual reality, visual response systems, artificial intelligence/machine learning, specialized signal processing, communications, and autonomous sensors.

In the past, we expected each new generation of chips to add features and functions as they were being designed. But that approach is becoming problematic. As we scale Moore’s Law closer to the edge of physical possibility (from 10nm to 7nm, then 5nm), perfecting each new process becomes increasingly lengthy and costly. What was generally about 12 months between process improvement steps is now closer to two years, and newer process factories can cost upwards of $10 billion.

Further, designs of specialized systems are pushing circuit “dies” to unprecedented sizes, making the yield of these chips (i.e., the number of good chips obtained from processing a large silicon wafer) significantly lower than in previous generations, which raises prices and limits supply.
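To make the yield effect concrete, here is a minimal sketch using the classic Poisson yield model, in which yield falls exponentially with die area for a given defect density. The wafer size, defect density, and die areas below are illustrative assumptions of mine, not figures from this column.

```python
# Illustrative sketch of why larger dies yield worse (assumed numbers, not from the column).
# Poisson yield model: yield_fraction = exp(-defect_density * die_area)
import math

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2   # 300 mm wafer, ignoring edge loss (assumption)
DEFECT_DENSITY = 0.001                      # defects per mm^2 (assumption)

def good_dies(die_area_mm2: float) -> float:
    """Approximate number of good dies per wafer for a given die area."""
    yield_fraction = math.exp(-DEFECT_DENSITY * die_area_mm2)
    return (WAFER_AREA_MM2 / die_area_mm2) * yield_fraction

for area in (100, 400, 800):                # small, large, very large dies (mm^2)
    print(f"{area} mm^2 die: ~{good_dies(area):.0f} good dies per wafer")
```

Even in this idealized model, growing the die from 100 mm² to 800 mm² cuts the count of good dies per wafer far more than eightfold, which is the price and supply squeeze described above.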

How to keep the promise of Moore’s Law alive

What’s needed is a different approach to keeping the promise of Moore’s Law alive (i.e., ever-increasing performance and features/functions) while working within the physical limits of chip making. There is no doubt that chip architectures will continue to advance to mitigate some of the negative physical effects (e.g., the FinFET transistors of a few years ago did this quite admirably). But being able to reuse perfectly good technology, without redesigning it for little or no benefit, is equally important.

How we experience the performance of chips has also changed. In the not-too-distant past, it was mostly about CPU performance. Then came the GPU for graphics processing, then the DSP for communications and video processing. We’re now at the point where specialized circuits for AI (TPUs, Nervana, FPGAs), specialized visual processing (VPUs, Movidius), and the like are making their way into mainstream devices. In addition, new non-volatile memory types (e.g., 3D XPoint, Optane) are needed to up the game as data sets grow ever larger.

We did (and still do) have multi-chip modules that tie diverse circuits together in one package, and the old substrate approach to multi-chip packaging (basically a silicon circuit board) does allow a mix-and-match capability; it’s used extensively in multi-CPU high-performance systems. But it does not meet the performance bar necessary for making heterogeneous chips attractive as substitutes for their monolithic equivalents. This has ramifications all the way from smaller chips at the Internet of Things (IoT) level, up to specialty edge servers, and into the cloud and data center.

Intel’s new strategy for designing processors

Intel has designed a new approach. Called Foveros, it allows many different chips built with different technology “nodes” and of different functionality to be stacked on top of each other with very fast communications between them. It also has sufficient power and heat transfer to make the resulting device nearly as effective as a monolithic chip. This type of technology has always been attractive, but it’s only now that Intel has found a way to make its performance and cost of manufacture competitive.

3D stacking techniques have been used in memory for some time, but memory is a much simpler problem: its chip structures are more regular and its communications requirements simpler than the diverse sizes, configurations, and I/O commonly found in heterogeneous processing circuits.

This is an important step for Intel, and ultimately for the overall market. It allows Intel to use older technology that it has already proven reliable and capable and that does not really benefit from being redesigned for newer process nodes. And it allows those components to be reused — thus extending the design cost recovery window, as well as making them available from already-proven, high-volume production facilities.

Some would say Intel is moving down this route because it lost its once two- to three-year advantage in process technology to more nimble players (e.g., TSMC). Certainly Intel has much to do to fix its process manufacturing problems. But many future chips will need circuits that don’t always lend themselves to the most modern process (e.g., FPGAs for AI programming, non-volatile memories, I/O, and communications/5G), nor do they fare well embedded in massive monolithic system chips. Having the ability to mix and match circuits from various processes while maintaining overall performance is highly advantageous. Further, it relieves the burden of producing a fully monolithic implementation of specialty chips (expensive and with a lengthy time to market), and it creates an ability for Intel to put other circuits — even those potentially designed by a customer or third party — on the final product.

Ultimately, I believe this capability is an important step toward market advantage for Intel, and that the benefits will be seen over the next one to two years. I also expect Intel’s competitors to work on a similar 3D stacking capability (just as they did with FinFET transistor technology in the past) to recover some market advantage. But Intel claims it took 10 years to perfect this technology, so it’s unlikely competitors can duplicate it quickly.

Besides the benefits to Intel, which should be significant, I expect this technology to benefit the market in general, as it brings more heterogeneous compute capability to specialized computing workloads faster and at lower cost — especially in areas where limited volumes don’t justify the massive production runs needed to make fully monolithic designs economical. And that should be good for everyone. After all, that’s what Moore’s Law is really about.


Jack E. Gold is founder and principal analyst at J. Gold Associates, LLC., an analyst firm in Northborough, Mass. With more than 45 years of experience in the computer and electronics industries, and 25 years as a tech industry analyst, he covers the many aspects of business and consumer computing and emerging technologies.

Follow Jack on Twitter at @jckgld and on LinkedIn.

The opinions expressed in this blog are those of Jack Gold and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
