Over the next two to three years, we will see an explosion of new complex processors that not only do the general-purpose computing we commonly see today (scalar and vector/graphics processing), but also do a significant amount of matrix and spatial data analysis (e.g., augmented reality/virtual reality, visual response systems, artificial intelligence/machine learning, specialized signal processing, communications, and autonomous sensors).

In the past, we expected every newer-generation chip to add features and functions as it was designed. But that approach is becoming problematic. As we scale Moore's Law closer to the edge of physical possibility (from 10nm to 7nm, then 5nm), perfecting each new process becomes increasingly lengthy and costly. What was once roughly 12 months between process improvements is now closer to two years, and newer process factories can cost upwards of $10 billion.

Further, designs of specialized systems are pushing circuit "dies" to unprecedented sizes, making the yield of these chips (i.e., the number of good chips that can be obtained from processing a large silicon wafer) significantly lower than in previous generations, which raises prices and limits supplies.

How to keep the promise of Moore's Law alive

What's needed is a different approach to keeping the promise of Moore's Law alive (i.e., ever-increasing performance and features/functions) while working within the physical limitations of chip making. There is no doubt that chip architectures will advance to mitigate some of the negative physical effects (e.g., the FinFET transistors of a few years ago did this quite admirably). But the ability to reuse perfectly good technology, rather than redesigning it for little or no benefit, is equally important.

How we experience the performance of chips has also changed.
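To make the earlier point about die size and yield concrete, here is a minimal sketch of the classic Poisson defect-density yield model; the die areas and defect density below are illustrative assumptions, not figures from Intel or any particular fab:

```python
import math

def poisson_yield(die_area_cm2: float, defect_density: float) -> float:
    """Classic Poisson yield model: the probability a die has zero defects,
    assuming defects land randomly at `defect_density` defects per cm^2."""
    return math.exp(-die_area_cm2 * defect_density)

# Illustrative numbers only: same process (0.1 defects/cm^2), two die sizes.
small_die = poisson_yield(1.0, 0.1)  # a ~1 cm^2 mainstream die
large_die = poisson_yield(6.0, 0.1)  # a ~6 cm^2 "unprecedented" die

print(f"1 cm^2 die yield: {small_die:.0%}")  # ~90%
print(f"6 cm^2 die yield: {large_die:.0%}")  # ~55%
```

Because yield falls exponentially with die area, each good large die costs disproportionately more to produce, which is part of why splitting a big monolithic design into smaller chips is economically attractive.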
In the not-too-distant past, it was mostly about CPU performance. Then came the GPU for graphics processing, then the DSP for communications and video processing. We're now at the point where specialized circuits for AI (TPUs, Nervana, FPGAs), specialized visual processing (VPUs, Movidius), and the like are making their way into mainstream devices. In addition, new non-volatile memory types (e.g., 3D XPoint, Optane) are needed to up the game as data sets become ever larger.

We did (and still do) have multi-chip modules that tie diverse circuits together in one package, and the old substrate approach to multi-chip (basically a silicon circuit board) does allow a mix-and-match capability. It's used extensively in multi-CPU high-performance systems. But it does not meet the performance criteria necessary for making heterogeneous chips attractive as a substitute for their monolithic equivalents. This has ramifications all the way from smaller Internet of Things (IoT) chips, up to specialty edge servers, and into the cloud and data center.

Intel's new strategy for designing processors

Intel has designed a new approach. Called Foveros, it allows many different chips, built on different technology "nodes" and with different functionality, to be stacked on top of each other with very fast communications between them. It also provides sufficient power delivery and heat transfer to make the resulting device nearly as effective as a monolithic chip.
This type of technology has always been attractive, but only now has Intel found a way to make its performance and cost of manufacture competitive.

3D stacking techniques have been used in memory for some time, but memory is a much simpler problem than heterogeneous systems: memory chips have more regular structures and simpler communications requirements than the diverse sizes, configurations, and I/O commonly found in heterogeneous processing circuits.

This is an important step for Intel and, ultimately, the overall market. It allows Intel to use older technology that it has already proven reliable and capable and that does not really benefit from being redesigned for newer process nodes. And it allows components to be reused, extending the design-cost recovery window and making them available from already-proven, high-volume production facilities.

Some would say Intel is moving down this route because it lost its once two- to three-year advantage in process technology to more nimble players (e.g., TSMC). Certainly Intel has much to do to fix its process manufacturing problems. But many future chips will need circuits that don't lend themselves to the most modern process (e.g., FPGAs for AI programming, non-volatile memories, input/output, and communications/5G), nor do they fare well embedded in massive monolithic system chips. Having the ability to mix and match circuits from various processes while maintaining overall performance is highly advantageous.
Further, it relieves the burden of producing a fully monolithic implementation of specialty chips (expensive and with a lengthy time to market), and it lets Intel put other circuits, even those designed by a customer or third party, on the final product.

Ultimately, I believe this capability is an important step for Intel to achieve market advantage, and the benefits will be seen over the next one to two years. I also expect Intel's competitors to work on a similar 3D stacking capability (just as they did with FinFET transistor technology in the past) to recover some market advantage. But Intel claims it took 10 years to perfect this technology, so it's unlikely competitors can duplicate it very quickly.

Besides the benefits to Intel, which should be significant, I expect this technology to benefit the market in general, as it brings more heterogeneous compute capability to more specialized computing workloads faster and at lower cost, especially in areas where limited quantities don't enable the massive runs needed to make fully monolithic designs economical. And that should be good for everyone. After all, that's what Moore's Law is really about.