GlobalFoundries will no longer make 7nm chips, a setback for AMD and a challenge for data centers' ability to scale.

The semiconductor world is buzzing over the news that custom chip manufacturer GlobalFoundries, the foundry born when AMD divested itself of its fabrication facilities, has announced the sudden decision to drop its 7nm FinFET development program and restructure its R&D teams around “enhanced portfolio initiatives.” For now, GlobalFoundries will stick to 12nm and 14nm manufacturing. All told, approximately 5 percent of its roughly 18,000 employees will lose their jobs.

The decision also sets back AMD, a GlobalFoundries customer, in its bid to get ahead of Intel, which has struggled for two years to get to 10nm and won’t get there until 2020.

“The vast majority of today’s fabless customers are looking to get more value out of each technology generation to leverage the substantial investments required to design into each technology node. Essentially, these nodes are transitioning to design platforms serving multiple waves of applications, giving each node greater longevity. This industry dynamic has resulted in fewer fabless clients designing into the outer limits of Moore’s Law,” said Thomas Caulfield, who was named CEO of GlobalFoundries last March, in a statement.

Making the move to a new process node is no trivial matter; it costs billions of dollars to drop down one step in process technology. What Caulfield is saying is that there are fewer customers for such bleeding-edge manufacturing processes, so the return on investment isn’t there.

“I think we’ve reached a change in Moore’s Law. Moore’s Law is an economic law: that we reduce the cost of transistors with each generation. We will still reduce the size of the transistor but at a slower rate,” said Jim McGregor, president of Tirias Research, who follows the semiconductor industry.

To augment Moore’s Law, it’s likely that Intel, GlobalFoundries, and others will turn to denser chip packaging, using multi-chip modules (MCM) and die stacking. AMD already uses an MCM design in its Epyc server processor: physically, the chip is not one 32-core monster but four 8-core modules connected by high-speed links. And NAND flash makers such as Samsung and Micron have been doing 3D stacking with flash memory for some time now.

Why you should care about process nodes

All this talk of process nodes sounds very inside baseball, the bailiwick of extreme nerd sites such as Tom’s Hardware Guide or AnandTech. But why should you care? Because it’s all about efficiency, said McGregor.

“You have to figure out how to scale those resources. With each generation of tech in a data center, you would expect to be able to handle more workloads in the same thermal and power envelope. If you can’t continue to scale, you get to a point where a data center is limited,” he said.

“I’ve seen it where the data center is wall to wall, floor to ceiling hardware. They are maxed out in power, thermals, and space. So, you are always looking for the next generation of technology [to put more compute power in the same space] or you can’t do anything with that data center. So, you don’t bring down the cost curve and you won’t be competitive to competition,” he added.

Imagine if hard-drive capacity stopped growing. You have only so much floor space in the data center for storage. If you can’t put in bigger hard drives, you reach the capacity limit of the data center, and then what? Build another one? Companies want to get out of their data centers.
Compute capacity is no different. Smaller process nodes for CPUs mean more compute power in the same space. There needs to be a breakthrough, and it isn’t happening any time soon. A possible path forward, extreme ultraviolet (EUV) lithography, is under development, but it costs billions and the equipment itself is the size of a two-story house, said McGregor. Meanwhile, transistor dimensions are now measured in atoms. The laws of physics are a stubborn thing.
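McGregor’s point about scaling within a fixed thermal and power envelope can be made concrete with a rough back-of-the-envelope sketch. The Python snippet below is purely illustrative: the power budget, per-server wattage, and workloads-per-server figures are hypothetical assumptions, not numbers from the article or any vendor. It simply shows that if a node shrink delivers more work per server at the same power draw, the same rack handles more workloads; if it doesn’t, capacity is capped.

```python
# Hypothetical back-of-the-envelope sketch of McGregor's scaling argument.
# All numbers are illustrative assumptions, not real vendor or article figures.

RACK_POWER_BUDGET_W = 10_000  # fixed power/thermal envelope for one rack

# (watts per server, workloads per server) for two hypothetical generations
generations = {
    "older-node server": (400, 10),
    "newer-node server": (400, 18),  # same power draw, more work per box
}

for name, (watts_per_server, workloads_per_server) in generations.items():
    servers_per_rack = RACK_POWER_BUDGET_W // watts_per_server
    workloads_per_rack = servers_per_rack * workloads_per_server
    print(f"{name}: {servers_per_rack} servers, ~{workloads_per_rack} workloads per rack")
```

If a new generation cannot deliver more work within the same envelope, those numbers stop improving, which is exactly the wall McGregor describes for a maxed-out data center.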