Like its CPU-GPU hybrids, Intel plans to put CPU and FPGA chips onto one package. Credit: Thinkstock

Two years ago, Intel spent $16.7 billion to acquire FPGA chip vendor Altera. So, what's it going to do with that big purchase? The company is finally ready to say.

A field-programmable gate array, or FPGA, is an integrated circuit that can be customized to perform specific functions. Whereas an x86 processor executes only the x86 instruction set, an FPGA can be reprogrammed on the fly to perform specified tasks. That's why x86 chips are considered general-purpose processors and FPGAs are viewed as customizable.

It might sound as though FPGAs will compete with the Xeon Phi accelerator cards, but Intel says that's not the case. The FPGA differs from its Xeon Phi acceleration strategy in that it offers multifunction acceleration, versus the specialized acceleration of the Phi. In Intel's framing, the FPGA complements Phi; it does not compete with it.

Like GPUs, FPGAs will be used in one of two ways: offload and inline. Offload, also called look-aside, means incoming data first goes through the CPU before being moved to the FPGA for processing. Inline means the CPU stays out of the way and data goes directly in and out of the FPGA for processing.

FPGAs better for certain tasks than Xeon Phi or GPUs

Intel is now positioning the Altera FPGAs as co-processors and admits they will compete with Xeon Phi in some ways, but it argues the FPGAs are more versatile and better suited to certain tasks than the Phi or GPUs, according to Bernhard Friebe, senior director of software solutions in the Intel Programmable Solutions Group.

"The advantage for FPGA is GPUs play in some areas but not all, and if you look at the use model of inline vs. offload, they are limited to offload mostly. So, there's a broader application space you can cover with FPGA," he said.

The integrated solution provides tight coupling between CPU and FPGA with very high bandwidth, while the external PCI Express card is not as tightly coupled. For ultra-low-latency and high-bandwidth applications, the integrated option is a great fit, Friebe said.

"Most of the differentiation [between integrated and discrete] is due to system architecture and data movement. In a data center environment where [you] run many different workloads, you don't want to tie it to a particular app," he said.

The more specialized the accelerator, the more performance you can squeeze out of it, Friebe said, but FPGAs used as multifunction accelerators will still achieve strong performance in some applications. The FPGA is by nature highly parallel and reprogrammable, which lends it to accelerating workloads that can be parallelized. These include data analytics, artificial intelligence (AI) and machine learning, video transcoding, compression, security, financial analysis, and genomics.

Two-pronged FPGA strategy

Intel is taking a two-pronged approach with its FPGA strategy, offering both hybrid CPU-FPGA processors (similar to its desktop CPUs that have a GPU integrated on the die) and discrete Arria- or Stratix-brand FPGA devices on a PCI Express card.

The hybrid CPU-FPGA device will pair a Skylake-generation CPU with an Arria 10 FPGA and will use the faster UltraPath Interconnect (UPI) link, Intel's successor to QuickPath Interconnect (QPI). Not much is known about UPI other than that it will operate at 9.6 GT/s or 10.4 GT/s data transfer rates and will be considerably more efficient than QPI because it will support multiple requests per message.
Intel is also providing a complete developer toolset and APIs so that apps for both the integrated and discrete products can be designed with the same tools, accelerators, and libraries. All are written in OpenCL, a C-like language.

"The beauty is it's standardized and open source. Their investment is forward-compatible to new-generation processors, easy to migrate, and provides an abstraction for FPGA developers to target a much larger user base," Friebe said.

Intel is sampling a discrete card, called a Programmable Acceleration Card (PAC), with the Arria 10 GX FPGA now, and it expects availability in the first half of 2018. A Xeon Scalable Platform with the FPGA integrated on a Skylake-generation Xeon is sampling today, with general availability in the second half of 2018.
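To make the OpenCL point concrete, here is a minimal, hypothetical sketch, not Intel's code: a trivial data-parallel kernel plus the standard host-side offload sequence that stages data through the CPU, hands it to the accelerator, and reads results back. The kernel, variable names, and workload are invented for illustration; a real FPGA build would compile the kernel with Intel's FPGA tools and select the accelerator device rather than the default.

/* Hypothetical, minimal OpenCL example (not Intel's code): a trivial
 * data-parallel kernel plus the standard host-side offload sequence.
 * Build on Linux with an OpenCL SDK installed: gcc example.c -lOpenCL  */
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <stdio.h>

/* OpenCL C kernel: each work-item handles one element. Independent,
 * data-parallel work like this is what an FPGA toolchain can turn into
 * a deep hardware pipeline. */
static const char *kernel_src =
    "__kernel void scale_and_add(__global const float *a,  \n"
    "                            __global const float *b,  \n"
    "                            __global float *out,      \n"
    "                            const float alpha)        \n"
    "{                                                     \n"
    "    int i = get_global_id(0);                         \n"
    "    out[i] = alpha * a[i] + b[i];                     \n"
    "}                                                     \n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], out[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 1.0f; }

    /* Discover a platform and device. CL_DEVICE_TYPE_DEFAULT keeps the
     * sketch portable; an FPGA target would ask for an accelerator. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* Offload ("look-aside") pattern: stage data through the CPU,
     * hand it to the accelerator, then read the results back. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(a), a, &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(b), b, &err);
    cl_mem dout = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(out), NULL, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale_and_add", &err);

    float alpha = 2.0f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dout);
    clSetKernelArg(k, 3, sizeof(float), &alpha);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dout, CL_TRUE, 0, sizeof(out), out, 0, NULL, NULL);

    printf("out[10] = %f (expected %f)\n", out[10], 2.0f * 10.0f + 1.0f);

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dout);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}

Because both the kernel and the host calls are expressed against the portable OpenCL model rather than a specific board, the same source can in principle be retargeted at the integrated Xeon-plus-Arria 10 part or the discrete Arria 10 GX card, which is the forward compatibility Friebe describes.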