Well, that was short.

Intel is ending work on its Nervana neural network processors (NNP) in favor of the artificial intelligence line it gained in the recent $2 billion acquisition of Habana Labs.

Intel acquired Nervana in 2016 and issued its first NNP chip one year later. After the $408 million acquisition, Nervana co-founder Naveen Rao was placed in charge of the AI platforms group, which is part of Intel's data platforms group. The Nervana chips were meant to compete with Nvidia GPUs in the AI training space, and Facebook worked with Intel "in close collaboration, sharing its technical insights," according to former Intel CEO Brian Krzanich.

For now, Intel has ended development of its Nervana NNP-T training chips and will deliver on current customer commitments for its Nervana NNP-I inference chips; Intel will move forward with Habana Labs' Gaudi and Goya processors in their place.

There are two parts to neural networks: training, where the computer learns a process, such as image recognition; and inference, where the system puts what it was trained to do to work. Training is far more compute-intensive than inference, and it's where Nvidia has excelled.

Intel said the decision was made after input from customers and is part of strategic updates to its data-center AI acceleration roadmap. "We will leverage our combined AI talent and technology to build leadership AI products," the company said in a statement to me.

"The Habana product line offers the strong, strategic advantage of a unified, highly programmable architecture for both inference and training. By moving to a single hardware architecture and software stack for data-center AI acceleration, our engineering teams can join forces and focus on delivering more innovation, faster to our customers," Intel said.

This outcome from the Habana acquisition wasn't entirely unexpected.
"We had thought that they might keep one for training and one for inference. However, Habana's execution has been much better and the architecture scales better. And Intel still gained the IP and expertise of both companies," said Jim McGregor, president of Tirias Research.

The good news is that whatever developers created for Nervana won't have to be thrown out. "The frameworks work on either architecture," McGregor said. "While there will be some loss going from one architecture to another, there is still value in the learning, and I'm sure Intel will work with customers to help them with the migration."

This is the second AI/machine learning effort Intel has shut down, the first being Xeon Phi. Xeon Phi itself was a bit of a problem child, dating back to Intel's failed Larrabee experiment to build a GPU based on x86 instructions. Larrabee never made it out of the gate, while Xeon Phi lasted a few generations as a co-processor but was ultimately axed in August 2018.

Intel still has a lot of products targeting various AI workloads: Mobileye, Movidius, Agilex FPGAs, and its upcoming Xe architecture. Habana Labs has been shipping its Goya Inference Processor since late 2018, and samples of its Gaudi AI Training Processor were sent to select customers in the second half of 2019.