With the Loihi 2 neuromorphic chip, machines can perform application processing, problem-solving, adaptation, and learning much faster than before.

Four years after introducing Loihi, its first neuromorphic chip, Intel has released a second-generation processor that the company says delivers faster processing, greater resource density, and improved power efficiency.

CPUs are often called the brains of the computer, but they really aren't: they process only a handful of tasks at a time, in serial, nothing like what the brain does automatically to keep you alive. Neuromorphic computing attempts to replicate the functions of the brain by performing numerous tasks simultaneously, with an emphasis on perception and decision making.

Neuromorphic chips mimic neurological functions through computational "neurons" that communicate with one another. The first generation of Loihi chips had around 128,000 of those digital neurons; Loihi 2 has more than a million. Intel says early tests of Loihi 2 required more than 60 times fewer ops per inference when running deep neural networks compared to Loihi 1, without a loss in accuracy. That can mean real-time application processing, problem-solving, adaptation, and learning. The chip has even learned how to smell.

Loihi 2 also features faster I/O interfaces to support Ethernet connections with vision-based sensors and larger meshed networks. This will help the chip better integrate with the robotics and sensors that have commonly been used with Loihi 1 in the past.

Loihi isn't sold like regular Intel chips. It is sold as part of complete systems to select members of the Intel Neuromorphic Research Community (INRC). Those systems are Oheo Gulch, which uses a single Loihi 2 chip and is intended for early evaluation, and Kapoho Point, which offers eight Loihi 2 chips and will be available soon.

Intel Releases Lava Framework

To support development of neuromorphic applications, Intel has also introduced an open, modular, and extensible software framework known as "Lava," which the company says provides the neuromorphic computing community with a common development framework. A component of Lava is Magma, an interface for mapping and executing neural-network models and other processes on neuromorphic hardware. Lava also includes offline training, integration with third-party frameworks, Python interfaces, and more. The Lava framework is available now on GitHub.
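To give a sense of what Lava's Python interface looks like, here is a minimal sketch of a two-layer spiking network run in software simulation. The article mentions only Lava, Magma, and Python interfaces; the specific classes below (LIF, Dense, RunSteps, Loihi1SimCfg) are assumptions drawn from the open-source lava-nc repository on GitHub and may differ across versions.

```python
# Minimal Lava sketch: two populations of leaky integrate-and-fire (LIF)
# neurons connected by a Dense synaptic layer, run on the CPU simulation
# backend. Class and module names follow the public lava-nc repository
# and are assumptions, not taken from the article.
import numpy as np

from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

# Two neuron populations and a synaptic weight matrix between them
pre = LIF(shape=(3,))                   # 3 input LIF neurons
post = LIF(shape=(2,))                  # 2 downstream LIF neurons
dense = Dense(weights=np.ones((2, 3)))  # 2x3 weight matrix

# Route spikes from the first population through the synapses to the second
pre.s_out.connect(dense.s_in)
dense.a_out.connect(post.a_in)

# Run 100 timesteps on the CPU backend; targeting Loihi 2 hardware would
# use a different run configuration available to INRC members
pre.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg())
pre.stop()
```

The same process graph can, in principle, be retargeted from simulation to neuromorphic hardware by swapping the run configuration, which is the portability Lava's common framework is meant to provide.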