DARPA-funded IBM researchers today said they have developed a human brain-inspired computer chip loaded with more than 5 billion transistors and 256 million “synapses,” or programmable logic points, analogous to the connections between neurons in the brain.
In addition to being one of the largest and most complex computer chips ever produced, it requires only a fraction of the electrical power of conventional chips to operate, IBM and the Defense Advanced Research Projects Agency (DARPA) stated.
The developers said the chip, which can be tiled to create large arrays, is built on Samsung Foundry's 28nm process technology. At 5.4 billion transistors, it has one of the highest transistor counts of any chip ever produced, yet each chip consumes less than 100 milliwatts of electrical power during operation. When applied to benchmark pattern-recognition tasks, the new chip achieved energy savings of two orders of magnitude compared to state-of-the-art traditional computing systems, DARPA and IBM said.
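Taken at face value, those two figures imply a simple comparison. The following is an illustrative back-of-the-envelope reading of the claim, not a benchmark published by IBM or DARPA:

```python
# Illustrative only: what "two orders of magnitude" savings means
# given the stated chip power budget.
chip_power_mw = 100                    # "less than 100 milliwatts", per IBM/DARPA
savings_factor = 100                   # "two orders of magnitude" = ~100x
conventional_power_mw = chip_power_mw * savings_factor

# A task the new chip runs at under 100 mW would cost a conventional
# system on the order of 10 W.
print(conventional_power_mw / 1000, "watts")
```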
The high energy efficiency is achieved, in part, by distributing data and computation across the chip, alleviating the need to move data over large distances. In addition, the chip runs in an asynchronous manner, processing and transmitting data only as required, similar to how the brain works. The new chip’s high energy efficiency makes it a candidate for defense applications such as mobile robots and remote sensors where electrical power is limited, IBM and DARPA stated.
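The asynchronous, data-driven behavior described above can be sketched in a few lines. The following is a hypothetical illustration of the principle only (a leaky integrate-and-fire neuron that computes solely when input arrives), not IBM's actual circuit design or programming model:

```python
# Hypothetical sketch: an event-driven spiking neuron. No work is done
# while the neuron is idle; leak is applied lazily when a spike arrives.
class EventDrivenNeuron:
    def __init__(self, threshold=1.0, leak=0.1):
        self.potential = 0.0
        self.threshold = threshold   # fire when potential crosses this
        self.leak = leak             # decay per elapsed time step
        self.last_event_time = 0

    def receive_spike(self, weight, time):
        # Apply leak only for the time elapsed since the last event,
        # mirroring "processing and transmitting data only as required".
        elapsed = time - self.last_event_time
        self.potential = max(0.0, self.potential - self.leak * elapsed)
        self.last_event_time = time
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after firing
            return True              # emit an output spike
        return False

neuron = EventDrivenNeuron(threshold=1.0, leak=0.1)
events = [(0, 0.6), (1, 0.6), (10, 0.6)]   # (time, synaptic weight)
spikes = [t for t, w in events if neuron.receive_spike(w, t)]
print(spikes)  # only the closely spaced inputs accumulate enough to fire
```

Because computation happens only on input events, an idle network draws essentially no dynamic power, which is one intuition behind the chip's efficiency claims.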
“Computer chip design is driven by a desire to achieve the highest performance at the lowest cost. Historically, the most important cost was that of the computer chip. But Moore’s law—the exponentially decreasing cost of constructing high-transistor-count chips—now allows computer architects to borrow an idea from nature, where energy is a more important cost than complexity, and focus on designs that gain power efficiency by sparsely employing a very large number of components to minimize the movement of data. IBM’s chip, which is by far the largest one yet made that exploits these ideas, could give unmanned aircraft or robotic ground systems with limited power budgets a more refined perception of the environment, distinguishing threats more accurately and reducing the burden on system operators,” said Gill Pratt, DARPA program manager, in a statement.
The chip was developed under the auspices of DARPA’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program, which looks to speed the development of a brain-inspired chip that could perform difficult perception and control tasks while achieving significant energy savings, the agency says. The goal is to develop systems capable of analyzing vast amounts of data from many sources in the blink of an eye, letting the military or civilian businesses make rapid decisions in time to have a significant impact on a given problem or situation.
According to DARPA, programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems, such as the human brain, autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations, DARPA stated.
Compared with biological systems, for example, today’s programmable machines are less efficient by a factor of one million to one billion in complex, real-world environments. Many tasks that people and animals perform effortlessly, such as perception and pattern recognition, audio processing and motor control, are difficult for traditional computing architectures to do without consuming a lot of power. Biological systems consume far less energy than current computers attempting the same tasks, the researchers stated.
The new chip is just one of several DARPA programs that aim to dig more deeply into how computers can mimic a key portion of our brain.
Last fall the agency issued a Request for Information on how it could develop systems that go beyond machine learning, Bayesian techniques, and graphical technology to solve "extraordinarily difficult recognition problems in real-time."
DARPA said it is interested in mimicking a portion of the brain known as the neocortex, which handles higher brain functions such as sensory perception, motor commands, spatial reasoning, conscious thought and language. Specifically, DARPA is looking for information that provides new concepts and technologies for developing what it calls a "Cortical Processor" based on Hierarchical Temporal Memory.
"Although a thorough understanding of how the cortex works is beyond current state of the art, we are at a point where some basic algorithmic principles are being identified and merged into machine learning and neural network techniques. Algorithms inspired by neural models, in particular neocortex, can recognize complex spatial and temporal patterns and can adapt to changing environments. Consequently, these algorithms are a promising approach to data stream filtering and processing and have the potential for providing new levels of performance and capabilities for a range of data recognition problems," DARPA stated. "The cortical computational model should be fault tolerant to gaps in data, massively parallel, extremely power efficient, and highly scalable. It should also have minimal arithmetic precision requirements, and allow ultra-dense, low power implementations."
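The sparse, fault-tolerant recognition DARPA describes can be illustrated with a toy example. This is a hypothetical sketch loosely in the spirit of Hierarchical Temporal Memory's overlap-based matching of sparse representations, not DARPA's or any vendor's actual algorithm:

```python
# Hypothetical sketch: patterns stored as sparse sets of active bits,
# recognized by overlap. Matching by overlap rather than exact equality
# is what makes the scheme "fault tolerant to gaps in data".
def overlap(stored, observed):
    # Fraction of the stored pattern's active bits present in the input.
    return len(stored & observed) / len(stored)

def recognize(memory, observed, min_overlap=0.6):
    # Return the name of the best-matching stored pattern, or None.
    best, score = None, 0.0
    for name, pattern in memory.items():
        s = overlap(pattern, observed)
        if s >= min_overlap and s > score:
            best, score = name, s
    return best

memory = {"A": {1, 4, 9, 16, 25}, "B": {2, 3, 5, 7, 11}}
# An input missing one of A's bits (and containing a noisy extra bit)
# still matches A, despite the gap in the data.
print(recognize(memory, {1, 4, 9, 16, 30}))
```

Each pattern is scored independently, so a real implementation could evaluate all stored patterns in parallel, which is one reason DARPA's wish list pairs massive parallelism with minimal arithmetic precision.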