Beyond Moore's Law: Neuromorphic computing?

Some researchers think brain-copying architectures should replace traditional computing. One group explains how that might work.


With the looming exhaustion of Moore’s Law – the observation that the number of transistors on a microchip doubles roughly every two years – the search is on for new paths to reliable processing gains over time.

One possibility is that machines inspired by how the brain works could take over, fundamentally shifting computing to a revolutionary new tier, according to an explainer study published this month in the journal Applied Physics Reviews.

“Today’s state-of-the-art computers process roughly as many instructions per second as an insect brain,” say the paper’s authors Jack Kendall, of Rain Neuromorphics, and Suhas Kumar, of Hewlett Packard Labs. The two write that processor architecture must now be completely rethought if Moore’s Law is to be perpetuated, and that replicating the “natural processing system of a [human] brain” is the way forward.

Deep neural networks (DNNs) should be the foundation, the pair believes. A DNN is essentially layered deep learning: successive layers pull low- and high-level features (edges and shapes, for example) from data. Kendall and Kumar explain that the human brain, which DNNs loosely copy, can sort through massive datasets and generally identify data better than a traditional computer, so it should be the starting point.
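To make that concrete, here’s a minimal sketch of what those stacked layers look like in code. The three-layer structure, the layer sizes, and the use of NumPy are illustrative assumptions of mine, not anything from the paper:

```python
import numpy as np

def relu(x):
    # Nonlinear activation lets each layer build on the previous one
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# A toy 3-layer network: 64 input features -> 32 -> 16 -> 10 outputs.
# Sizes are arbitrary, chosen only for illustration.
weights = [rng.standard_normal((64, 32)) * 0.1,
           rng.standard_normal((32, 16)) * 0.1,
           rng.standard_normal((16, 10)) * 0.1]

def forward(x):
    # Each layer extracts progressively higher-level features from its input
    for w in weights:
        x = relu(x @ w)
    return x

sample = rng.standard_normal(64)   # a stand-in for real input data
print(forward(sample).shape)       # (10,)
```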

This kind of thing is being attempted already. Existing artificial intelligence (AI) is a stab at getting computers to learn like a human brain. Much like the brain, AI engines learn from patterns in data. Algorithms are combined with processing power, and rewards are dished out when the machine gets it right.
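As a toy illustration of that reward loop (my own construction, not from the paper), here’s a two-armed bandit whose value estimates drift toward whichever action pays off; the payout probabilities are invented:

```python
import random

# Hypothetical payout probabilities for two actions; the machine doesn't know them.
true_payouts = [0.3, 0.7]
estimates = [0.0, 0.0]   # the machine's learned value of each action
counts = [0, 0]

random.seed(0)
for _ in range(1000):
    # Mostly pick the action believed best, occasionally explore
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max((0, 1), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payouts[action] else 0.0
    counts[action] += 1
    # Nudge the estimate toward the observed reward (incremental average)
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # estimates approach the true payout rates
```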

A brain-inspired neuromorphic computer, however, would take computing a step further, the pair believes. Neuromorphic computing mimics neuro-biological architectures in a kind of hybrid digital-analog circuit, much as biological systems do.

The authors say there are 10 fundamentals that need to be gotten right to reach this next level:

Parallelism – As in the brain, which works rapidly by doing many things at once, numerous mathematical operations must occur simultaneously. It’s an extension of what we see now in graphics processing units (GPUs), where large-scale graphics are created using concurrent calculations called matrix multiplications.
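For a feel of why matrix multiplication is the workhorse, consider the sketch below: every entry of the result is independent of the others, so hardware can compute them all at once. NumPy runs this on a CPU, but the structure is exactly what GPUs parallelize:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((1024, 1024))
b = rng.standard_normal((1024, 1024))

# One matrix multiplication = roughly 1024**3 multiply-adds. Each entry of
# the result depends only on one row of a and one column of b, so all
# entries can be computed concurrently.
c = a @ b
print(c.shape)  # (1024, 1024)
```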

In-memory computing – It wastes resources to fetch data from distant memory, and human brains, indeed, don’t do that; they store information in the same synapses that do the thinking. Semiconductors that combine processing and memory – memristors – could help here. (I wrote a few weeks ago about progress being made combining transistors with storage. That combo could have similar resource advantages.)
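Here’s a rough sketch of the in-memory idea under deliberately simplified assumptions of mine: in a memristor crossbar, the stored conductances double as the network’s weights, and applying voltages to the rows produces output currents that are the vector-matrix product, so there is no separate fetch step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Conductances stored in the crossbar double as the network's weights.
# Values are arbitrary; a real device has limited, noisy conductance ranges.
conductance = rng.uniform(0.1, 1.0, size=(4, 3))  # 4 input rows, 3 output columns

voltages = np.array([0.2, 0.5, 0.1, 0.9])  # input signal applied to the rows

# Ohm's law per cell (I = G * V) plus Kirchhoff's current law per column
# computes the vector-matrix product in place, where the data already lives.
currents = voltages @ conductance
print(currents)
```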

Analog computing – Real-world data is analog, not digital, the authors point out. Most real-world quantities aren’t zeros and ones, so, for efficiency, any new computing architecture needs to accept that, adapt, and handle the inherent precision problems that result.
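One way to see the precision trade-off (a toy of mine, not the authors’): treat every stored weight as slightly noisy, the way an analog device would be, and the result degrades gracefully rather than failing outright:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((16, 16))
x = rng.standard_normal(16)

exact = x @ weights

# Analog storage is imprecise: model each weight with ~1% multiplicative noise.
noisy_weights = weights * (1 + 0.01 * rng.standard_normal(weights.shape))
approx = x @ noisy_weights

# The answer is close but not exact: the precision problem analog designs must manage.
print(np.max(np.abs(exact - approx)))
```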

Plasticity – The network needs to re-tune itself in real time to account for changing conditions and data.
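A classic illustration of plasticity (my choice of mechanism, not necessarily the authors’) is a Hebbian update, often summarized as “neurons that fire together wire together”:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros((3, 3))   # synaptic weights between 3 presynaptic and 3 postsynaptic neurons
lr = 0.01              # learning rate

for _ in range(100):
    pre = rng.standard_normal(3)            # presynaptic activity (stand-in for live input)
    post = np.tanh(pre @ (np.eye(3) + w))   # postsynaptic response through current weights
    # Hebbian rule: strengthen a synapse whenever both of its ends are co-active
    w += lr * np.outer(pre, post)

print(w.round(3))   # the weights have re-tuned themselves from the data alone
```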

Probabilistic computing – The authors suggest computers should get less precise, just like the human brain. Estimating degrees of probability is faster than calculating precisely, and it requires less information.
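A small sketch of trading precision for speed, using a Monte Carlo estimate of pi as my own stand-in example: a few random samples give a rough answer with far less work than an exact computation:

```python
import random

random.seed(0)

def estimate_pi(samples):
    # Count random points that land inside the unit quarter-circle
    hits = sum(1 for _ in range(samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

# More samples buy more precision; fewer samples buy speed.
print(estimate_pi(100))      # rough, cheap
print(estimate_pi(100_000))  # closer, costlier
```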

Scalability – The depth of the network allows for complexity: introducing more layers yields more scale.
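To put rough numbers on that (layer sizes are arbitrary choices of mine), depth scales a network by stacking layers rather than redesigning it, with each extra layer adding a fixed block of parameters:

```python
def param_count(layer_sizes):
    # Weights plus biases for each consecutive pair of layers
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Adding depth grows capacity without changing the overall design:
for depth in (1, 2, 4, 8):
    sizes = [128] + [64] * depth + [10]
    print(depth, "hidden layers ->", param_count(sizes), "parameters")
```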

Sparsity – Large-scale networks, including neural computers, can’t connect every node to every other, just as not all neurons in the brain are connected to each other; full connectivity is a redundancy that wastes resources. A hub-and-spoke topology works better and allows for better scaling. The same should hold in the next computers, the researchers say.
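A minimal sketch of what sparsity buys, under assumptions of my own: mask out most of a weight matrix, and only the surviving connections need storage and compute:

```python
import numpy as np

rng = np.random.default_rng(0)
dense = rng.standard_normal((1000, 1000))

# Keep only ~2% of connections, mimicking a brain-like sparse topology.
mask = rng.random(dense.shape) < 0.02
sparse = dense * mask

kept = mask.sum()
print(f"{kept} of {dense.size} connections kept "
      f"({100 * kept / dense.size:.1f}%)")
# With a true sparse format (e.g., scipy.sparse), storage and multiply
# cost scale with the kept connections, not with the full matrix.
```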

Learning (credit assignment) – Synaptic weights (the strength and amount of influence synapses have) must be adjusted as new information is presented, which means working out how much each connection contributed to the result.
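Credit assignment asks which synapse to adjust, and by how much, when the output is wrong. In today’s deep learning that job falls to gradient descent; here is a stripped-down single-layer version on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 5))        # inputs
true_w = rng.standard_normal(5)
y = x @ true_w                           # targets from a hidden "true" rule

w = np.zeros(5)                          # synaptic weights to be learned
lr = 0.1
for _ in range(200):
    error = x @ w - y
    # Gradient of the squared error assigns each weight its share of the blame
    grad = x.T @ error / len(x)
    w -= lr * grad                       # adjust weights toward new information

print(np.round(w - true_w, 4))           # near zero: credit correctly assigned
```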

Causality – The relationship between cause and effect in a result has to be addressed. Causal inference is hard, and machine learning generally has had problems getting this bit right.
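A toy demonstration of the trap (entirely my own construction): a hidden common cause makes two unrelated variables strongly correlated, and a pattern-learner would happily “predict” one from the other:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hidden common cause drives both measurements.
confounder = rng.standard_normal(10_000)
ice_cream_sales = confounder + 0.3 * rng.standard_normal(10_000)
drownings = confounder + 0.3 * rng.standard_normal(10_000)

# Strong correlation, zero causation in either direction.
print(np.corrcoef(ice_cream_sales, drownings)[0, 1])  # ~0.9
# A pattern-based model sees only the correlation; telling cause from
# coincidence requires interventions or explicit causal assumptions.
```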

Nonlinearity – The brain isn’t linear the way a computer is. “The brain operates at the edge of chaos to produce the most optimal learning and computation,” the team says. The next computer architecture needs to encompass that brain-like nonlinearity while still working with the linearity of today’s electronics.
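The “edge of chaos” has a classic toy model, the logistic map, whose behavior flips from orderly to chaotic as one parameter crosses a threshold. The snippet below is my illustration, not the paper’s:

```python
def logistic_trajectory(r, x=0.2, steps=10):
    # Iterate the nonlinear map x -> r * x * (1 - x)
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(round(x, 3))
    return out

print(logistic_trajectory(2.8))   # settles to a fixed point: orderly
print(logistic_trajectory(3.9))   # never settles: chaotic
# Useful computation is argued to live near the boundary between the two regimes.
```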

“Our present hardware is not able to keep up,” Kendall and Kumar say in their paper, which also looks at materials. “The future of computing will not be about cramming more components on a chip but in rethinking processor architecture,” which should be neuromorphic.

Copyright © 2020 IDG Communications, Inc.
