Brain-based computer chips aren’t just for AI anymore

With the insertion of a little mathematics, researchers at Sandia National Laboratories have shown that neuromorphic computers, which synthetically reproduce the logic of the brain, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.

The results, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations using the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks, and movements in financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.

“Basically, we have shown that neuromorphic hardware can provide relevant computational benefits for many applications, not just artificial intelligence to which it is obviously related,” Aimone said. “Recently discovered applications range from radiation transport and molecular simulations to computational finance, biological modeling and particle physics.”

In optimal cases, neuromorphic computers will solve problems faster and use less power than conventional computing, he said.

The bold claims should be of interest to the high-performance computing community, as finding capabilities to solve statistical problems is a growing concern, Aimone said.

“These problems are not really suitable for GPUs [graphics processing units], which future exascale systems are likely to rely on,” Aimone said. “The exciting thing is that nobody has really looked at neuromorphic computing for these kinds of applications before.”

Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors such as GPUs on next-generation computing platforms. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that could lead to a scalable and energy-efficient approach to solving the problems we care about.”

Franke models photon and electron radiation to understand their effects on components.

The team successfully applied neuromorphic computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-neuron Loihi platform that Sandia received about a year and a half ago from Intel Corp., Aimone said. “Then we showed that our algorithm could be extended to more sophisticated diffusion processes useful in a range of applications.”
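As a rough illustration of the kind of random-walk diffusion problem described above, here is a minimal pure-Python sketch. The barrier position, step count, and crossing probability are invented for illustration; the team's actual approach encodes such walks in Loihi's spiking neurons rather than in conventional loops.

```python
import random

def diffuse(n_molecules=5000, n_steps=200, barrier=20, pass_prob=0.1):
    """One-dimensional random walk of gas molecules with a leaky barrier.

    Each molecule starts at x = 0 and takes unit steps left or right.
    The barrier sits between x = barrier - 1 and x = barrier; a step
    across it succeeds only with probability pass_prob. Returns the
    fraction of molecules that finish on the far side of the barrier.
    """
    rng = random.Random(42)  # fixed seed so the sketch is reproducible
    crossed = 0
    for _ in range(n_molecules):
        x = 0
        for _ in range(n_steps):
            step = rng.choice((-1, 1))
            at_barrier = (x == barrier - 1 and step == 1) or (x == barrier and step == -1)
            if at_barrier and rng.random() >= pass_prob:
                continue  # the molecule bounces off the barrier
            x += step
        if x >= barrier:
            crossed += 1
    return crossed / n_molecules
```

Averaging over many independent walks like this is exactly the kind of embarrassingly random workload that, the researchers argue, maps poorly to vector processors but naturally to spiking hardware.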

The assertions are not intended to question the primacy of standard computing methods used to run utilities, desktops, and phones. “There are areas, however, where the combination of computational speed and lower energy costs can make neuromorphic computing the ultimate choice,” he said.

Unlike the difficulties of adding qubits to quantum computers — another attractive way to overcome the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.

There can be a high cost, however, to moving data on or off the neurochip processor. “As it accumulates, it slows down the system, and eventually it won’t work at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that efficiently computed summary statistics, and we output those summaries instead of the raw data.”
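The idea of summarizing on-chip rather than shipping raw data off can be illustrated with a streaming-statistics sketch in conventional code (here, Welford's algorithm). This is an analogy for the concept, not the neuron circuit the team built: only a handful of numbers leave the "chip" instead of the full sample stream.

```python
class RunningStats:
    """Streaming mean and variance (Welford's algorithm).

    Keeps just three numbers, no matter how many samples arrive,
    mirroring the idea of computing summaries where the data is
    produced and exporting only the summary.
    """
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Population variance of the samples seen so far."""
        return self._m2 / self.n if self.n > 1 else 0.0
```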

Severa wrote several of the experiment’s algorithms.

Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted by surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, emits a tiny burst of electricity, an action known as a spike.

Unlike the metronomic regularity with which information is passed around in conventional computers, Aimone said, artificial neurons in neuromorphic computing flash irregularly, as biological neurons do in the brain, and so may take longer to transmit information. But because the process only draws energy from sensors and neurons when they are contributing data, it requires less energy than formal computing, which must poll every processor whether or not it is contributing.

The conceptually bio-based process has another advantage: its computing and memory components exist in the same structure, whereas conventional computing spends energy transferring data back and forth between these two functions. The initially slow reaction time of artificial neurons may slow its solutions at first, but this factor vanishes as the number of neurons increases, so that more information becomes available to be totaled in the same time period, Aimone said.
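The charge-accumulate-and-spike behavior described above is the classic integrate-and-fire neuron model. A minimal sketch, with illustrative threshold and leak values (not Loihi's actual parameters):

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.0):
    """Minimal (leaky) integrate-and-fire neuron.

    inputs: a sequence of incoming charges, one per time step.
    The neuron accumulates charge; when its potential reaches the
    threshold it emits a spike and resets to zero. With leak > 0,
    a fraction of the potential drains away each step.
    Returns the list of time steps at which the neuron spiked.
    """
    v = 0.0
    spikes = []
    for t, charge in enumerate(inputs):
        v = v * (1.0 - leak) + charge  # accumulate, minus any leak
        if v >= threshold:
            spikes.append(t)  # emit a spike...
            v = 0.0           # ...and reset the potential
    return spikes
```

Note the energy argument in code form: in the loop, nothing happens unless charge actually arrives, whereas a clocked processor would do work on every tick regardless.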

The process begins with the use of a Markov chain – a mathematical construct in which, as on a Monopoly game board, the next outcome depends only on the current state and not on the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most events, which are linked. For example, he said, the number of days a patient must stay in the hospital is at least partly determined by the length of the stay so far.

Starting with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
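The Markov-chain-plus-Monte-Carlo recipe can be sketched in a few lines of plain Python. The states and transition probabilities below are invented for illustration (loosely echoing the disease-spread example) and are not taken from the paper:

```python
import random

# Transition probabilities: the next state depends only on the current
# state (the Markov property), like landing on squares of a game board.
TRANSITIONS = {
    "healthy":   [("healthy", 0.9), ("infected", 0.1)],
    "infected":  [("infected", 0.7), ("recovered", 0.3)],
    "recovered": [("recovered", 1.0)],  # absorbing state
}

def walk(rng, start="healthy", steps=30):
    """One random walk: repeatedly sample the next state."""
    state = start
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, p in TRANSITIONS[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return state

def monte_carlo(n_walks=5000, seed=0):
    """Monte Carlo: run many independent walks and tally final states."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_walks):
        s = walk(rng)
        counts[s] = counts.get(s, 0) + 1
    return counts
```

Each walk is cheap and independent of the others, which is why the researchers could map many of them onto many spiking neurons running in parallel.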

“Monte Carlo algorithms are a natural solution method for radiation transport problems,” Franke said. “The particles are simulated in a process that mirrors the physical process.”

The energy of each step was recorded as a single energy spike by an artificial neuron reading the result of each step in turn. “This neural network is more energy-efficient in sum than recording every moment of every step, as ordinary computing has to do. This partly explains the speed and efficiency of the neuromorphic process,” Aimone said. More chips will help the process go faster using the same amount of energy, he said.

The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to one million. Larger scale systems then combine multiple chips onto a board.

“It may make sense that a technology like Loihi finds its way into a future high-performance computing platform,” Aimone said. “It could help make HPC much more energy-efficient, climate-friendly and just plain more affordable.”

The work was funded through the NNSA Advanced Simulation and Computing program and Sandia’s Laboratory Directed Research and Development program.

Video: https://youtu.be/O_8E26axKFY

Sherry J. Basler