Nanomaterials pave the way for the next generation of computing

A neuromorphic chip, key to the predicted “paradigm shift” in computing performance. Credit: Seung Hwan Lee

Solid-state computing has been around since the 1950s, when transistors began to replace vacuum tubes as the key components of electronic circuits. Generations of new semiconductor devices that electronically process and store information at ever-greater speeds have come and gone: germanium transistors gave way to silicon transistors, then to integrated circuits, and then to increasingly complex chips packed with ever-smaller transistors.

Since 1965, the industry has been guided by Moore’s Law – the prediction by Gordon Moore, co-founder of microprocessor giant Intel – that ever-shrinking devices would deliver improved computing performance and energy efficiency. Advances in nanotechnology have pushed the smallest features of today’s most advanced integrated circuits to the atomic scale, a limit that current devices cannot shrink beyond. The next major step in computing requires not only new nanomaterials, but also new architectures.

CMOS (complementary metal-oxide-semiconductor) transistors have been the basic elements of integrated circuits since the 1980s. CMOS circuits, like the generations of digital computers before them, are based on the fundamental architecture laid out by John von Neumann in the middle of the 20th century. That architecture separates the electronic components that store data from those that process digital information: the computer stores information in one place, then sends it to other circuits for processing. Keeping memory apart from the processor prevents signals from interfering with each other and maintains the precision needed for digital computation. However, the time spent moving data between memory and processor has become a bottleneck. Developers are now looking to alternative, non-von Neumann architectures that perform calculations “in memory”, avoiding the time wasted shuttling data around.
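
To make the contrast concrete, here is a minimal, purely illustrative Python sketch; the class and function names are invented for this example and do not describe any real hardware or API. A von Neumann-style routine explicitly fetches each stored row before computing with it, whereas a compute-in-memory array performs the multiply-accumulate where the data already resides, so only inputs and results cross the boundary.

```python
import numpy as np

# Toy illustration (not any vendor's API): in a von Neumann machine the
# processor must fetch every operand from a separate memory before it can
# compute; a compute-in-memory array performs the multiply-accumulate
# where the weights already reside, so only inputs and results move.

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 3))   # stored matrix
x = rng.standard_normal(3)              # input vector


def von_neumann_matvec(memory, vector):
    """Fetch each row from 'memory', then compute in the 'processor'."""
    result = np.zeros(memory.shape[0])
    for i in range(memory.shape[0]):
        row = memory[i].copy()           # explicit data movement per row
        result[i] = np.dot(row, vector)  # arithmetic happens after the move
    return result


class InMemoryArray:
    """Array that 'computes where the data lives': only x and y cross it."""
    def __init__(self, stored):
        self._stored = stored

    def matvec(self, vector):
        return self._stored @ vector     # no row-by-row shuttling in this model


print(von_neumann_matvec(weights, x))
print(InMemoryArray(weights).matvec(x))  # same result, less data movement
```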

Another goal is to move to neuromorphic systems, which use algorithms and network designs that mimic the high connectivity and parallel processing of the human brain. This means developing new artificial neurons and synapses that are compatible with electronic processing, but exceed the performance of CMOS circuits, explains Mark Hersam, a researcher in chemistry and materials science. It’s no small feat, he adds, but it would be well worth the cost. “I’m more interested in neuromorphic computing than in-memory processing, because I think brain emulation is a bigger paradigm shift, with more potential benefits.”
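
One standard abstraction of the behavior such artificial neurons aim to reproduce is the leaky integrate-and-fire model. The Python sketch below is a textbook-style toy with illustrative parameters; it does not describe Hersam’s devices or any particular hardware.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: a standard abstraction of the
# spiking behavior that neuromorphic hardware tries to reproduce.
# All parameters below are illustrative, not taken from any specific device.

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the membrane-voltage trace and spike times for a current trace."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential while being driven by the input.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_thresh:                # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset                  # reset after firing
        voltages.append(v)
    return np.array(voltages), spikes

# Constant drive for 200 ms produces a regular spike train.
_, spike_times = simulate_lif(np.full(200, 80.0))
print(f"{len(spike_times)} spikes, first at {spike_times[0]*1e3:.0f} ms")
```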

The challenge in both cases is to identify the best technologies for the task, work that Hersam continues at Northwestern University in Evanston, Illinois. In the Nature Index, which tracks articles in 82 selected natural science journals, Northwestern University is ranked second in the United States for nanotechnology-related output, after the Massachusetts Institute of Technology in Cambridge.

The first hints of a major shift in computing emerged around 2012, as Moore’s Law began to run out of steam and developers of deep learning – in which systems improve their performance based on past experience – realized that the general-purpose central processing units (CPUs) used in conventional computers could not meet their needs.

Towards faster processing

The strength of processors is their versatility, says Wilfried Haensch, who led a group developing computer-memory concepts at the IBM Watson Research Center in Yorktown Heights, New York, until his retirement in 2020. “Whatever program you come up with, the processor can run it,” says Haensch. “Whether it can execute it effectively is another story.”

In search of better processors for deep learning, IBM developers turned to graphics processing units (GPUs), which were designed to perform the advanced mathematical calculations used for high-speed three-dimensional imaging in computer games. IBM found that GPUs can run deep-learning algorithms much more efficiently than CPUs, so the team hard-wired chips to run particular processes.

“In other machines, you load data and instructions, but in dataflow machines, some instructions are hardwired into the processor, so you don’t have to load instructions,” Haensch explains. This marked a departure from the conventional von Neumann model: because data flowed through the hard-wired processor, it was as if the operations were being performed in memory. The approach also suited deep learning, because around 80% of its operations use the same advanced mathematics as image processing.
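
That shared mathematics is, at its core, dense matrix multiplication: the same operation that transforms vertices and pixels in 3D graphics also dominates a neural network’s forward pass. The minimal NumPy sketch below makes the point; the shapes and values are illustrative only.

```python
import numpy as np

# The operation GPUs were built to accelerate for graphics -- dense matrix
# multiplication -- is also the workhorse of a neural network's forward pass.
# Shapes and values here are illustrative only.

rng = np.random.default_rng(42)
batch = rng.standard_normal((32, 128))    # 32 inputs, 128 features each
weights = rng.standard_normal((128, 64))  # fully connected layer weights
bias = np.zeros(64)

# One layer's forward pass: a single large matrix multiply plus a nonlinearity.
activations = np.maximum(batch @ weights + bias, 0.0)   # ReLU
print(activations.shape)   # (32, 64)
```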

Further tweaking of current materials offers only a short-term solution, says Haensch. There are many new ideas, new devices and new nanostructures, he says, but none is ready to replace CMOS. And there’s no guarantee as to whether, or when, they’ll be ready to deliver the transformation the industry needs.

Graphs showing overall world output in nanoscience and output for leading nations

Source: Nature Index

Among the most popular classes of devices in development are memristors, which combine memory with electrical resistance. Memristors resemble standard electrical resistors, but applying an electrical input can change their resistance, and that change persists, thereby changing what is stored in memory. With three layers – two terminals that connect to other devices, separated by a storage layer – their structure allows them both to store data and to process information. The concept was proposed in 1971, but it wasn’t until 2007 that R. Stanley Williams, a research scientist at Hewlett-Packard Labs in Palo Alto, California, fabricated the first thin-film semiconductor memristor that could be used in a circuit.

Memristors can be fabricated at the nanometer scale and can switch in less than a nanosecond. They have “great potential for developing future computing systems beyond the eras of von Neumann and Moore’s Law,” Wei Lu and his group at the University of Michigan in Ann Arbor wrote in a 2018 review of memristor technology (M. A. Zidan et al. Nature Electron. 1, 22–29; 2018). But building a single system that combines all the desired properties will not be easy.
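
Part of memristors’ appeal for non-von Neumann computing is that a crossbar of them can perform a matrix-vector multiplication in place: by Ohm’s law, each input voltage is multiplied by a stored conductance, and by Kirchhoff’s current law the products are summed along each column line. The sketch below is an idealized numerical illustration with made-up values, ignoring real-device non-idealities such as wire resistance, noise and limited precision.

```python
import numpy as np

# Idealized memristor crossbar: conductances G (in siemens) store a matrix,
# input voltages V drive the rows, and the current collected on each column
# is I = G^T V -- a matrix-vector product computed by Ohm's and Kirchhoff's
# laws rather than by fetched instructions. All values are illustrative.

G = np.array([[1.0e-6, 5.0e-6],     # row 0 conductances (column 0, column 1)
              [2.0e-6, 1.0e-6],
              [4.0e-6, 3.0e-6]])    # 3 input rows x 2 output columns

V = np.array([0.2, 0.5, 0.1])       # input voltages on the three rows (volts)

# Current flowing into each column line (amperes): sum_i G[i, j] * V[i]
I = G.T @ V
print(I)   # [1.6e-06 1.8e-06] -- the analog dot products
```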

Next-generation materials

Researchers are turning to new classes of materials to meet these computing needs. Hersam and his colleague Vinod K. Sangwan, a materials science and engineering researcher at Northwestern University, have compiled a long list of potential neuromorphic electronic materials that includes zero-dimensional materials (quantum dots), one-dimensional materials (such as carbon nanotubes), two-dimensional materials (such as graphene) and van der Waals heterostructures (several two-dimensional layers of material that adhere to one another) (V. K. Sangwan and M. C. Hersam Nature Nanotechnol. 15, 517–528; 2020).

One-dimensional carbon nanotubes, for example, have attracted attention for their use in neuromorphic systems because they resemble the tubular axons through which nerve cells transmit electrical signals in biological systems.

Opinions are divided on how these materials will factor into the future of computing. Abu Sebastian, the Zurich-based technical lead of the IBM Research AI Hardware Center in Albany, New York, is focused on near-term gains and sees opportunities to push both digital and neuromorphic computing further.

“Companies like Mythic [an artificial intelligence company based in Austin, Texas] are very close to commercialization,” he says. On the research side, Lu says there is still much to discover. The complex calculations adapted from imaging need to be made “more accurate and precise” for neuromorphic computing to take full advantage of them, he says. Haensch adds that, so far, no material is ready for viable commercial production.

Intel and IBM, which is the leading corporate institution for nanoscience- and nanotechnology-related output in the Nature Index, have large groups working on non-von Neumann computing. Hewlett-Packard and the Paris-based artificial intelligence company LightOn are among several companies focusing on near-term applications.

Sherry J. Basler