Neuromorphic chips – using the brain as a model
Munich, 14 December 2022
Neuromorphic computer architecture, which is inspired by connectivity in the human brain, promises to be very powerful yet energy efficient. Since current computer architecture will soon reach the physical limits of performance optimisation, the semiconductor industry regards neuromorphic chips as the future of computer technology. But how far advanced is the development of neuromorphic IT? Will neuromorphic chips open up a whole new level of artificial intelligence (AI) and digitalisation? These questions were discussed with experts from science and industry at the “acatech am Dienstag” event on 22 November.
Neuromorphic chips are a fascinating example of how we can learn from nature, said Reinhard Ploss, Co-Chair of Plattform Lernende Systeme, in his welcome address. It is amazing, he continued, in the context of artificial intelligence in particular, how little energy the human brain expends to perform at a high level.
acatech member Artur Zrenner (Paderborn University, spokesperson for the acatech Topic network Nano and quantum technologies) emceed the event. He began his talk with an introduction to the challenges posed by current processor architecture. Today’s computers are still based on the von Neumann architecture, which emerged in the mid-20th century as a practical blueprint for computer systems. In this architecture, the storage and processing of data are separated by a bus system – a physical bottleneck over which instructions must be fetched and data transferred. Communication across this bus takes time and energy. It restricts computer performance, which, although it has increased steadily in recent decades in line with Moore’s Law, is now pushing up against physical limits as feature sizes approach the atomic scale. While the von Neumann architecture has many advantages – such as high reliability and precision – it also has its drawbacks, explained Artur Zrenner. The von Neumann bottleneck, for example, caps computer performance. For AI applications in particular, powerful neural networks have so far only been realised as highly energy-intensive emulations on conventional hardware. Yet this level of performance is exactly what artificial neural networks need for image and speech recognition, translation and medical diagnostics. The demand for more energy-efficient, more powerful hardware – which neuromorphic chips could provide – is therefore growing, said Artur Zrenner in conclusion.
Martin Ziegler (TU Ilmenau) vividly illustrated the difference in energy efficiency between conventional and neuromorphic systems by drawing a comparison with the human brain. Despite its capabilities, the human brain runs on just 20 to 25 watts. A computer of comparable power, however, would consume around a megawatt – many times the brain’s power draw – and would fill an entire warehouse. The difference in energy consumption is especially stark in pattern recognition. In 2016, Google DeepMind’s AlphaGo beat a human champion at the strategy board game Go; however, the energy and space consumed by the computer were vastly greater than those of the human it defeated. According to Martin Ziegler, this example shows that current CPU architecture is not very energy efficient and that a new hardware model is needed. In the von Neumann architecture, the storage and processing of data are separate. In the learning human brain, by contrast, neurons connect locally and decentrally to form complex networks; that is, data storage and processing take place in the same location. The development of such neuromorphic IT systems is under way: the first hybrid IT systems are expected around 2025, and fully neuromorphic hardware no earlier than 2030, commented Martin Ziegler in his talk.
Speaker Heike Riel (IBM Research) focused on computers for artificial intelligence and neuromorphic computing. The more artificial intelligence is used, she said, the more data is generated. However, new and more powerful computers are needed to exploit these data volumes and the applications built on them. According to Heike Riel, IBM’s answer is “approximate computing”, a route to systems that are powerful yet energy efficient. Approximate computing performs AI computations at reduced precision while barely affecting model accuracy. With this reduction – for example, from 32-bit down to 4-bit arithmetic – each halving of the word size quadruples the energy efficiency, even though such systems are still based on the von Neumann architecture. Further improvements, said Heike Riel, can be achieved by analogue in-memory computing, which uses new electronic and photonic components. In her conclusion, Heike Riel painted a picture of the future of computing as a combination of established and novel technologies: bits (hybrid cloud), neurons (AI) and qubits (quantum computing).
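Heike Riel’s precision argument can be made concrete with a short sketch. The following toy example (a hypothetical illustration, not IBM’s implementation) applies uniform symmetric quantization to random 32-bit weights and measures how little accuracy is lost at 8 and 4 bits:

```python
import numpy as np

def quantize(x, bits):
    """Quantize x to the given bit width (uniform, symmetric), then dequantize."""
    levels = 2 ** (bits - 1) - 1              # e.g. 7 positive levels for 4-bit
    scale = np.max(np.abs(x)) / levels        # map the largest value to the top level
    q = np.round(x / scale).astype(np.int8)   # integer codes in [-levels, levels]
    return q * scale                          # back to floats for comparison

rng = np.random.default_rng(0)
weights = rng.standard_normal(10_000).astype(np.float32)

for bits in (8, 4):
    err = np.abs(quantize(weights, bits) - weights).mean()
    print(f"{bits}-bit mean absolute error: {err:.4f}")
```

Even at 4 bits the mean error per weight stays small relative to the weights’ own scale, which is why reduced-precision arithmetic can preserve model accuracy while each halving of the word size shrinks the arithmetic hardware and its energy cost.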
acatech member Klaus Mainzer (Technical University of Munich) reminded the gathering that a theoretical answer to the problems of the von Neumann architecture had been anticipated by Leon Chua (UC Berkeley) as far back as 1971, in the form of the “memristor” (memory resistor). Enhancing memristors with optical technologies would cut their response times from milliseconds to nanoseconds. Klaus Mainzer also suggested integrating conventional, neuromorphic and quantum computing into hybrid systems in order to solve the problems of an ever more complex civilisation in an energy-efficient way.
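Chua’s memristor can be illustrated with a toy simulation. The sketch below uses the widely cited linear-drift model (all parameter values are illustrative, not those of any real device) and traces the pinched hysteresis loop that distinguishes a memristor from an ordinary resistor:

```python
import numpy as np

# Illustrative parameters for the linear-drift memristor model
# (demonstration values only, not a specific device).
R_ON, R_OFF = 100.0, 16_000.0    # low / high resistance states (ohms)
D = 10e-9                        # device thickness (m)
MU = 1e-14                       # dopant mobility (m^2 / (V*s))

def simulate(v, dt):
    """Euler-integrate the state w (doped-region width) under drive voltage v(t)."""
    w = 0.5 * D
    current = np.empty_like(v)
    for k, vk in enumerate(v):
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance depends on state
        i = vk / m
        current[k] = i
        w += MU * (R_ON / D) * i * dt              # linear drift of the state
        w = min(max(w, 0.0), D)                    # hard state bounds
    return current

t = np.arange(0.0, 2.0, 1e-4)
v = 1.2 * np.sin(2 * np.pi * 1.0 * t)              # 1 Hz sinusoidal drive
i = simulate(v, 1e-4)
# Plotting i against v would trace the pinched hysteresis loop characteristic
# of a memristor: the current is zero whenever the voltage is zero, yet the
# resistance "remembers" the charge that has flowed through the device.
```

Because the resistance depends on the history of the current, the device both stores and processes information at the same location, which is precisely the property neuromorphic hardware exploits.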
Reinhard Ploss commented on the developments in neuromorphic chips and cautioned that their practical application is still a long way off. The main reason is that the (computational) cells have to be fabricated at the lower end of the nanoscale, which introduces imprecision for which software engineers have yet to find a solution. An attractive option would therefore be to exploit the capabilities of neuromorphic chips in larger structures first, before pressing ahead with miniaturisation. This, said Reinhard Ploss, would require significant architectural developments before the applications find practical use. Nevertheless, thanks to their performance and energy efficiency, neuromorphic chips hold enormous potential for “edge AI” applications such as smart cities, smart mobility and medical diagnostics.