Description: |
Machine learning, particularly in the form of deep learning (DL), has driven most of the recent fundamental developments in artificial intelligence (AI). DL is based on computational models that are, to a certain extent, bio-inspired, as they rely on networks of connected simple computing units operating in parallel. The success of DL is supported by three factors: the availability of vast amounts of data, continuous growth in computing power, and algorithmic innovations. The approaching demise of Moore's law, and the consequently modest improvements in computing power expected from further scaling, raise the question of whether progress in AI will slow or halt because of hardware limitations. This article reviews the case for a novel beyond-complementary metal–oxide–semiconductor (CMOS) technology, memristors, as a potential solution for implementing power-efficient in-memory computing, DL accelerators, and spiking neural networks. Central themes are the reliance on non-von-Neumann computing architectures and the need to develop tailored learning and inference algorithms. To argue that lessons from biology can be useful in providing directions for further progress in AI, an example based on reservoir computing is briefly discussed. The article concludes with speculation on the “big picture” view of future neuromorphic and brain-inspired computing systems.