Showing 1 - 10 of 40 for search: '"Gallo, Manuel Le"'
Author:
Mehonic, Adnan, Ielmini, Daniele, Roy, Kaushik, Mutlu, Onur, Kvatinsky, Shahar, Serrano-Gotarredona, Teresa, Linares-Barranco, Bernabe, Spiga, Sabina, Savelev, Sergey, Balanov, Alexander G, Chawla, Nitin, Desoli, Giuseppe, Malavena, Gerardo, Compagnoni, Christian Monzio, Wang, Zhongrui, Yang, J Joshua, Syed, Ghazi Sarwat, Sebastian, Abu, Mikolajick, Thomas, Noheda, Beatriz, Slesazeck, Stefan, Dieny, Bernard, Hou, Tuo-Hung, Varri, Akhil, Bruckerhoff-Pluckelmann, Frank, Pernice, Wolfram, Zhang, Xixiang, Pazos, Sebastian, Lanza, Mario, Wiefels, Stefan, Dittmann, Regina, Ng, Wing H, Buckwell, Mark, Cox, Horatio RJ, Mannion, Daniel J, Kenyon, Anthony J, Lu, Yingming, Yang, Yuchao, Querlioz, Damien, Hutin, Louis, Vianello, Elisa, Chowdhury, Sayeed Shafayet, Mannocci, Piergiulio, Cai, Yimao, Sun, Zhong, Pedretti, Giacomo, Strachan, John Paul, Strukov, Dmitri, Gallo, Manuel Le, Ambrogio, Stefano, Valov, Ilia, Waser, Rainer
The roadmap is organized into several thematic sections, outlining current computing challenges, discussing the neuromorphic computing approach, analyzing mature and currently utilized technologies, providing an overview of emerging technologies, add…
External link:
http://arxiv.org/abs/2407.02353
Author:
Momeni, Ali, Rahmani, Babak, Scellier, Benjamin, Wright, Logan G., McMahon, Peter L., Wanjura, Clara C., Li, Yuhang, Skalli, Anas, Berloff, Natalia G., Onodera, Tatsuhiro, Oguz, Ilker, Morichetti, Francesco, del Hougne, Philipp, Gallo, Manuel Le, Sebastian, Abu, Mirhoseini, Azalia, Zhang, Cheng, Marković, Danijela, Brunner, Daniel, Moser, Christophe, Gigan, Sylvain, Marquardt, Florian, Ozcan, Aydogan, Grollier, Julie, Liu, Andrea J., Psaltis, Demetri, Alù, Andrea, Fleury, Romain
Physical neural networks (PNNs) are a class of neural-like networks that leverage the properties of physical systems to perform computation. While PNNs are so far a niche research area with small-scale laboratory demonstrations, they are arguably one…
External link:
http://arxiv.org/abs/2406.03372
A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog In-Memory Computing
Author:
Ferro, Elena, Vasilopoulos, Athanasios, Lammie, Corey, Gallo, Manuel Le, Benini, Luca, Boybat, Irem, Sebastian, Abu
Analog In-Memory Computing (AIMC) is an emerging technology for fast and energy-efficient Deep Learning (DL) inference. However, a certain amount of digital post-processing is required to deal with circuit mismatches and non-idealities associated wit…
External link:
http://arxiv.org/abs/2402.07549
Author:
Lammie, Corey, Vasilopoulos, Athanasios, Büchel, Julian, Camposampiero, Giacomo, Gallo, Manuel Le, Rasch, Malte, Sebastian, Abu
Analog-Based In-Memory Computing (AIMC) inference accelerators can be used to efficiently execute Deep Neural Network (DNN) inference workloads. However, to mitigate accuracy losses, due to circuit and device non-idealities, Hardware-Aware (HWA) trai…
External link:
http://arxiv.org/abs/2401.09859
Author:
Gallo, Manuel Le, Lammie, Corey, Buechel, Julian, Carta, Fabio, Fagbohungbe, Omobayode, Mackin, Charles, Tsai, Hsinyu, Narayanan, Vijay, Sebastian, Abu, Maghraoui, Kaoutar El, Rasch, Malte J.
Published in:
APL Machine Learning (2023) 1 (4): 041102
Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics, and the non-ideal peripheral circuit…
External link:
http://arxiv.org/abs/2307.09357
Author:
Büchel, Julian, Vasilopoulos, Athanasios, Kersting, Benedikt, Odermatt, Frederic, Brew, Kevin, Ok, Injo, Choi, Sam, Saraf, Iqbal, Chan, Victor, Philip, Timothy, Saulnier, Nicole, Narayanan, Vijay, Gallo, Manuel Le, Sebastian, Abu
Published in:
2022 International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2022, pp. 33.1.1-33.1.4
The precise programming of crossbar arrays of unit-cells is crucial for obtaining high matrix-vector-multiplication (MVM) accuracy in analog in-memory computing (AIMC) cores. We propose a radically different approach based on directly minimizing the…
External link:
http://arxiv.org/abs/2305.16647
Author:
Benmeziane, Hadjer, Lammie, Corey, Boybat, Irem, Rasch, Malte, Gallo, Manuel Le, Tsai, Hsinyu, Muralidhar, Ramachandran, Niar, Smail, Ouarnoughi, Hamza, Narayanan, Vijay, Sebastian, Abu, Maghraoui, Kaoutar El
The advancement of Deep Learning (DL) is driven by efficient Deep Neural Network (DNN) design and new hardware accelerators. Current DNN design is primarily tailored for general-purpose use and deployment on commercially viable platforms. Inference a…
External link:
http://arxiv.org/abs/2305.10459
Author:
Rasch, Malte J., Mackin, Charles, Gallo, Manuel Le, Chen, An, Fasoli, Andrea, Odermatt, Frederic, Li, Ning, Nandakumar, S. R., Narayanan, Pritish, Tsai, Hsinyu, Burr, Geoffrey W., Sebastian, Abu, Narayanan, Vijay
Analog in-memory computing (AIMC) -- a promising approach for energy-efficient acceleration of deep learning workloads -- computes matrix-vector multiplications (MVMs) but only approximately, due to nonidealities that often are non-deterministic or n…
External link:
http://arxiv.org/abs/2302.08469
Author:
Gallo, Manuel Le, Khaddam-Aljameh, Riduan, Stanisavljevic, Milos, Vasilopoulos, Athanasios, Kersting, Benedikt, Dazzi, Martino, Karunaratne, Geethan, Braendli, Matthias, Singh, Abhairaj, Mueller, Silvia M., Buechel, Julian, Timoneda, Xavier, Joshi, Vinay, Egger, Urs, Garofalo, Angelo, Petropoulos, Anastasios, Antonakopoulos, Theodore, Brew, Kevin, Choi, Samuel, Ok, Injo, Philip, Timothy, Chan, Victor, Silvestre, Claire, Ahsan, Ishtiaq, Saulnier, Nicole, Narayanan, Vijay, Francese, Pier Andrea, Eleftheriou, Evangelos, Sebastian, Abu
Published in:
Nature Electronics 6, 680-693 (2023)
The need to repeatedly shuttle around synaptic weight values from memory to processing units has been a key source of energy inefficiency associated with hardware implementation of artificial neural networks. Analog in-memory computing (AIMC) with sp…
External link:
http://arxiv.org/abs/2212.02872
Author:
Zhou, Chuteng, Redondo, Fernando Garcia, Büchel, Julian, Boybat, Irem, Comas, Xavier Timoneda, Nandakumar, S. R., Das, Shidhartha, Sebastian, Abu, Gallo, Manuel Le, Whatmough, Paul N.
Always-on TinyML perception tasks in IoT applications require very high energy efficiency. Analog compute-in-memory (CiM) using non-volatile memory (NVM) promises high efficiency and also provides self-contained on-chip model storage. However, analog…
External link:
http://arxiv.org/abs/2111.06503