Showing 1 - 10 of 415
for search: '"Sebastian, Abu"'
Author:
Mehonic, Adnan, Ielmini, Daniele, Roy, Kaushik, Mutlu, Onur, Kvatinsky, Shahar, Serrano-Gotarredona, Teresa, Linares-Barranco, Bernabe, Spiga, Sabina, Savelev, Sergey, Balanov, Alexander G, Chawla, Nitin, Desoli, Giuseppe, Malavena, Gerardo, Compagnoni, Christian Monzio, Wang, Zhongrui, Yang, J Joshua, Syed, Ghazi Sarwat, Sebastian, Abu, Mikolajick, Thomas, Noheda, Beatriz, Slesazeck, Stefan, Dieny, Bernard, Tuo-Hung, Hou, Varri, Akhil, Bruckerhoff-Pluckelmann, Frank, Pernice, Wolfram, Zhang, Xixiang, Pazos, Sebastian, Lanza, Mario, Wiefels, Stefan, Dittmann, Regina, Ng, Wing H, Buckwell, Mark, Cox, Horatio RJ, Mannion, Daniel J, Kenyon, Anthony J, Lu, Yingming, Yang, Yuchao, Querlioz, Damien, Hutin, Louis, Vianello, Elisa, Chowdhury, Sayeed Shafayet, Mannocci, Piergiulio, Cai, Yimao, Sun, Zhong, Pedretti, Giacomo, Strachan, John Paul, Strukov, Dmitri, Gallo, Manuel Le, Ambrogio, Stefano, Valov, Ilia, Waser, Rainer
The roadmap is organized into several thematic sections, outlining current computing challenges, discussing the neuromorphic computing approach, analyzing mature and currently utilized technologies, and providing an overview of emerging technologies…
External link:
http://arxiv.org/abs/2407.02353
Author:
Camposampiero, Giacomo, Hersche, Michael, Terzić, Aleksandar, Wattenhofer, Roger, Sebastian, Abu, Rahimi, Abbas
We introduce the Abductive Rule Learner with Context-awareness (ARLC), a model that solves abstract reasoning tasks based on Learn-VRF. ARLC features a novel and more broadly applicable training objective for abductive reasoning, resulting in better…
External link:
http://arxiv.org/abs/2406.19121
Author:
Momeni, Ali, Rahmani, Babak, Scellier, Benjamin, Wright, Logan G., McMahon, Peter L., Wanjura, Clara C., Li, Yuhang, Skalli, Anas, Berloff, Natalia G., Onodera, Tatsuhiro, Oguz, Ilker, Morichetti, Francesco, del Hougne, Philipp, Gallo, Manuel Le, Sebastian, Abu, Mirhoseini, Azalia, Zhang, Cheng, Marković, Danijela, Brunner, Daniel, Moser, Christophe, Gigan, Sylvain, Marquardt, Florian, Ozcan, Aydogan, Grollier, Julie, Liu, Andrea J., Psaltis, Demetri, Alù, Andrea, Fleury, Romain
Physical neural networks (PNNs) are a class of neural-like networks that leverage the properties of physical systems to perform computation. While PNNs are so far a niche research area with small-scale laboratory demonstrations, they are arguably one…
External link:
http://arxiv.org/abs/2406.03372
A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog In-Memory Computing
Author:
Ferro, Elena, Vasilopoulos, Athanasios, Lammie, Corey, Gallo, Manuel Le, Benini, Luca, Boybat, Irem, Sebastian, Abu
Analog In-Memory Computing (AIMC) is an emerging technology for fast and energy-efficient Deep Learning (DL) inference. However, a certain amount of digital post-processing is required to deal with circuit mismatches and non-idealities associated with…
External link:
http://arxiv.org/abs/2402.07549
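The fixed-point post-processing described in this entry can be sketched as follows. This is an illustrative NumPy-only sketch, not the paper's design: the function name, bit-widths, and test values are all assumptions.

```python
import numpy as np

# Hypothetical sketch of digital post-processing for AIMC outputs:
# round analog crossbar results onto a signed fixed-point grid so the
# near-memory unit can work with a fixed, precision-optimized word size.
def to_fixed_point(x, frac_bits=8, total_bits=16):
    """Round x to a two's-complement fixed-point grid with frac_bits fractional bits."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1))
    hi = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale

analog_out = np.array([0.12345, -1.5002, 3.9999])  # assumed sample values
digital_out = to_fixed_point(analog_out, frac_bits=8)
print(digital_out)  # values snapped to multiples of 2**-8
```

Choosing `frac_bits` trades quantization error against word length, which is the kind of precision/cost trade-off the paper's title refers to.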
Author:
Ruffino, Samuele, Karunaratne, Geethan, Hersche, Michael, Benini, Luca, Sebastian, Abu, Rahimi, Abbas
Classification based on Zero-shot Learning (ZSL) is the ability of a model to classify inputs into novel classes on which the model has not previously seen any training examples. Providing an auxiliary descriptor in the form of a set of attributes…
External link:
http://arxiv.org/abs/2401.16876
Abstract reasoning is a cornerstone of human intelligence, and replicating it with artificial intelligence (AI) presents an ongoing challenge. This study focuses on efficiently solving Raven's progressive matrices (RPM), a visual test for assessing abstract reasoning…
External link:
http://arxiv.org/abs/2401.16024
Author:
Lammie, Corey, Vasilopoulos, Athanasios, Büchel, Julian, Camposampiero, Giacomo, Gallo, Manuel Le, Rasch, Malte, Sebastian, Abu
Analog-Based In-Memory Computing (AIMC) inference accelerators can be used to efficiently execute Deep Neural Network (DNN) inference workloads. However, to mitigate accuracy losses due to circuit and device non-idealities, Hardware-Aware (HWA) training…
External link:
http://arxiv.org/abs/2401.09859
Author:
Terzic, Aleksandar, Hersche, Michael, Karunaratne, Geethan, Benini, Luca, Sebastian, Abu, Rahimi, Abbas
MEGA is a recent transformer-based architecture, which utilizes a linear recurrent operator whose parallel computation, based on the FFT, scales as $O(L \log L)$, with $L$ being the sequence length. We build upon their approach by replacing the linear recurrent operator…
External link:
http://arxiv.org/abs/2312.05605
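The $O(L \log L)$ FFT scaling mentioned in this abstract comes from the fact that a linear recurrence unrolls into a convolution, which the FFT evaluates in parallel. A minimal sketch (generic NumPy, not MEGA's actual code; the scalar recurrence and values are assumptions):

```python
import numpy as np

# A linear recurrence y[t] = a*y[t-1] + x[t] unrolls into a causal
# convolution with the geometric kernel k[t] = a**t. FFT-based
# convolution evaluates all L outputs at once in O(L log L).
L = 8
a = 0.9
x = np.arange(1.0, L + 1)

# Sequential O(L) recurrence, for reference.
y_seq = np.zeros(L)
y_seq[0] = x[0]
for t in range(1, L):
    y_seq[t] = a * y_seq[t - 1] + x[t]

# Parallel form: linear convolution via zero-padded FFT.
k = a ** np.arange(L)
n = 2 * L  # zero-pad so circular convolution equals linear convolution
y_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)[:L]

print(np.allclose(y_seq, y_fft))  # both computations agree
```

The zero-padding to length `2 * L` is what prevents the circular wrap-around of the FFT from corrupting the causal result.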
Author:
Menet, Nicolas, Hersche, Michael, Karunaratne, Geethan, Benini, Luca, Sebastian, Abu, Rahimi, Abbas
With the advent of deep learning, progressively larger neural networks have been designed to solve complex tasks. We take advantage of these capacity-rich models to lower the cost of inference by exploiting computation in superposition. To reduce the…
External link:
http://arxiv.org/abs/2312.02829
Author:
Gallo, Manuel Le, Lammie, Corey, Buechel, Julian, Carta, Fabio, Fagbohungbe, Omobayode, Mackin, Charles, Tsai, Hsinyu, Narayanan, Vijay, Sebastian, Abu, Maghraoui, Kaoutar El, Rasch, Malte J.
Published in:
APL Machine Learning (2023) 1 (4): 041102
Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics, and the non-ideal peripheral circuits…
External link:
http://arxiv.org/abs/2307.09357
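The noisy device characteristics this abstract refers to are typically studied in simulation by perturbing the programmed weights before the matrix-vector product. A minimal sketch (generic NumPy, not any specific toolkit's API; the noise scale and shapes are assumed values):

```python
import numpy as np

# Model AIMC programming noise: each stored weight deviates from its
# ideal digital value by additive Gaussian noise before the analog
# matrix-vector multiply.
rng = np.random.default_rng(0)

W = rng.standard_normal((4, 16))      # ideal (digital) weight matrix
x = rng.standard_normal(16)           # input activations

noise_std = 0.05 * np.max(np.abs(W))  # assumed noise scale
W_analog = W + rng.normal(0.0, noise_std, W.shape)

y_ideal = W @ x
y_noisy = W_analog @ x
print(np.max(np.abs(y_noisy - y_ideal)))  # deviation from the ideal output
```

Evaluating a trained network under such perturbations is how accuracy losses from device non-idealities are usually estimated before deploying to analog hardware.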