Showing 1 - 10 of 36 results for the search: '"Andrea Fasoli"'
Author:
Malte J. Rasch, Charles Mackin, Manuel Le Gallo, An Chen, Andrea Fasoli, Frédéric Odermatt, Ning Li, S. R. Nandakumar, Pritish Narayanan, Hsinyu Tsai, Geoffrey W. Burr, Abu Sebastian, Vijay Narayanan
Published in:
Nature Communications, Vol 14, Iss 1, Pp 1-18 (2023)
Abstract: Analog in-memory computing—a promising approach for energy-efficient acceleration of deep learning workloads—computes matrix-vector multiplications only approximately, due to nonidealities that are often non-deterministic or nonlinear…
External link:
https://doaj.org/article/35c750944f924737bca0b86c136a18ab
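The approximate matrix-vector multiply described in the abstract above can be illustrated with a toy sketch. Everything here is an assumption for illustration only (the function name, the additive-Gaussian noise model, and the `sigma` parameter are not taken from the paper, which deals with real device nonidealities):

```python
import random

random.seed(0)

def analog_mvm(W, x, sigma=0.05):
    """Toy analog matrix-vector multiply: the ideal dot product per
    output row, plus additive Gaussian noise standing in for
    non-deterministic device nonidealities (an illustrative
    assumption, not a real device model)."""
    out = []
    for row in W:
        ideal = sum(w * xi for w, xi in zip(row, x))
        out.append(ideal + random.gauss(0.0, sigma))
    return out

W = [[0.2, -0.5, 0.1],
     [0.7, 0.0, -0.3]]
x = [1.0, 2.0, -1.0]
noisy = analog_mvm(W, x)   # close to, but not exactly, the ideal [-0.9, 1.0]
```

Each call returns a slightly different result, which is the point: algorithms for analog in-memory computing must tolerate this kind of run-to-run variation.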
Author:
Charles Mackin, Malte J. Rasch, An Chen, Jonathan Timcheck, Robert L. Bruce, Ning Li, Pritish Narayanan, Stefano Ambrogio, Manuel Le Gallo, S. R. Nandakumar, Andrea Fasoli, Jose Luquin, Alexander Friz, Abu Sebastian, Hsinyu Tsai, Geoffrey W. Burr
Published in:
Nature Communications, Vol 13, Iss 1, Pp 1-12 (2022)
Device-level complexity is a major obstacle to the hardware realization of analogue memory-based deep neural networks. Mackin et al. report a generalized computational framework that translates software-trained weights into analogue hardware weights…
External link:
https://doaj.org/article/6ea0ca917cd64bd384362f7d3c473ddb
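The weight-translation idea in the entry above can be sketched in miniature. A common differential scheme is shown here purely as an illustration (the function, parameter names, and conductance range are assumptions, not the paper's framework): a signed software weight is encoded as a pair of non-negative conductances whose difference represents the weight.

```python
def weight_to_conductances(w, w_max=1.0, g_max=25.0):
    """Map a signed weight in [-w_max, w_max] to a (G+, G-) pair of
    non-negative conductances so that the effective hardware weight
    is proportional to G+ - G-.  Weights outside the range are
    clipped before scaling."""
    g = min(abs(w), w_max) / w_max * g_max   # clip, then scale to g_max
    return (g, 0.0) if w >= 0 else (0.0, g)

gp, gm = weight_to_conductances(-0.4)   # → (0.0, 10.0)
```

Real translation frameworks must also account for programming errors, drift, and per-device transfer curves, which is exactly the complexity the paper addresses.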
Author:
Katie Spoon, Hsinyu Tsai, An Chen, Malte J. Rasch, Stefano Ambrogio, Charles Mackin, Andrea Fasoli, Alexander M. Friz, Pritish Narayanan, Milos Stanisavljevic, Geoffrey W. Burr
Published in:
Frontiers in Computational Neuroscience, Vol 15 (2021)
Recent advances in deep learning have been driven by ever-increasing model sizes, with networks growing to millions or even billions of parameters. Such enormous models call for fast and energy-efficient hardware accelerators. We study the potential…
External link:
https://doaj.org/article/0ea5e2126e9644a1892bfada90cc4cb8
Author:
Geoffrey W. Burr, Jose Luquin, Pritish Narayanan, Stefano Ambrogio, Kohji Hosokawa, Masatoshi Ishii, Charles Mackin, Hsinyu Tsai, Akiyo Nomura, Takeo Yasuda, Alexander Friz, Yasuteru Kohda, An Chen, Andrea Fasoli, Atsuya Okazaki
Published in:
Proceedings of the Neuromorphic Materials, Devices, Circuits and Systems.
Author:
Atsuya Okazaki, Pritish Narayanan, Stefano Ambrogio, Kohji Hosokawa, Hsinyu Tsai, Akiyo Nomura, Takeo Yasuda, Charles Mackin, Alexander Friz, Masatoshi Ishii, Yasuteru Kohda, Katie Spoon, An Chen, Andrea Fasoli, Malte J. Rasch, Geoffrey W. Burr
Published in:
2022 IEEE International Symposium on Circuits and Systems (ISCAS).
Author:
Katie Spoon, Stefano Ambrogio, Pritish Narayanan, Hsinyu Tsai, Charles Mackin, An Chen, Andrea Fasoli, Alexander Friz, Geoffrey W. Burr
Published in:
Machine Learning and Non-volatile Memories (ISBN: 9783031038402)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::3aaf24a72829ef921281d43e78913aba
https://doi.org/10.1007/978-3-031-03841-9_3
Published in:
Journal of CO2 Utilization, Vol 85, Pp 102864 (2024)
The use of synthetic natural gas (SNG) as a plug-and-play fuel derived from renewables can help overcome the limitations imposed by the intermittency of renewable energy. One way to implement SNG production is through the co-electrolysis of CO…
External link:
https://doaj.org/article/d22bb9471cf74ece9aa8350b90448ac4
Author:
Abu Sebastian, Stefano Ambrogio, Malte J. Rasch, An Chen, S. R. Nandakumar, Andrea Fasoli, Jonathan Timcheck, Charles Mackin, Jose Luquin, Pritish Narayanan, Hsinyu Tsai, Robert L. Bruce, Alexander Friz, Geoffrey W. Burr, Manuel Le Gallo
Analogue memory-based Deep Neural Networks (DNNs) provide energy-efficiency and per-area throughput gains relative to state-of-the-art digital counterparts such as graphics processing units (GPUs). Recent advances focus largely on hardware-aware algorithms…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::3cc478e8fcea6d6dfc345071fdfad6b8
https://doi.org/10.21203/rs.3.rs-1028668/v1
Author:
Kailash Gopalakrishnan, Swagath Venkataramani, Wei Zhang, George Saon, Xiao Sun, Andrea Fasoli, Chia-Yu Chen, Xiaodong Cui, Mauricio J. Serrano, Zoltán Tüske, Naigang Wang, Brian Kingsbury
We investigate the impact of aggressive low-precision representations of weights and activations in two families of large LSTM-based architectures for Automatic Speech Recognition (ASR): hybrid Deep Bidirectional LSTM - Hidden Markov Models (DBLSTM-HMM)…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::692d7369d4cfd471cc31430dde91dc57
http://arxiv.org/abs/2108.12074
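The low-precision representations studied in this entry can be mimicked with a minimal uniform fake-quantizer. This is a sketch under an assumed symmetric per-tensor scaling scheme; the function name and rounding choice are illustrative assumptions, not the paper's actual quantization method:

```python
def fake_quantize(values, bits=4):
    """Round each value to a symmetric uniform grid with
    2**(bits-1) - 1 positive levels, then scale back to floats,
    emulating storage of weights/activations at low precision."""
    max_abs = max(abs(v) for v in values)
    if max_abs == 0.0:
        return list(values)
    levels = 2 ** (bits - 1) - 1      # e.g. 7 levels per sign for 4-bit
    scale = max_abs / levels
    return [round(v / scale) * scale for v in values]

q = fake_quantize([0.5, -0.25, 0.1], bits=4)
```

Lowering `bits` coarsens the grid, and the accuracy of a network run through such a quantizer degrades accordingly, which is the trade-off this kind of study measures.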
Author:
Pritish Narayanan, Katie Spoon, Charles Mackin, Geoffrey W. Burr, Andrea Fasoli, Stefano Ambrogio, An Chen, Hsinyu Tsai, Malte J. Rasch, Alexander Friz, Milos Stanisavljevic
Published in:
Frontiers in Computational Neuroscience, Vol 15 (2021)
Recent advances in deep learning have been driven by ever-increasing model sizes, with networks growing to millions or even billions of parameters. Such enormous models call for fast and energy-efficient hardware accelerators. We study the potential…