Showing 1 - 10 of 54 for search: '"Appuswamy, Rathinakumar"'
Author:
Bablani, Deepika, McKinstry, Jeffrey L., Esser, Steven K., Appuswamy, Rathinakumar, Modha, Dharmendra S.
For efficient neural network inference, it is desirable to achieve state-of-the-art accuracy with the simplest networks requiring the least computation, memory, and power. Quantizing networks to lower precision is a powerful technique for simplifying …
External link:
http://arxiv.org/abs/2301.13330
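The quantization idea summarized in this record can be illustrated with a minimal uniform-quantization sketch. This is not the paper's method, only a generic low-precision round-trip; the function names and the 4-bit choice are illustrative assumptions.

```python
import numpy as np

def quantize(w, bits=4):
    # Uniform symmetric quantization (illustrative sketch, not the paper's scheme).
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for signed 4-bit
    scale = np.abs(w).max() / qmax        # map the largest magnitude to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    # Recover a low-precision approximation of the original weights.
    return q.astype(np.float32) * scale

w = np.array([0.31, -0.52, 0.07, 0.98], dtype=np.float32)
q, s = quantize(w, bits=4)
w_hat = dequantize(q, s)                  # per-element error is at most scale / 2
```

Storing `q` (integers) plus one `scale` per tensor is what saves memory and compute relative to full-precision weights.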
Author:
Esser, Steven K., McKinstry, Jeffrey L., Bablani, Deepika, Appuswamy, Rathinakumar, Modha, Dharmendra S.
Deep networks run with low-precision operations at inference time offer power and space advantages over high-precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for …
External link:
http://arxiv.org/abs/1902.08153
Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference
Author:
McKinstry, Jeffrey L., Esser, Steven K., Appuswamy, Rathinakumar, Bablani, Deepika, Arthur, John V., Yildiz, Izzet B., Modha, Dharmendra S.
To realize the promise of ubiquitous embedded deep network inference, it is essential to seek limits of energy and area efficiency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with …
External link:
http://arxiv.org/abs/1809.04191
Author:
Appuswamy, Rathinakumar, Nayak, Tapan, Arthur, John, Esser, Steven, Merolla, Paul, McKinstry, Jeffrey, Melano, Timothy, Flickner, Myron, Modha, Dharmendra
We derive a relationship between network representation in energy-efficient neuromorphic architectures and block Toeplitz convolutional matrices. Inspired by this connection, we develop deep convolutional networks using a family of structured convolutional …
External link:
http://arxiv.org/abs/1606.02407
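The connection this record mentions, that 1-D convolution is multiplication by a Toeplitz matrix, can be checked directly. This is a generic textbook construction, not the paper's block-Toeplitz family; `conv_matrix` and the example kernel are illustrative.

```python
import numpy as np

def conv_matrix(k, n):
    # Toeplitz matrix T with T @ x == np.convolve(k, x) for any x of length n.
    m = len(k)
    T = np.zeros((n + m - 1, n))
    for i in range(n + m - 1):
        for j in range(n):
            if 0 <= i - j < m:        # each diagonal carries one kernel tap
                T[i, j] = k[i - j]
    return T

k = np.array([1.0, -2.0, 1.0])        # example kernel
x = np.array([3.0, 0.0, 1.0, 2.0])    # example signal
T = conv_matrix(k, len(x))
assert np.allclose(T @ x, np.convolve(k, x))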
Recent results show that deep neural networks achieve excellent performance even when, during training, weights are quantized and projected to a binary representation. Here, we show that this is just the tip of the iceberg: these same networks, during …
External link:
http://arxiv.org/abs/1606.01981
Author:
Esser, Steven K., Merolla, Paul A., Arthur, John V., Cassidy, Andrew S., Appuswamy, Rathinakumar, Andreopoulos, Alexander, Berg, David J., McKinstry, Jeffrey L., Melano, Timothy, Barch, Davis R., di Nolfo, Carmelo, Datta, Pallab, Amir, Arnon, Taba, Brian, Flickner, Myron D., Modha, Dharmendra S.
Published in:
PNAS 113 (2016) 11441-11446
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons …
External link:
http://arxiv.org/abs/1603.08270
We consider the scenario in which a set of sources generate messages in a network and a receiver node demands an arbitrary linear function of these messages. We formulate an algebraic test to determine whether an arbitrary network can compute linear functions …
External link:
http://arxiv.org/abs/1102.4825
We study the use of linear codes for network computing in single-receiver networks with various classes of target functions of the source messages. Such classes include reducible, injective, semi-injective, and linear target functions over finite fields …
External link:
http://arxiv.org/abs/1101.0085
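The core idea of using a linear code to compute a linear target function can be shown in a toy setting. This is a minimal sketch under simplifying assumptions (two sources, one relay edge, messages over GF(2)), not a construction from the paper; the function names are invented for illustration.

```python
# Two sources generate bits b1, b2; the receiver demands their sum over GF(2).
# A single relay edge suffices if the relay forwards the linear combination
# b1 XOR b2, instead of forwarding both raw messages separately.

def relay_encode(b1, b2):
    return b1 ^ b2          # linear combination over GF(2)

def receiver_decode(coded):
    return coded            # the demanded function arrives directly

# The receiver recovers b1 + b2 (mod 2) for every source configuration.
for b1 in (0, 1):
    for b2 in (0, 1):
        assert receiver_decode(relay_encode(b1, b2)) == (b1 + b2) % 2
```

One coded symbol crosses the relay edge instead of two raw messages, which is the rate advantage these papers study in general networks.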
The following "network computing" problem is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average …
External link:
http://arxiv.org/abs/0912.2820