Showing 1 - 10 of 20 for search: '"Matthew Mattina"'
Author:
Urmish Thakker, Chu Zhou, Matthew Mattina, Jesse Beu, Ganesh Dasika, Dibakar Gope, Igor Fedorov
Published in:
ACM Journal on Emerging Technologies in Computing Systems. 17:1-18
Micro-controllers (MCUs) make up most of the processors in the world, with widespread applicability from automobiles to medical devices. The Internet of Things promises to enable these resource-constrained MCUs with machine learning algorithms to provide …
Published in:
IEEE Computer Architecture Letters. 19:34-37
Convolutional neural network (CNN) inference on mobile devices demands efficient hardware acceleration of low-precision (INT8) general matrix multiplication (GEMM). The systolic array (SA) is a pipelined 2D array of processing elements (PEs), with …
Exploiting sparsity is a key technique in accelerating quantized convolutional neural network (CNN) inference on mobile devices. Prior sparse CNN accelerators largely exploit unstructured sparsity and achieve significant speedups. Due to the unbounded …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2f2ff0b4ab3429a3d5f9b00466588979
http://arxiv.org/abs/2107.07983
Published in:
ISIT
Neural networks have gained importance as the machine learning models that achieve state-of-the-art performance on large-scale image classification, object detection, and natural language processing tasks. In this paper, we consider noisy binary neural networks …
Author:
Paul N. Whatmough, Yury Krustalev, Dumidu Talagala, Patrick Hansen, Alexey Vilkin, Matthew Mattina, David Hanwell, James Imber
Published in:
ICPR
Convolutional neural networks (CNNs) are now widely deployed in a variety of computer vision (CV) systems. These systems typically include an image signal processor (ISP), even though the ISP is traditionally designed to produce images that look appealing …
In recent years, graph neural network (GNN)-based approaches have become a popular strategy for processing point cloud data, regularly achieving state-of-the-art performance on a variety of tasks. To date, the research community has primarily focused …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8cad82519ad56e3f9f906a28161056d8
Author:
Tushar Krishna, Ananda Samajdar, Yuhao Zhu, Jan Moritz Joseph, Matthew Mattina, Paul N. Whatmough
Published in:
ISPASS
The compute demand of deep learning workloads is well known and is a prime motivator for powerful parallel computing platforms such as GPUs or dedicated hardware accelerators. The massive inherent parallelism of these workloads enables us to extract …
Author:
Marko Stamenovic, Igor Fedorov, Matthew Mattina, Yiming Gan, Carl Jensen, Ari Mandell, Li-Chia Yang, Paul N. Whatmough
Published in:
INTERSPEECH
Modern speech enhancement algorithms achieve remarkable noise suppression by means of large recurrent neural networks (RNNs). However, large RNNs limit practical deployment in hearing aid hardware (HW) form factors, which are battery-powered and run …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::76d2994617790804e8a5a20860eff1d6
http://arxiv.org/abs/2005.11138
Sequence-model-based NLP applications can be large. Yet many applications that benefit from them run on small devices with very limited compute and storage capabilities, while still having run-time constraints. As a result, there is a need for a …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::bb28ec4bdde46a856f6513e7d93cc3ec
Author:
Zhi-Gang Liu, Matthew Mattina
Published in:
Computer Vision – ECCV 2020 ISBN: 9783030585280
ECCV (19)
Prior research has shown that the Winograd algorithm can reduce the computational complexity of convolutional neural networks (CNNs) with weights and activations represented in floating point. However, it is difficult to apply the scheme to the inference of …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::04c03a4e68080e5eec853c0e34078a23
https://doi.org/10.1007/978-3-030-58529-7_4