Showing 1 - 10 of 67 for search: '"Peter Milder"'
Published in:
Transactions on Cryptographic Hardware and Embedded Systems, Vol 2018, Iss 3 (2018)
This paper presents a hardware implementation of a Residue Polynomial Multiplier (RPM), designed to accelerate the full Residue Number System (RNS) variant of the Fan-Vercauteren scheme proposed by Bajard et al. [BEHZ16]. Our design speeds up polynomial …
External link:
https://doaj.org/article/339be0f47f0c486bb6b532aadce24204
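The RNS idea behind this record is worth a concrete illustration: a large value is represented by its residues modulo several pairwise-coprime, word-sized moduli, so multiplication splits into independent channels with no carries crossing between them, which is what makes hardware parallelism possible. A minimal sketch in Python, with illustrative moduli that are not taken from the paper:

```python
from math import prod

MODULI = (7, 11, 13)   # pairwise-coprime RNS basis (illustrative values only)
M = prod(MODULI)       # dynamic range: results must stay below 7 * 11 * 13 = 1001

def to_rns(x):
    """Represent x by its residue in each channel."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Channel-wise multiplication: each channel works independently."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Chinese Remainder Theorem reconstruction."""
    total = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        total += ri * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return total % M

assert from_rns(rns_mul(to_rns(23), to_rns(35))) == (23 * 35) % M
```

In a full-RNS variant of Fan-Vercauteren, every coefficient of the ciphertext polynomials is handled this way, so each residue channel can map to a separate, narrow multiplier in hardware.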
Published in:
Proceedings of the Great Lakes Symposium on VLSI 2023.
Published in:
Proceedings of the 9th ACM Conference on Information-Centric Networking.
Published in:
ACM Transactions on Reconfigurable Technology and Systems. 14:1-18
Software verification is an important stage of the software development process, particularly for mission-critical systems. As the traditional methodology of using unit tests falls short of verifying complex software, developers are increasingly relying …
Published in:
IEEE Transactions on Computers. 71:3072-3073
Published in:
IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 28:1665-1675
Energy efficiency is a critical design objective in deep learning hardware, particularly for real-time machine learning applications where the processing takes place on resource-constrained platforms. The inherent resilience of these applications to …
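The abstract is cut off here, but the resilience in question is typically a tolerance for reduced arithmetic precision, which is what lets hardware trade accuracy for energy; that reading is an assumption, not the paper's stated method. Purely as an illustration of such tolerance, a uniform-quantization sketch with toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)   # toy "weights"
x = rng.standard_normal(256).astype(np.float32)   # toy activations

def quantize(v, bits):
    """Uniform symmetric quantization to `bits` bits (illustrative, not the paper's scheme)."""
    scale = np.max(np.abs(v)) / (2 ** (bits - 1) - 1)
    return np.round(v / scale) * scale

full = w @ x
for bits in (8, 6, 4):
    approx = quantize(w, bits) @ x
    print(f"{bits}-bit weights: relative error {abs(approx - full) / abs(full):.4f}")
```

Narrower operands shrink multiplier area and switching energy roughly quadratically, which is why this kind of resilience matters for the energy budget.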
Published in:
MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM).
Published in:
ICC
With the plethora of wireless devices in limited spaces running multiple WiFi standards (802.11 a/b/g/n/ac), gaining an understanding of network latency, loss, and medium utilization becomes extremely challenging. Microsecond timing fidelity with pro…
Published in:
IEEE Micro. 39:17-25
In this article, we present Argus, an end-to-end framework for accelerating convolutional neural networks (CNNs) on field-programmable gate arrays (FPGAs) with minimum user effort. Argus uses state-of-the-art methods to auto-generate highly optimized …
Authors:
Tianchu Ji, Shraddhan Jain, H. Andrew Schwartz, Michael Ferdman, Peter Milder, Niranjan Balasubramanian
Published in:
ACL/IJCNLP (Findings)
How much information do NLP tasks really need from a transformer's attention mechanism at application-time (inference)? From recent work, we know that there is sparsity in transformers and that the floating-points within its computation can be discretized …
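One way to make that question concrete is to measure how much of each softmax row's probability mass the largest few attention weights carry. The probe below is a generic sparsity check on random logits, not necessarily the paper's exact metric:

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.standard_normal((8, 64))   # toy attention logits: 8 queries x 64 keys

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

attn = softmax(scores)

# Fraction of total attention mass captured by the k largest weights per query.
for k in (4, 8, 16):
    topk_mass = np.sort(attn, axis=-1)[:, -k:].sum(axis=-1).mean()
    print(f"top-{k} of 64 keys hold {topk_mass:.1%} of the attention mass on average")
```

If most of the mass sits in a handful of keys, the remaining attention values can be pruned or coarsely quantized at inference time with little effect on the output.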