Showing 1 - 7 of 7
for search: '"Julian Faraone"'
Author:
Nicholas J. Fraser, Philip H. W. Leong, Stephen Tridgell, Craig Jin, Duncan J. M. Moss, JunKyu Lee, Julian Faraone
Published in:
ACM Transactions on Reconfigurable Technology and Systems. 10:1-20
Kernel adaptive filters (KAFs) are online machine learning algorithms which are amenable to highly efficient streaming implementations. They require only a single pass through the data and can act as universal approximators, i.e., approximate any continuous function …
Published in:
FPT
Low-precision training for Deep Neural Networks (DNNs) has recently become a viable alternative to standard full-precision algorithms. Crucially, low-precision computation reduces both memory usage and computational cost, providing more scalability …
Author:
David Boland, Martin Hardieck, Xueyuan Liu, Martin Kumm, Philip H. W. Leong, Peter Zipf, Julian Faraone
Low-precision arithmetic operations to accelerate deep-learning applications on field-programmable gate arrays (FPGAs) have been studied extensively, because they offer the potential to save silicon area or increase throughput. However, these benefits …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4002c70a5ad9d8fb373a0d6e8c58dd11
Author:
Giulio Gambardella, Michaela Blott, Julian Faraone, David Boland, Philip H. W. Leong, Nicholas J. Fraser
Published in:
FPL
In this paper, we argue that instead of solely focusing on developing efficient architectures to accelerate well-known low-precision CNNs, we should also seek to modify the network to suit the FPGA. We develop a fully automated toolflow which focuses …
Published in:
CVPR
Inference for state-of-the-art deep neural networks is computationally expensive, making them difficult to deploy on constrained hardware environments. An efficient way to reduce this complexity is to quantize the weight parameters and/or activations …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d1ca9a60faffa2a653bddb993b23750e
http://arxiv.org/abs/1807.00301
Author:
Peter Y. K. Cheung, Julian Faraone, David B. Thomas, Yiren Zhao, Philip H. W. Leong, Junyi Liu, Jiang Su
Published in:
Applied Reconfigurable Computing. Architectures, Tools, and Applications ISBN: 9783319788890
ARC
Modern Convolutional Neural Networks (CNNs) excel in image classification and recognition applications on large-scale datasets such as ImageNet, compared to many conventional feature-based computer vision algorithms. However, the high computational cost …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::0dd4487e88f570633ddd51cde1d2bfe6
https://doi.org/10.1007/978-3-319-78890-6_2
Published in:
Neural Information Processing ISBN: 9783319700953
ICONIP (2)
A low-precision deep neural network training technique for producing sparse, ternary neural networks is presented. The technique incorporates hardware implementation costs during training to achieve significant model compression for inference. Training …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::0c990395b4d485cfcd411b400d3593a8
https://doi.org/10.1007/978-3-319-70096-0_41