Showing 1 - 10 of 21 for search: '"Zhu Baozhou"'
Published in:
IEEE Access, Vol 8, Pp 169957-169965 (2020)
High-level feature maps of Convolutional Neural Networks are computed by reusing their corresponding low-level feature maps, which fully exploits feature reuse to improve computational efficiency. This form of feature reuse is referred to …
External link:
https://doaj.org/article/570e81f03be94f55ad83cb08fa19bce5
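The abstract above describes computing high-level feature maps by reusing their low-level counterparts. As a rough illustration only (the listing truncates the abstract, so this is not the paper's exact architecture), a DenseNet-style block concatenates earlier feature maps into the input of later layers; the PyTorch module below is a minimal sketch with assumed layer counts and channel sizes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureReuseBlock(nn.Module):
    """Toy block: each layer sees the concatenation of all earlier feature maps."""
    def __init__(self, in_channels: int, growth: int = 16, num_layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.convs.append(nn.Conv2d(channels, growth, kernel_size=3, padding=1))
            channels += growth  # later layers reuse all earlier maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            out = F.relu(conv(torch.cat(features, dim=1)))
            features.append(out)          # low-level maps are reused downstream
        return torch.cat(features, dim=1)

# usage: a batch of two 32x32 RGB inputs
y = FeatureReuseBlock(3)(torch.randn(2, 3, 32, 32))
print(y.shape)  # torch.Size([2, 51, 32, 32]) = 3 + 3*16 channels
```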
Data-free compression raises a new challenge because the original training dataset for the pre-trained model to be compressed is not available due to privacy or transmission issues. Thus, a common approach is to compute a reconstructed training dataset …
External link:
http://arxiv.org/abs/2105.12151
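Data-free compression typically synthesizes a substitute dataset from the pre-trained model itself. The sketch below shows one common flavor of that idea, not necessarily this paper's method: random inputs are optimized until a frozen placeholder teacher assigns them confident (arbitrary) labels, and the resulting tensors stand in for training data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# placeholder for the frozen pre-trained model to be compressed
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# optimize random noise so the teacher assigns it confident (arbitrary) labels
x = torch.randn(16, 3, 32, 32, requires_grad=True)
target = torch.randint(0, 10, (16,))
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(teacher(x), target)
    loss.backward()
    opt.step()

# `x` now serves as a reconstructed dataset for distillation or quantization
```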
Binary Convolutional Neural Networks (CNNs) can significantly reduce the number of arithmetic operations and the size of memory storage, which makes the deployment of CNNs on mobile or embedded systems more promising. However, the accuracy degradation …
External link:
http://arxiv.org/abs/2008.03520
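A typical way binary CNNs cut arithmetic and storage is to replace real-valued weights with their sign plus a per-filter scaling factor (the XNOR-Net recipe); the snippet below illustrates that standard step only, not this paper's specific scheme.

```python
import torch

def binarize_weights(w: torch.Tensor) -> torch.Tensor:
    """XNOR-Net-style binarization: sign of the weights plus a per-filter scale."""
    alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)   # one scale per output filter
    w_bin = torch.sign(w)
    w_bin[w_bin == 0] = 1                                # avoid zero entries
    return alpha * w_bin

w = torch.randn(64, 32, 3, 3)          # conv weights: out, in, kH, kW
w_q = binarize_weights(w)
print(torch.unique(torch.sign(w_q)))   # tensor([-1., 1.]) up to the per-filter scale
```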
Binary Convolutional Neural Networks (CNNs) have significantly reduced the number of arithmetic operations and the size of memory storage needed for CNNs, which makes their deployment on mobile and embedded systems more feasible. However, the CNN architectures …
External link:
http://arxiv.org/abs/2008.03515
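With weights and activations restricted to {-1, +1}, a dot product collapses to XNOR plus popcount over packed bits, which is where the arithmetic savings of binary CNNs come from. The plain-Python sketch below shows that bit-level arithmetic; the packing convention (bit 1 for +1, bit 0 for -1) is an assumption for illustration.

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two {-1,+1} vectors packed as n-bit integers (1 -> +1, 0 -> -1)."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # 1 wherever the signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n                       # +1 per match, -1 per mismatch

# vectors (+1, -1, +1, +1) and (+1, +1, -1, +1) packed as 0b1011 and 0b1101
print(binary_dot(0b1011, 0b1101, 4))  # 0
```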
Published in:
Chinese Journal of Electronics. 28:1158-1164
Range reduction is the initial and essential stage of function computation, but its pipelined implementation suffers from high cost and poor accuracy. We propose a low-cost and accurate pipelined range reduction that adopts truncated mu…
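Range reduction maps an arbitrary argument into a small primary interval before the function itself is approximated. For orientation, a generic Cody-Waite-style reduction by multiples of pi/2 looks like the sketch below (a textbook software scheme, not the pipelined hardware design the abstract describes).

```python
import math

# pi/2 split into shorter constants so ((x - q*C1) - q*C2) - q*C3 loses little precision
C1 = 1.5707963267341256          # leading bits of pi/2
C2 = 6.077100506506192e-11       # next bits
C3 = 2.0222662487959506e-21      # remaining bits

def reduce_range(x: float):
    """Return (r, q) with x ~= q*(pi/2) + r and r roughly in [-pi/4, pi/4]."""
    q = round(x / (math.pi / 2))
    r = ((x - q * C1) - q * C2) - q * C3
    return r, q % 4               # quadrant selects the sine/cosine reconstruction

r, q = reduce_range(100.0)
print(r, q)   # reduced argument and quadrant; sin(100) = +/-sin(r) or +/-cos(r) per q
```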
Published in:
IEEE Transactions on Circuits and Systems I: Regular Papers. 64:892-905
The CORDIC algorithm is well suited to implementing the sine/cosine function, but its large number of iterations leads to long delay and high overhead. Moreover, due to the finite bit-width of operands and the finite number of iterations, the relative error of floating-point sine or …
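For reference, a plain rotation-mode CORDIC loop for sine/cosine is sketched below in floating point; the iteration count is illustrative, and none of the paper's latency or error optimizations are reflected.

```python
import math

def cordic_sincos(theta: float, iterations: int = 32):
    """Rotation-mode CORDIC: returns (cos(theta), sin(theta)) for theta in [-pi/2, pi/2]."""
    # precomputed rotation angles atan(2^-i) and the aggregate gain K
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                     # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K

c, s = cordic_sincos(0.7)
print(c - math.cos(0.7), s - math.sin(0.7))   # both errors on the order of 1e-9 or smaller
```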
Published in:
Chinese Journal of Electronics. 26:292-298
Focusing on the issue that division is complex and requires a long latency to compute, a method for designing a high-performance floating-point (FP) divider unit based on the Goldschmidt algorithm was proposed. Bipartite reciprocal tables were adopted to obtain …
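Goldschmidt division multiplies the numerator and denominator by successive correction factors so the denominator converges to 1; the bipartite reciprocal tables mentioned in the abstract supply the initial approximation. The sketch below uses a simple linear seed in place of the tables and assumes the divisor is normalized to [0.5, 1).

```python
def goldschmidt_divide(a: float, b: float, iterations: int = 4) -> float:
    """Compute a / b via Goldschmidt iteration; assumes b is normalized to [0.5, 1)."""
    # initial reciprocal estimate (the paper uses bipartite tables for this step)
    f = 48.0 / 17.0 - (32.0 / 17.0) * b
    n, d = a * f, b * f
    for _ in range(iterations):
        f = 2.0 - d          # correction factor pushes d toward 1
        n, d = n * f, d * f
    return n

print(goldschmidt_divide(0.5, 0.75), 0.5 / 0.75)   # both ~0.6666666...
```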
Academic article
This result cannot be displayed for users who are not logged in. Log in to view this result.
Published in:
2019 IEEE 4th International Conference on Big Data Analytics (ICBDA).
Convolutional Neural Networks (CNNs) are a class of widely used deep artificial neural networks. However, training large CNNs to produce state-of-the-art results can take a long time. In addition, we need to reduce the compute time of the inference stage …
Published in:
Communications in Computer and Information Science ISBN: 9789811031588
NCCET
To meet the precision requirements of different applications and reduce operation latency at low precision, a unified structure for IEEE-754 double-precision/SIMD single-precision floating-point division and square-root operation based on the SRT-8 al…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::77c7ca51439ea12f9ce470a9922ef2cc
https://doi.org/10.1007/978-981-10-3159-5_1
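SRT division retires quotient digits from a redundant set each cycle, selecting them from only a few leading bits of the partial remainder. The radix-2 software sketch below shows the digit recurrence; the abstract's design is radix-8 (three quotient bits per iteration) and also covers square root, which is not modeled here.

```python
def srt_divide(x: float, d: float, bits: int = 30) -> float:
    """Radix-2 SRT division sketch: x and d assumed normalized to [0.5, 1)."""
    r, q = x / 2.0, 0.0          # partial remainder and accumulated quotient
    for j in range(1, bits + 1):
        r *= 2.0
        if r >= 0.5:             # redundant digits {-1, 0, 1}: selection needs only
            digit = 1            # a few leading bits of r, which is what makes SRT fast
        elif r < -0.5:
            digit = -1
        else:
            digit = 0
        r -= digit * d
        q += digit * 2.0 ** -j
    return 2.0 * q               # undo the initial scaling of x

print(srt_divide(0.7, 0.9), 0.7 / 0.9)   # both ~0.7777...
```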