Showing 1 - 10 of 96 for the search: '"Yingyong Qi"'
Published in:
IEEE Access, Vol 11, Pp 78042-78051 (2023)
In this paper, we propose a feature affinity (FA) assisted knowledge distillation (KD) method to improve quantization-aware training of deep neural networks (DNNs). The FA loss on intermediate feature maps of DNNs plays the role of teaching middle ste…
External link:
https://doaj.org/article/34d3af18b681405394ae5a4895065e5b
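The snippet above cuts off before defining the FA loss, so the paper's exact formulation is not shown here. As a generic sketch of one common feature-affinity style of distillation loss — matching batch-wise Gram (affinity) matrices of student and teacher feature maps, with the L2 normalization an assumption of this sketch, not the paper's choice:

```python
import numpy as np

def feature_affinity_loss(f_student, f_teacher, eps=1e-8):
    # Flatten each sample's feature map: (batch, channels*h*w).
    s = np.asarray(f_student, dtype=float).reshape(len(f_student), -1)
    t = np.asarray(f_teacher, dtype=float).reshape(len(f_teacher), -1)
    # L2-normalize per sample so the affinity depends on direction, not scale.
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + eps)
    t = t / (np.linalg.norm(t, axis=1, keepdims=True) + eps)
    # Batch-wise affinity (Gram) matrices: pairwise cosine similarities.
    a_s, a_t = s @ s.T, t @ t.T
    # Penalize the student when its sample-to-sample affinities differ
    # from the teacher's.
    return float(np.mean((a_s - a_t) ** 2))
```

In a KD setup this term would be added, at one or more intermediate layers, to the usual task and distillation losses; the layer choice and weighting are not recoverable from the snippet.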
Published in:
IEEE Access, Vol 10, Pp 65901-65912 (2022)
Differentiable architecture search (DARTS) is an effective method for data-driven neural network design based on solving a bilevel optimization problem. Despite its success in many architecture search tasks, there are still some concerns about the ac…
External link:
https://doaj.org/article/a174f3cb09e24b0387ae67e597cb8735
Published in:
IEEE Access, Vol 9, Pp 115292-115314 (2021)
Convolutional neural networks (CNNs) have developed to become powerful models for various computer vision tasks ranging from object detection to semantic segmentation. However, most of the state-of-the-art CNNs cannot be deployed directly on edge dev…
External link:
https://doaj.org/article/c306665a2baf4aba8a3faa3e6cb4a6b6
Published in:
Frontiers in Applied Mathematics and Statistics, Vol 6 (2021)
Convolutional neural networks (CNNs) have been hugely successful recently, with superior accuracy and performance in various imaging applications such as classification, object detection, and segmentation. However, a highly accurate CNN model requires…
External link:
https://doaj.org/article/500ff8f495674c5cbcb399b32049d52a
Published in:
IEEE Transactions on Multimedia. 22:1874-1888
Network quantization offers an effective solution to deep neural network compression for practical usage. Existing network quantization methods cannot theoretically guarantee convergence. This paper proposes a novel iterative framework for networ…
Published in:
Journal of Computational Mathematics. 37:349-359
We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs). Specifically, we quantize the weights to zero or powers of two by minimizing the Euclidean distance be…
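The snippet cuts off mid-sentence, but the stated idea — mapping each weight to zero or a signed power of two by minimizing Euclidean distance — can be sketched directly. The exponent range `k_min`/`k_max` below is an illustrative assumption, not LBW-Net's actual bit-width setting:

```python
import numpy as np

def quantize_pow2(weights, k_min=-4, k_max=0):
    # Candidate levels: zero plus signed powers of two 2^k for k in [k_min, k_max].
    levels = np.array(
        [0.0] + [s * 2.0 ** k for s in (1.0, -1.0) for k in range(k_min, k_max + 1)]
    )
    w = np.asarray(weights, dtype=float)
    # For each weight, pick the level with the smallest squared distance;
    # minimizing per-element squared error minimizes the overall Euclidean
    # distance between the float and quantized weight tensors.
    idx = np.argmin((w[..., None] - levels) ** 2, axis=-1)
    return levels[idx]
```

Power-of-two levels are attractive on hardware because multiplication by 2^k reduces to a bit shift; the full method presumably alternates such quantization steps with training, which this sketch does not cover.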
Published in:
Advances in Visual Computing ISBN: 9783030904357
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::2fdde228078a6f4a37437ba4b8e7a11d
https://doi.org/10.1007/978-3-030-90436-4_26
Author:
Weiyao Lin, Yuhui Xu, Hongkai Xiong, Yuxi Li, Yi Chen, Yingyong Qi, Botao Wang, Wei Wen, Shuai Zhang
Published in:
IJCAI
Scopus-Elsevier
To enable DNNs on edge devices like mobile phones, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations. Several previous works attempted to directly approximate a pretrained model by…
Published in:
Advances in Visual Computing ISBN: 9783030645557
ISVC (1)
In the last decade, convolutional neural networks (CNNs) have evolved to become the dominant models for various computer vision tasks, but they cannot be deployed in low-memory devices due to their high memory requirements and computational cost. One po…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::b375f8c5737486236b6148e31e753241
https://doi.org/10.1007/978-3-030-64556-4_4