Showing 1 - 10
of 142,854
for search: '"DNN"'
Author:
Liu, Chunqing (chunqingliu98@outlook.com), Zhang, Fengliang (zhangfengliang@hit.edu.cn), Ni, Yanchun (yanchunni@tongji.edu.cn), Ai, Botao (22s054013@stu.hit.edu.cn), Zhu, Siyan (22s054016@stu.hit.edu.cn), Zhao, Zezhou, Fu, Shengjie
Published in:
Sensors (14248220), Sep 2024, Vol. 24, Issue 17, p. 5557, 21 pp.
Author:
Dejband, Erfan (t109319413@ntut.edu.tw), Tan, Tan-Hsu (thtan@ntut.edu.tw), Yao, Cheng-Kai (t109658093@ntut.org.tw), Chang, En-Ming (t110658052@ntut.org.tw), Peng, Peng-Chun (pcpeng@ntut.edu.tw)
Published in:
Sensors (14248220), Aug 2024, Vol. 24, Issue 15, p. 4903, 13 pp.
Published in:
ACM Transactions on Embedded Computing Systems, Volume 23, Issue 4, Article 60 (July 2024), 32 pages
The relentless expansion of deep learning applications in recent years has prompted a pivotal shift toward on-device execution, driven by the urgent need for real-time processing, heightened privacy concerns, and reduced latency across diverse domain…
External link:
http://arxiv.org/abs/2409.01089
Classification tasks present challenges due to class imbalances and evolving data distributions. Addressing these issues requires a robust method to handle imbalances while effectively detecting out-of-distribution (OOD) samples not encountered during…
External link:
http://arxiv.org/abs/2409.00980
Deep neural network (DNN) models have demonstrated impressive performance in various domains, yet their application in cognitive neuroscience is limited due to their lack of interpretability. In this study we employ two structurally different and com…
External link:
http://arxiv.org/abs/2409.00003
This study proves the two-phase dynamics of a deep neural network (DNN) learning interactions. Despite long-standing doubts about the faithfulness of post-hoc explanations of a DNN, in recent years a series of theorems have been proven to show th…
External link:
http://arxiv.org/abs/2407.19198
Author:
Zhang, Yongkang, Yu, Haoxuan, Han, Chenxia, Wang, Cheng, Lu, Baotong, Li, Yang, Chu, Xiaowen, Li, Huaicheng
Colocating high-priority, latency-sensitive (LS) and low-priority, best-effort (BE) DNN inference services reduces the total cost of ownership (TCO) of GPU clusters. Limited by bottlenecks such as VRAM channel conflicts and PCIe bus contention, exis…
External link:
http://arxiv.org/abs/2407.13996
Accelerated edge devices, like Nvidia's Jetson with 1000+ CUDA cores, are increasingly used for DNN training and federated learning, rather than just for inference workloads. A unique feature of these compact devices is their fine-grained control o…
External link:
http://arxiv.org/abs/2407.13944
Author:
Elango, Venmugil
Published in:
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS), Portland, OR, USA, 2021, pp. 1025-1034
Training a deep neural network (DNN) requires substantial computational and memory resources. It is common to use multiple devices to train a DNN to reduce the overall training time. There are several choices for parallelizing each layer in a DNN. Ex…
External link:
http://arxiv.org/abs/2407.04001