Showing 1 - 10 of 134 for search: '"Fardad, Makan"'
Author:
Kearney, Griffin M., Fardad, Makan
We develop a general framework for state estimation in systems modeled with noise-polluted continuous-time dynamics and noisy discrete-time measurements. Our approach is based on maximum likelihood estimation and employs the calculus of variations to …
External link:
http://arxiv.org/abs/2311.02200
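The setting described above, continuous-time dynamics observed through noisy discrete-time samples, can be illustrated with a standard continuous-discrete Kalman filter. This is only a sketch of the problem setting under an assumed linear model, not the paper's variational maximum likelihood method; all matrices and noise parameters below are hypothetical.

```python
import numpy as np

# Hypothetical linear system: dx/dt = A x + w(t), measurements y_k = C x_k + v_k.
# Standard continuous-discrete Kalman filter; NOT the paper's variational method.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])   # assumed dynamics matrix
C = np.array([[1.0, 0.0]])                 # assumed measurement matrix
Q = 0.01 * np.eye(2)                       # assumed process-noise intensity
R = np.array([[0.1]])                      # assumed measurement-noise covariance
dt, steps_between_meas = 0.01, 10

x_hat, P = np.zeros(2), np.eye(2)

def propagate(x_hat, P):
    """Euler integration of the mean and covariance ODEs between samples."""
    x_hat = x_hat + dt * (A @ x_hat)
    P = P + dt * (A @ P + P @ A.T + Q)
    return x_hat, P

def update(x_hat, P, y):
    """Discrete-time measurement update at a sample instant."""
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y - C @ x_hat)
    P = (np.eye(2) - K @ C) @ P
    return x_hat, P

rng = np.random.default_rng(0)
for step in range(100):
    x_hat, P = propagate(x_hat, P)
    if (step + 1) % steps_between_meas == 0:
        y = C @ x_hat + rng.normal(scale=0.3, size=1)   # stand-in measurement
        x_hat, P = update(x_hat, P, y)
```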
This work considers a Bayesian signal processing problem in which increasing the power of the probing signal may cause risks or undesired consequences. We employ a market-based approach to solve energy management problems for signal detection while balancing …
External link:
http://arxiv.org/abs/2301.07789
Author:
Kearney, Griffin M. (griffin.kearney@opbdatainsights.com), Fardad, Makan
Published in:
PLoS ONE. 9/20/2024, Vol. 19 Issue 9, p1-27. 27p.
Author:
Qin, Minghai, Zhang, Tianyun, Sun, Fei, Chen, Yen-Kuang, Fardad, Makan, Wang, Yanzhi, Xie, Yuan
Deep neural networks (DNNs) have been shown to provide superb performance in many real-life applications, but their large computation cost and storage requirements have prevented them from being deployed to many edge and internet-of-things (IoT) devices. …
External link:
http://arxiv.org/abs/2112.10930
Author:
Zhang, Tianyun, Ma, Xiaolong, Zhan, Zheng, Zhou, Shanglin, Qin, Minghai, Sun, Fei, Chen, Yen-Kuang, Ding, Caiwen, Fardad, Makan, Wang, Yanzhi
To address the large model size and intensive computation requirements of deep neural networks (DNNs), weight pruning techniques have been proposed and generally fall into two categories, i.e., static regularization-based pruning and dynamic regularization-based pruning …
External link:
http://arxiv.org/abs/2004.05531
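As a rough illustration of the static regularization-based category mentioned above (not this paper's own method), the sketch below adds a fixed L1 penalty to the training loss and then zeroes small-magnitude weights once at the end; the model and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of *static* regularization-based pruning (assumed setup):
# a fixed L1 penalty is added to the task loss during training, then weights
# below a magnitude threshold are zeroed once after training.
model = nn.Linear(784, 10)                 # hypothetical model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
l1_weight, keep_ratio = 1e-4, 0.1          # assumed hyperparameters

def train_step(x, y):
    loss = nn.functional.cross_entropy(model(x), y)
    loss = loss + l1_weight * sum(p.abs().sum() for p in model.parameters())
    opt.zero_grad()
    loss.backward()
    opt.step()

def magnitude_prune(tensor, keep_ratio):
    """Zero all but the largest-magnitude `keep_ratio` fraction of entries."""
    n = tensor.numel()
    k = max(1, int(keep_ratio * n))
    threshold = tensor.abs().flatten().kthvalue(n - k + 1).values
    return tensor * (tensor.abs() >= threshold)

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
train_step(x, y)
with torch.no_grad():
    model.weight.copy_(magnitude_prune(model.weight, keep_ratio))
```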
The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness. Nevertheless, min-max optimization beyond the purpose …
External link:
http://arxiv.org/abs/1906.03563
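For reference, the min-max objective the snippet alludes to is min over model parameters of the expected worst-case loss over norm-bounded perturbations. The sketch below implements plain PGD-based adversarial training under assumed hyperparameters; it illustrates AT itself, not this paper's extensions beyond it.

```python
import torch
import torch.nn as nn

# Min-max adversarial training (AT):
#   min_theta  E_(x,y) [ max_{||delta||_inf <= eps} loss(f_theta(x + delta), y) ]
# Inner maximization via standard PGD; eps, alpha, and steps are assumptions.
def pgd_inner_max(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Approximate the inner max with projected gradient ascent on the loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()        # ascent step
            delta.clamp_(-eps, eps)                   # project onto L_inf ball
        delta.grad.zero_()
    return delta.detach()

def at_outer_step(model, opt, x, y):
    """Outer minimization: descend on the worst-case (adversarial) loss."""
    delta = pgd_inner_max(model, x, y)
    loss = nn.functional.cross_entropy(model(x + delta), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```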
Author:
Ye, Shaokai, Feng, Xiaoyu, Zhang, Tianyun, Ma, Xiaolong, Lin, Sheng, Li, Zhengang, Xu, Kaidi, Wen, Wujie, Liu, Sijia, Tang, Jian, Fardad, Makan, Lin, Xue, Liu, Yongpan, Wang, Yanzhi
Weight pruning and weight quantization are two important categories of DNN model compression. Prior work on these techniques is mainly based on heuristics. A recent work developed a systematic framework of DNN weight pruning using the advanced optimization …
External link:
http://arxiv.org/abs/1903.09769
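The snippet mentions weight quantization alongside pruning. In optimization-based frameworks of this kind, a quantization constraint is typically handled by a Euclidean projection onto a discrete set of levels; the sketch below rounds each weight to its nearest allowed level. The level set is an assumption for illustration, not taken from the paper.

```python
import torch

# Euclidean projection onto a discrete codebook: each weight is mapped to the
# nearest allowed quantization level. The codebook here is hypothetical.
def project_to_levels(weights, levels):
    """Map each weight to the nearest value in `levels` (an assumed codebook)."""
    levels = torch.as_tensor(levels)
    dists = (weights.unsqueeze(-1) - levels).abs()   # distance to every level
    nearest = dists.argmin(dim=-1)                   # index of closest level
    return levels[nearest]

w = torch.randn(4, 4)
wq = project_to_levels(w, [-0.5, -0.25, 0.0, 0.25, 0.5])
```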
Author:
Ye, Shaokai, Zhang, Tianyun, Zhang, Kaiqi, Li, Jiayu, Xu, Kaidi, Yang, Yunfei, Yu, Fuxun, Tang, Jian, Fardad, Makan, Liu, Sijia, Chen, Xiang, Lin, Xue, Wang, Yanzhi
Deep neural networks (DNNs), although achieving human-level performance in many domains, have very large model sizes that hinder their broader application on edge computing devices. Extensive research work has been conducted on DNN model compression …
External link:
http://arxiv.org/abs/1810.07378
Author:
Zhang, Tianyun, Ye, Shaokai, Zhang, Kaiqi, Ma, Xiaolong, Liu, Ning, Zhang, Linfeng, Tang, Jian, Ma, Kaisheng, Lin, Xue, Fardad, Makan, Wang, Yanzhi
Weight pruning methods for DNNs have been demonstrated to achieve good model pruning rates without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been …
External link:
http://arxiv.org/abs/1807.11091
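As an illustration of the structured pruning the snippet contrasts with element-wise pruning: instead of zeroing individual weights, whole rows (e.g., output channels or filters) are removed by group magnitude. This is a generic sketch, not the paper's algorithm; shapes and the keep count are assumptions.

```python
import torch

# Structured (row/filter-level) magnitude pruning: rank rows by L2 norm and
# zero out entire low-norm rows, preserving a regular sparsity pattern.
def prune_rows(weight, keep_rows):
    """Keep the `keep_rows` rows with largest L2 norm; zero the rest."""
    norms = weight.norm(dim=1)                     # one norm per row/filter
    idx = norms.topk(keep_rows).indices
    mask = torch.zeros(weight.shape[0], dtype=torch.bool)
    mask[idx] = True
    return weight * mask.unsqueeze(1)

w = torch.randn(8, 16)
w_pruned = prune_rows(w, 3)                        # only 3 rows survive
```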
Author:
Zhang, Tianyun, Ye, Shaokai, Zhang, Kaiqi, Tang, Jian, Wen, Wujie, Fardad, Makan, Wang, Yanzhi
Published in:
ECCV 2018, pp 191-207
Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area has been mainly heuristic, iterative pruning, and therefore lacks guarantees on the weight reduction ratio and convergence time. To mitigate the …
External link:
http://arxiv.org/abs/1804.03294
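The snippet describes a systematic (non-heuristic) pruning framework; ADMM-style alternating updates are a common way to formalize this. The sketch below shows one generic ADMM round for a cardinality-constrained pruning problem, splitting W = Z and alternating a gradient step, a sparsity projection, and a dual update. The learning rate, rho, sparsity level, and the stand-in gradient are assumptions, not the paper's exact algorithm.

```python
import torch

# One generic ADMM round for cardinality-constrained pruning:
#   minimize task_loss(W)  subject to  ||W||_0 <= k,  via split  W = Z.
def project_topk(tensor, k):
    """Euclidean projection onto {Z : ||Z||_0 <= k}: keep top-k magnitudes."""
    flat = tensor.flatten()
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(tensor)

def admm_round(W, Z, U, task_grad, lr=0.01, rho=1e-3, k=100):
    # W-update: gradient step on task loss + (rho/2)||W - Z + U||^2
    W = W - lr * (task_grad(W) + rho * (W - Z + U))
    # Z-update: projection of W + U onto the sparsity constraint set
    Z = project_topk(W + U, k)
    # Dual update
    U = U + W - Z
    return W, Z, U

W = torch.randn(20, 20)
Z, U = project_topk(W, 100), torch.zeros_like(W)
toy_grad = lambda W: 2 * W           # stand-in gradient of a toy loss ||W||^2
for _ in range(50):
    W, Z, U = admm_round(W, Z, U, toy_grad)
```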