Showing 1 - 10 of 37
for search: '"Wen, Shixian"'
Author:
Ge, Yunhao, Li, Yuecheng, Wu, Di, Xu, Ao, Jones, Adam M., Rios, Amanda Sofie, Fostiropoulos, Iordanis, Wen, Shixian, Huang, Po-Hsuan, Murdock, Zachary William, Sahin, Gozde, Ni, Shuo, Lekkala, Kiran, Sontakke, Sumedh Anand, Itti, Laurent
In Lifelong Learning (LL), agents continually learn as they encounter new conditions and tasks. Most current LL is limited to a single agent that learns tasks sequentially. Dedicated LL machinery is then deployed to mitigate the forgetting of old tasks…
External link:
http://arxiv.org/abs/2305.15591
Author:
Xu, Yixin, Zhao, Zijian, Xiao, Yi, Yu, Tongguang, Mulaosmanovic, Halid, Kleimaier, Dominik, Duenkel, Stefan, Beyer, Sven, Gong, Xiao, Joshi, Rajiv, Hu, X. Sharon, Wen, Shixian, Rios, Amanda Sofie, Lekkala, Kiran, Itti, Laurent, Homan, Eric, George, Sumitha, Narayanan, Vijaykrishnan, Ni, Kai
Field Programmable Gate Array (FPGA) is widely used in acceleration of deep learning applications because of its reconfigurability, flexibility, and fast time-to-market. However, conventional FPGA suffers from the tradeoff between chip area and reconfigurability…
External link:
http://arxiv.org/abs/2212.00089
Understanding the patterns of misclassified ImageNet images is particularly important, as it could guide us to design deep neural networks (DNN) that generalize better. However, the richness of ImageNet imposes difficulties for researchers to visually…
External link:
http://arxiv.org/abs/2201.08098
Deep neural networks can be fooled by adversarial attacks: adding carefully computed small adversarial perturbations to clean inputs can cause misclassification on state-of-the-art machine learning models. The reason is that neural networks fail to…
External link:
http://arxiv.org/abs/2009.12724
Published in:
IEEE Transactions on Neural Networks and Learning Systems 2021
The human brain is the gold standard of adaptive learning. It not only can learn and benefit from experience, but also can adapt to new situations. In contrast, deep neural networks only learn one sophisticated but fixed mapping from inputs to outputs…
External link:
http://arxiv.org/abs/2009.13954
Author:
Wen, Shixian, Itti, Laurent
Adversarial training, in which a network is trained on both adversarial and clean examples, is one of the most trusted defense methods against adversarial attacks. However, there are three major practical difficulties in implementing and deploying…
External link:
http://arxiv.org/abs/1910.04279
Author:
Wen, Shixian, Itti, Laurent
Sequential learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby previously learned knowledge is erased during learning of new, disjoint knowledge. Here, we propose a fundamentally new…
External link:
http://arxiv.org/abs/1906.10528
Author:
Wen, Shixian, Itti, Laurent
Sequential learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby previously learned knowledge is erased during learning of new, disjoint knowledge. Here, we propose a new approach to…
External link:
http://arxiv.org/abs/1805.07441
Academic article
This result cannot be displayed to users who are not logged in.
You must log in to view this result.
Capturing spike train temporal pattern with wavelet average coefficient for brain machine interface.
Author:
Wen, Shixian (shixianw@usc.edu), Yin, Allen, Tseng, Po-He, Itti, Laurent, Lebedev, Mikhail A., Nicolelis, Miguel
Published in:
Scientific Reports, Vol. 11, Issue 1 (Sep 24, 2021), pp. 1-10.