Showing 1 - 10 of 51 for search: '"Ye Shaokai"'
Published in:
Published in Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS) 2023
The process of quantifying and analyzing animal behavior involves translating the naturally occurring descriptive language of their actions into machine-readable code. Yet, codifying behavior analysis is often challenging without deep understanding of…
External link:
http://arxiv.org/abs/2307.04858
Author:
Mao, Xiaofeng, Chen, Yuefeng, Duan, Ranjie, Zhu, Yao, Qi, Gege, Ye, Shaokai, Li, Xiaodan, Zhang, Rong, Xue, Hui
Adversarial Training (AT), commonly accepted as one of the most effective defenses against adversarial examples, can largely harm standard performance and thus has limited usefulness in industrial-scale production and applications…
External link:
http://arxiv.org/abs/2209.07735
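The abstract above refers to standard Adversarial Training. For context, below is a minimal PyTorch sketch of the usual PGD-based AT step; this is the generic recipe, not the paper's proposed method, and `model`, `optimizer`, and the 8/255 perturbation budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples with projected gradient descent (PGD)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep pixels valid
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One standard AT step: train on adversarial examples only."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training exclusively on perturbed inputs is precisely what tends to erode clean accuracy, which is the trade-off the abstract highlights.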
Author:
Ye, Shaokai, Filippova, Anastasiia, Lauer, Jessy, Schneider, Steffen, Vidal, Maxime, Qiu, Tian, Mathis, Alexander, Mathis, Mackenzie Weygandt
Quantification of behavior is critical in applications ranging from neuroscience to veterinary medicine and animal conservation efforts. A common key step for behavioral analysis is first extracting relevant keypoints on animals, known as pose estimation…
External link:
http://arxiv.org/abs/2203.07436
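Since this abstract centers on keypoint extraction, here is a minimal NumPy sketch of the heatmap-argmax decoding step shared by many pose estimators; it is generic, not the paper's pipeline, and the `(K, H, W)` heatmap layout is an assumption.

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps):
    """Decode a stack of K score maps of shape (K, H, W) into K keypoints,
    each as (x, y, confidence), by taking the per-map argmax."""
    K, H, W = heatmaps.shape
    keypoints = np.zeros((K, 3))
    for k, hm in enumerate(heatmaps):
        y, x = divmod(int(hm.argmax()), W)   # flat index -> (row, col)
        keypoints[k] = (x, y, hm[y, x])
    return keypoints
```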
Author:
Mao, Xiaofeng, Qi, Gege, Chen, Yuefeng, Li, Xiaodan, Duan, Ranjie, Ye, Shaokai, He, Yuan, Xue, Hui
Recent advances in Vision Transformers (ViT) and their improved variants have shown that self-attention-based networks surpass traditional Convolutional Neural Networks (CNNs) in most vision tasks. However, existing ViTs focus on the standard accuracy and…
External link:
http://arxiv.org/abs/2105.07926
Though it is well known that the performance of deep neural networks (DNNs) degrades under certain lighting conditions, there exists no study of the threat posed by light beams emitted from a physical source acting as an adversarial attack on DNNs in a real-world…
External link:
http://arxiv.org/abs/2103.06504
Author:
Li, Xiaodan, Li, Jinfeng, Chen, Yuefeng, Ye, Shaokai, He, Yuan, Wang, Shuhui, Su, Hang, Xue, Hui
We study the query-based attack against image retrieval to evaluate its robustness against adversarial examples under the black-box setting, where the adversary only has query access to the top-k ranked unlabeled images from the database. Compared with…
External link:
http://arxiv.org/abs/2103.02927
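This abstract describes an adversary whose only feedback is the top-k list returned for a query image. A hedged random-search sketch of such a loop follows; `retrieve` is a hypothetical callable returning image ids, and the step sizes and query budget are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def random_search_attack(retrieve, x, k=10, eps=8/255, alpha=1/255, iters=500):
    """Query-only attack on a retrieval system: keep a random perturbation
    step whenever it pushes more of the clean top-k out of the returned list."""
    rng = np.random.default_rng(0)
    clean_topk = set(retrieve(x, k))            # the only feedback available
    delta = np.zeros_like(x)
    best = 1.0                                  # fraction of clean top-k remaining
    for _ in range(iters):
        step = alpha * rng.choice([-1.0, 1.0], size=x.shape)
        cand = np.clip(delta + step, -eps, eps)
        overlap = len(set(retrieve(np.clip(x + cand, 0, 1), k)) & clean_topk) / k
        if overlap < best:                      # accept only improving steps
            best, delta = overlap, cand
        if best == 0.0:                         # every clean neighbour evicted
            break
    return np.clip(x + delta, 0, 1), best
```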
Author:
Tan, Zhanhong, Song, Jiebo, Ma, Xiaolong, Tan, Sia-Huat, Chen, Hongyang, Miao, Yuanqing, Wu, Yifu, Ye, Shaokai, Wang, Yanzhi, Li, Dehui, Ma, Kaisheng
Weight pruning is a powerful technique for model compression. We propose PCNN, a fine-grained regular 1D pruning method. A novel index format called Sparsity Pattern Mask (SPM) is presented to encode the sparsity in PCNN. Leveraging SPM with l…
External link:
http://arxiv.org/abs/2002.04997
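To make the Sparsity Pattern Mask idea concrete, here is an illustrative NumPy encoder/decoder that stores one bitmask per fixed-size weight block plus the packed nonzero values; the block size and layout here are assumptions for illustration, and the paper's actual SPM format may differ.

```python
import numpy as np

def encode_spm(weights, group=8):
    """Encode a weight vector (length divisible by `group`) as one bitmask
    per length-`group` block plus the packed nonzero values."""
    w = weights.reshape(-1, group)
    masks = (w != 0)
    patterns = (masks * (1 << np.arange(group))).sum(axis=1)  # bitmask per block
    return patterns.astype(np.uint8), w[masks]                # packed nonzeros

def decode_spm(patterns, values, group=8):
    """Inverse of encode_spm: scatter packed values back via the bitmasks."""
    masks = ((patterns[:, None] >> np.arange(group)) & 1).astype(bool)
    out = np.zeros(masks.shape, dtype=values.dtype)
    out[masks] = values
    return out.ravel()

# Round trip on a toy sparse vector.
patterns, values = encode_spm(np.array([0., 1.5, 0., 0., 2., 0., 0., 3.]))
assert np.allclose(decode_spm(patterns, values), [0, 1.5, 0, 0, 2, 0, 0, 3])
```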
Author:
Ye, Shaokai, Wu, Kailu, Zhou, Mu, Yang, Yunfei, Tan, Sia Huat, Xu, Kaidi, Song, Jiebo, Bao, Chenglong, Ma, Kaisheng
Existing domain adaptation methods aim to learn features that generalize across domains. These methods commonly require updating the source classifier to adapt to the target domain and do not properly handle the trade-off between the source domain…
External link:
http://arxiv.org/abs/1911.12796
Author:
Ma, Xiaolong, Lin, Sheng, Ye, Shaokai, He, Zhezhi, Zhang, Linfeng, Yuan, Geng, Tan, Sia Huat, Li, Zhengang, Fan, Deliang, Qian, Xuehai, Lin, Xue, Ma, Kaisheng, Wang, Yanzhi
Large deep neural network (DNN) models pose a key challenge to energy efficiency because off-chip DRAM accesses consume significantly more energy than arithmetic or SRAM operations. This motivates intensive research on model compression…
External link:
http://arxiv.org/abs/1907.02124
A human does not have to see all elephants to recognize an animal as an elephant. In contrast, current state-of-the-art deep learning approaches depend heavily on the variety of training samples and the capacity of the network. In practice, the size…
External link:
http://arxiv.org/abs/1905.12171