Showing 1 - 5 of 5
for search: '"Kong, Zelun"'
Author:
Yin, Bangjie, Wang, Wenxuan, Yao, Taiping, Guo, Junfeng, Kong, Zelun, Ding, Shouhong, Li, Jilin, Liu, Cong
Deep neural networks, particularly face recognition models, have been shown to be vulnerable to both digital and physical adversarial examples. However, existing adversarial examples against face recognition systems either lack transferability to bla…
External link:
http://arxiv.org/abs/2105.03162
Due to the increasing complexity seen in both workloads and hardware resources in state-of-the-art embedded systems, developing efficient real-time schedulers and the corresponding schedulability tests becomes rather challenging. Although close to op…
External link:
http://arxiv.org/abs/2007.05136
Although deep neural networks (DNNs) are being pervasively used in vision-based autonomous driving systems, they are found vulnerable to adversarial attacks where small-magnitude perturbations into the inputs during test time cause dramatic changes t…
External link:
http://arxiv.org/abs/1907.04449
One of the key challenges of performing label prediction over a data stream concerns with the emergence of instances belonging to unobserved class labels over time. Previously, this problem has been addressed by detecting such instances and using the…
External link:
http://arxiv.org/abs/1811.05141
Academic article
This result cannot be displayed for users who are not logged in.
You must log in to view this result.