Showing 1 - 10 of 59 for search: '"Jia, Yunhan"'
Author:
Huangfu, Zizheng, Ju, Wei, Jia, Yunhan, Ren, Ruijun, Wang, Zhenbei, Li, Chen, Shang, Xiaomeng, Li, Yujie, Liu, Hongnan, Wang, Yu, Zheng, Hao, Qi, Fei, Ikhlaq, Amir, Kumirska, Jolanta, Siedlecka, Ewa Maria
Published in:
In Journal of Environmental Sciences November 2024 145:216-231
Author:
Jia, Yunhan
Published in:
In Journal of Pragmatics September 2024 230:154-165
Author:
Ren, Ruijun, Jia, Yunhan, Li, Chen, Liu, Yatao, Wang, Zhenbei, Li, Fan, Qi, Fei, Ikhlaq, Amir, Kumirska, Jolanta, Siedlecka, Ewa Maria, Ismailova, Oksana
Published in:
In Journal of Membrane Science December 2024 712
Author:
Li, Yujie, Li, Chen, Wang, Zhenbei, Liu, Yatao, Jia, Yunhan, Li, Fan, Ren, Ruijun, Ikhlaq, Amir, Kumirska, Jolanta, Siedlecka, Ewa Maria, Ismailova, Oksana, Qi, Fei
Published in:
In Journal of Water Process Engineering June 2024 63
Automated Lane Centering (ALC) systems are convenient and widely deployed today, but also highly security and safety critical. In this work, we are the first to systematically study the security of state-of-the-art deep learning based ALC systems in …
External link:
http://arxiv.org/abs/2009.06701
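The snippet above describes physical-world adversarial attacks on lane centering. As a rough illustration of the general idea only (not the paper's method), the following Python sketch perturbs a designated "patch" region of the input to a toy linear lane-position regressor so as to maximize the steering output; the model, image, step size, and patch indices are all hypothetical stand-ins.

# Minimal sketch of a patch-style attack on a toy lane-position regressor.
# NOT the paper's method; model, loss, and patch region are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=100)              # toy linear "ALC" model: image -> lateral offset
image = rng.uniform(0, 1, size=100)   # flattened 10x10 "road image"
patch_idx = np.arange(40, 60)         # pixels the attacker is allowed to repaint

def predict(x):
    return W @ x                      # predicted lateral deviation

# For a linear model, the gradient of the output w.r.t. the input is just W,
# so ascending it inside the patch region maximally perturbs the steering output.
adv = image.copy()
for _ in range(50):
    grad = W
    adv[patch_idx] = np.clip(adv[patch_idx] + 0.05 * np.sign(grad[patch_idx]), 0, 1)

print("clean output:", predict(image))
print("attacked output:", predict(adv))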
Recent research has proposed the lottery ticket hypothesis, suggesting that for a deep neural network, there exist trainable sub-networks performing equally or better than the original model with commensurate training steps. While this discovery is insightful …
External link:
http://arxiv.org/abs/2003.05733
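For readers unfamiliar with the lottery ticket hypothesis mentioned above, here is a minimal Python sketch of the core prune-and-rewind step on a single weight matrix: keep the largest-magnitude trained weights and reset the survivors to their initialization values. The "training" step below is simulated noise; everything here is illustrative, not the paper's procedure.

# Minimal sketch of the lottery-ticket idea on one weight matrix:
# train, prune the smallest-magnitude weights, rewind survivors to init.
import numpy as np

rng = np.random.default_rng(0)
w_init = rng.normal(size=(8, 8))                          # weights at initialization
w_trained = w_init + rng.normal(scale=0.5, size=(8, 8))   # stand-in for training

def lottery_ticket(w_init, w_trained, sparsity=0.8):
    # Keep the (1 - sparsity) fraction of weights with the largest
    # trained magnitude; zero out the rest.
    threshold = np.quantile(np.abs(w_trained), sparsity)
    mask = np.abs(w_trained) >= threshold
    # The "winning ticket": surviving weights rewound to their initial values.
    return w_init * mask, mask

ticket, mask = lottery_ticket(w_init, w_trained)
print(f"kept {mask.mean():.0%} of weights")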
Lane-Keeping Assistance System (LKAS) is convenient and widely available today, but also extremely security and safety critical. In this work, we design and implement the first systematic approach to attack real-world DNN-based LKASes. We identify …
External link:
http://arxiv.org/abs/2003.01782
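As background for the LKAS attack snippet, a single FGSM-style perturbation step on a toy steering classifier looks like the following; the linear "model", the camera frame, and epsilon are assumptions for illustration, and the paper's actual approach targets real DNN-based systems rather than this toy.

# Minimal FGSM-style sketch against a toy binary steering classifier
# (left vs. right). Hypothetical model and data, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=20), 0.1
x = rng.uniform(0, 1, size=20)   # toy "camera frame"
y = 1.0                          # true label: steer right

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of binary cross-entropy w.r.t. the input of a logistic model: (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.1
x_adv = np.clip(x + eps * np.sign(grad_x), 0, 1)   # one FGSM step up the loss

print("clean prob(right):", sigmoid(w @ x + b))
print("adversarial prob(right):", sigmoid(w @ x_adv + b))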
Academic article
Sign-in is required to view this result.
Author:
Lu, Yantao, Jia, Yunhan, Wang, Jianyu, Li, Bai, Chai, Weiheng, Carin, Lawrence, Velipasalar, Senem
Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models. Although great efforts have been devoted to the transferability …
External link:
http://arxiv.org/abs/1911.11616
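The transferability phenomenon this abstract refers to can be demonstrated in a few lines: craft a perturbation against a surrogate model, then check whether it also fools a separate target model. Both linear models below are toy stand-ins, not the paper's setup.

# Minimal sketch of adversarial transferability: an example crafted against
# a surrogate linear model is evaluated on a distinct target model.
import numpy as np

rng = np.random.default_rng(2)
d = 50
w_surrogate = rng.normal(size=d)
w_target = w_surrogate + rng.normal(scale=0.3, size=d)   # similar but distinct model

x = rng.normal(size=d)
y = np.sign(w_target @ x)          # label the point with the target model

# Sign-gradient step against the *surrogate* only.
eps = 0.3
x_adv = x - eps * y * np.sign(w_surrogate)

# Does the attack transfer? Check whether the target's decision flips.
print("target decision flipped:", np.sign(w_target @ x_adv) != y)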
Author:
Bhatt, Umang, Xiang, Alice, Sharma, Shubham, Weller, Adrian, Taly, Ankur, Jia, Yunhan, Ghosh, Joydeep, Puri, Ruchir, Moura, José M. F., Eckersley, Peter
Explainable machine learning offers the potential to provide stakeholders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential training data. Yet there is little understanding …
External link:
http://arxiv.org/abs/1909.06342
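One of the explanation methods the snippet lists, feature importance scores, can be sketched with permutation importance on a toy regression: shuffle one feature at a time and measure how much the model's error grows. The data, model, and scoring below are hypothetical and only illustrate the mechanism.

# Minimal sketch of permutation feature importance on a toy linear model.
# Illustrative only; not the paper's case studies or methodology.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
true_w = np.array([2.0, 0.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # fitted "model"

def mse(Xm):
    return np.mean((Xm @ w_hat - y) ** 2)

base = mse(X)
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])       # break feature j's link to y
    print(f"feature {j}: importance = {mse(Xp) - base:.3f}")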