Showing 1 - 10 of 50 for search: '"Luo, Jinqi"'
Author:
Luo, Jinqi; Ding, Tianjiao; Chan, Kwan Ho Ryan; Thaker, Darshan; Chattopadhyay, Aditya; Callison-Burch, Chris; Vidal, René
Large Language Models (LLMs) are being used for a wide variety of tasks. While they are capable of generating human-like responses, they can also produce undesirable output including potentially harmful information, racist or sexist language, and hallucinations. …
External link:
http://arxiv.org/abs/2406.04331
When it comes to deploying deep vision models, the behavior of these systems must be explicable to ensure confidence in their reliability and fairness. A common approach to evaluating deep learning models is to build a labeled test set with attributes of interest. …
External link:
http://arxiv.org/abs/2303.15441
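The entry above describes evaluating a vision model with a test set labeled by attributes of interest. As a concrete illustration, here is a minimal, hypothetical sketch of attribute-stratified accuracy in plain Python; the record fields ("input", "label", "attribute") and the toy model are assumptions, not part of the cited paper:

```python
# Hypothetical sketch: per-attribute accuracy over an attribute-labeled test set.
from collections import defaultdict

def accuracy_by_attribute(model, records):
    """records: iterable of dicts with keys 'input', 'label', 'attribute'."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        pred = model(r["input"])               # assumed: model maps input -> label
        total[r["attribute"]] += 1
        correct[r["attribute"]] += int(pred == r["label"])
    return {a: correct[a] / total[a] for a in total}

# Toy usage: a "model" that always predicts 1.
records = [
    {"input": 0.2, "label": 1, "attribute": "glasses"},
    {"input": 0.9, "label": 0, "attribute": "glasses"},
    {"input": 0.5, "label": 1, "attribute": "no_glasses"},
]
print(accuracy_by_attribute(lambda x: 1, records))   # {'glasses': 0.5, 'no_glasses': 1.0}
```

A gap between per-attribute accuracies is the kind of reliability or fairness issue such test sets are built to expose.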
In practice, metric analysis on a specific train and test dataset does not guarantee reliable or fair ML models. This is partially due to the fact that obtaining a balanced, diverse, and perfectly labeled dataset is typically expensive, time-consuming, …
External link:
http://arxiv.org/abs/2303.13010
Deep learning based image recognition systems have been widely deployed on mobile devices in today's world. Recent studies, however, have shown that deep learning models are vulnerable to adversarial examples. One variant of adversarial examples, called adversarial patches, …
External link:
http://arxiv.org/abs/2106.15202
Adversarial training is one of the most effective approaches for defending deep learning models against adversarial examples. Unlike other defense strategies, adversarial training aims to promote the robustness of models intrinsically. During the last …
External link:
http://arxiv.org/abs/2102.01356
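The survey above covers adversarial training, which hardens a model by training on worst-case perturbed inputs rather than clean ones. A minimal PyTorch sketch of the common PGD-based loop follows; the model, loader, and the eps/alpha/steps values are illustrative placeholders, not details from the cited survey:

```python
# Sketch of adversarial training: the inner loop crafts a PGD perturbation,
# the outer step trains the model on the perturbed batch.
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)              # stay inside the eps-ball
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_epoch(model, loader, optimizer):
    model.train()
    for x, y in loader:
        delta = pgd_perturb(model, x, y)
        optimizer.zero_grad()                    # clear grads left by the inner loop
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        optimizer.step()
```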
Adversarial examples are inevitable on the road to pervasive application of deep neural networks (DNNs). Imperceptible perturbations applied to natural samples can lead DNN-based classifiers to output wrong predictions with high confidence scores. It is …
External link:
http://arxiv.org/abs/2011.01539
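The snippet above is about imperceptible perturbations that flip a classifier's prediction. The fast gradient sign method (FGSM) is the classic one-step construction of such a perturbation; a minimal PyTorch sketch, with an illustrative epsilon and inputs assumed to lie in [0, 1]:

```python
# Sketch of FGSM: one signed-gradient step that increases the loss,
# often changing the predicted label while staying visually imperceptible.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```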
Deep neural networks have been shown to be vulnerable to adversarial patches, where exotic patterns can result in a model's wrong prediction. Nevertheless, existing approaches to adversarial patch generation hardly consider the contextual consistency between patches and the image background, …
External link:
http://arxiv.org/abs/2009.09774
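The entry above points out that most patch-generation methods ignore contextual consistency with the background. For orientation, here is a minimal PyTorch sketch of the basic scheme such methods extend: paste a small trainable pattern into images and optimize it toward a target class. The fixed location, patch size, and targeted loss are illustrative assumptions, not the cited paper's method:

```python
# Sketch of training a basic adversarial patch (no contextual consistency).
import itertools
import torch
import torch.nn.functional as F

def apply_patch(x, patch, top=0, left=0):
    """Overwrite a fixed region of the image batch x with the patch."""
    x = x.clone()
    h, w = patch.shape[-2:]
    x[..., top:top + h, left:left + w] = patch.clamp(0, 1)
    return x

def train_patch(model, loader, target_class, size=32, steps=100, lr=0.1):
    patch = torch.rand(3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    batches = itertools.cycle(loader)            # reuse batches if steps > len(loader)
    for _ in range(steps):
        x, _ = next(batches)
        logits = model(apply_patch(x, patch))
        target = torch.full((x.size(0),), target_class)
        loss = F.cross_entropy(logits, target)   # push every image toward the target
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach()
```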
Author:
Huang, Wen; Zhang, Xueping; Tang, Yaxin; Luo, Jinqi; Chen, Jiao; Lu, Yixin; Wang, Lin; Luo, Ze; Zhang, Jianqiang
Published in:
Journal of Water Process Engineering, October 2022, vol. 49
Published in:
Atmospheric Environment, 1 February 2019, 198:133-141