Showing 1 - 10 of 115
for search: '"Wu, Changshun"'
Author:
Dong, Yi, Mu, Ronghui, Zhang, Yanghao, Sun, Siqi, Zhang, Tianle, Wu, Changshun, Jin, Gaojie, Qi, Yi, Hu, Jinwei, Meng, Jie, Bensalem, Saddek, Huang, Xiaowei
In the burgeoning field of Large Language Models (LLMs), developing a robust safety mechanism, colloquially known as "safeguards" or "guardrails", has become imperative to ensure the ethical use of LLMs within prescribed boundaries. This article prov…
External link:
http://arxiv.org/abs/2406.02622
The deployment of generative AI (GenAI) models raises significant fairness concerns, addressed in this paper through novel characterization and enforcement techniques specific to GenAI. Unlike standard AI performing specific tasks, GenAI's broad func…
External link:
http://arxiv.org/abs/2404.16663
Out-of-distribution (OoD) detection techniques for deep neural networks (DNNs) become crucial thanks to their filtering of abnormal inputs, especially when DNNs are used in safety-critical applications and interact with an open and dynamic environment…
External link:
http://arxiv.org/abs/2403.18373
Author:
AbdElSalam, Mohamed, Ali, Loai, Bensalem, Saddek, He, Weicheng, Katsaros, Panagiotis, Kekatos, Nikolaos, Peled, Doron, Temperekidis, Anastasios, Wu, Changshun
In this paper, we present a novel digital twin prototype for a learning-enabled self-driving vehicle. The primary objective of this digital twin is to perform traffic sign recognition and lane keeping. The digital twin architecture relies on co-simul…
External link:
http://arxiv.org/abs/2402.09097
Machine learning has made remarkable advancements, but confidently utilising learning-enabled components in safety-critical domains still poses challenges. Among the challenges, it is known that a rigorous, yet practical, way of achieving safety guar…
External link:
http://arxiv.org/abs/2307.11784
Out-of-distribution (OoD) detection techniques are instrumental for safety-related neural networks. We argue, however, that current performance-oriented OoD detection techniques geared towards matching metrics such as expected calibration error…
External link:
http://arxiv.org/abs/2306.08447
Author:
Huang, Xiaowei, Ruan, Wenjie, Huang, Wei, Jin, Gaojie, Dong, Yi, Wu, Changshun, Bensalem, Saddek, Mu, Ronghui, Qi, Yi, Zhao, Xingyu, Cai, Kaiwen, Zhang, Yanghao, Wu, Sihao, Xu, Peipei, Wu, Dengyu, Freitas, Andre, Mustafa, Mustafa A.
Large Language Models (LLMs) have exploded a new heatwave of AI for their ability to engage end-users in human-level conversations with detailed and articulate answers across many knowledge domains. In response to their fast adoption in many industri…
External link:
http://arxiv.org/abs/2305.11391
For safety assurance of deep neural networks (DNNs), out-of-distribution (OoD) monitoring techniques are essential as they filter spurious input that is distant from the training dataset. This paper studies the problem of systematically testing OoD m…
External link:
http://arxiv.org/abs/2205.07736
Published in:
Journal of Logical and Algebraic Methods in Programming, February 2024, 137
Classification neural networks fail to detect inputs that do not fall inside the classes they have been trained for. Runtime monitoring techniques on the neuron activation pattern can be used to detect such inputs. We present an approach for monitoring…
External link:
http://arxiv.org/abs/2104.14435
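The last abstract describes runtime monitoring of neuron activation patterns: patterns observed on training data are recorded, and an input whose pattern was never seen is flagged as out-of-distribution. A minimal sketch of that idea, assuming a toy fixed-weight ReLU layer standing in for a trained network's hidden layer (the weights, threshold, and class names below are illustrative, not the paper's construction):

```python
import numpy as np

def activation_pattern(hidden, threshold=0.0):
    """Binarize a hidden-layer activation vector into a hashable on/off tuple."""
    return tuple(bool(v) for v in (hidden > threshold))

class ActivationMonitor:
    """Record patterns seen on training data; an unseen pattern is flagged as OoD."""
    def __init__(self):
        self.seen = set()

    def record(self, hidden):
        self.seen.add(activation_pattern(hidden))

    def is_ood(self, hidden):
        return activation_pattern(hidden) not in self.seen

# Toy "hidden layer": fixed weights plus ReLU, a stand-in for a trained DNN layer.
W = np.array([[ 1.0, -1.0,  0.5],
              [-0.5,  1.0,  1.0]])

def hidden_layer(x):
    return np.maximum(x @ W, 0.0)

monitor = ActivationMonitor()
for x in [np.array([1.0, 0.0]),     # pattern (on, off, on)
          np.array([0.0, 1.0]),     # pattern (off, on, on)
          np.array([1.0, 1.0])]:    # pattern (on, off, on), already seen
    monitor.record(hidden_layer(x))

print(monitor.is_ood(hidden_layer(np.array([-1.0, -1.0]))))  # True: all-off pattern unseen
print(monitor.is_ood(hidden_layer(np.array([2.0, 0.0]))))    # False: same pattern as [1, 0]
```

In a real deployment the set of seen patterns would be built over the full training set (often with abstraction, e.g. grouping patterns into boxes or hash buckets, since exact-match sets grow large); the exact-match set here only illustrates the flagging logic.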