Showing 1 - 10 of 30 results for search: '"Jin, Gaojie"'
The Invariant Risk Minimization (IRM) approach aims to address the challenge of domain generalization by training a feature representation that remains invariant across multiple environments. However, in noisy environments, IRM-related techniques suc…
External link:
http://arxiv.org/abs/2407.01749
Author:
Dong, Yi, Mu, Ronghui, Zhang, Yanghao, Sun, Siqi, Zhang, Tianle, Wu, Changshun, Jin, Gaojie, Qi, Yi, Hu, Jinwei, Meng, Jie, Bensalem, Saddek, Huang, Xiaowei
In the burgeoning field of Large Language Models (LLMs), developing a robust safety mechanism, colloquially known as "safeguards" or "guardrails", has become imperative to ensure the ethical use of LLMs within prescribed boundaries. This article prov…
External link:
http://arxiv.org/abs/2406.02622
The Spiking Neural Network (SNN) is acknowledged as the next generation of the Artificial Neural Network (ANN) and holds great promise in effectively processing spatial-temporal information. However, the choice of timestep becomes crucial, as it significantly…
External link:
http://arxiv.org/abs/2405.00699
Author:
Dong, Yi, Mu, Ronghui, Jin, Gaojie, Qi, Yi, Hu, Jinwei, Zhao, Xingyu, Meng, Jie, Ruan, Wenjie, Huang, Xiaowei
As Large Language Models (LLMs) become more integrated into our daily lives, it is crucial to identify and mitigate their risks, especially when the risks can have profound impacts on human users and societies. Guardrails, which filter the inputs or…
External link:
http://arxiv.org/abs/2402.01822
Robust pedestrian trajectory forecasting is crucial to developing safe autonomous vehicles. Although previous works have studied adversarial robustness in the context of trajectory forecasting, some significant issues remain unaddressed. In this work…
External link:
http://arxiv.org/abs/2308.05985
Author:
Huang, Xiaowei, Ruan, Wenjie, Huang, Wei, Jin, Gaojie, Dong, Yi, Wu, Changshun, Bensalem, Saddek, Mu, Ronghui, Qi, Yi, Zhao, Xingyu, Cai, Kaiwen, Zhang, Yanghao, Wu, Sihao, Xu, Peipei, Wu, Dengyu, Freitas, Andre, Mustafa, Mustafa A.
Large Language Models (LLMs) have sparked a new wave of AI enthusiasm with their ability to engage end-users in human-level conversations, giving detailed and articulate answers across many knowledge domains. In response to their fast adoption in many industri…
External link:
http://arxiv.org/abs/2305.11391
In recent years, there has been an explosion of research into developing deep neural networks that are more robust against adversarial examples. Adversarial training appears to be one of the most successful methods. To deal with both the robustness against adver…
External link:
http://arxiv.org/abs/2303.10653
The spiking neural network (SNN), the next generation of the artificial neural network (ANN), more closely mimics natural neural networks and offers promising improvements in computational efficiency. However, current SNN training methodologies predominantly empl…
External link:
http://arxiv.org/abs/2301.09522
Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, so the robustness analysis of c-MARL models is profoundly important. However, robustness certification for c-MARL has not yet been explored…
External link:
http://arxiv.org/abs/2212.11746
Interpretability of Deep Learning (DL) is a barrier to trustworthy AI. Despite great efforts made by the Explainable AI (XAI) community, explanations lack robustness -- indistinguishable input perturbations may lead to different XAI results. Thus, it…
External link:
http://arxiv.org/abs/2208.09418