Showing 1 - 10 of 254
for the search: "Qi, Yanjun"
Recent NLP literature pays little attention to the robustness of toxicity language predictors, even though these systems are most likely to be used in adversarial contexts. This paper presents a novel adversarial attack, ToxicTrap, introducing …
External link:
http://arxiv.org/abs/2404.08690
Assessing the factual consistency of automatically generated texts in relation to their source context is crucial for developing reliable natural language generation applications. Recent literature proposes AlignScore, which uses a unified alignment model …
External link:
http://arxiv.org/abs/2404.06579
Author:
Fang, Xi, Xu, Weijie, Tan, Fiona Anting, Zhang, Jiani, Hu, Ziqing, Qi, Yanjun, Nickleach, Scott, Socolinsky, Diego, Sengamedu, Srinivasan, Faloutsos, Christos
Published in:
TMLR 2024
Recent breakthroughs in large language modeling have facilitated rigorous exploration of their application in diverse tasks related to tabular data modeling, such as prediction, tabular data synthesis, question answering, and table understanding. Each …
External link:
http://arxiv.org/abs/2402.17944
Recent advances in Large Language Models (LLMs) have led to an emergent ability of chain-of-thought (CoT) prompting, a prompt reasoning strategy that adds intermediate rationale steps between questions and answers to construct prompts. Conditioned on …
External link:
http://arxiv.org/abs/2312.04684
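The snippet above describes the mechanics of chain-of-thought prompting: each few-shot example carries an intermediate rationale between its question and its answer. A minimal sketch of assembling such a prompt (the demo question, rationale, and helper name are illustrative, not from the paper):

```python
def build_cot_prompt(examples, question):
    """Assemble a chain-of-thought prompt: every few-shot example
    places a rationale between the question and the final answer,
    and the new question is left open for the model to complete."""
    parts = []
    for ex in examples:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['rationale']} The answer is {ex['answer']}.\n"
        )
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)

# One illustrative worked example with an explicit rationale step.
demo = [{
    "question": "Roger has 5 balls and buys 2 cans of 3 balls each. How many balls?",
    "rationale": "He starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
    "answer": "11",
}]
prompt = build_cot_prompt(demo, "A baker has 12 rolls and sells 4. How many remain?")
```

The prompt ends with an open "A:" so the model is conditioned to emit its own rationale before the answer.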
Recent studies have revealed that NLP predictive models are vulnerable to adversarial attacks. Most existing studies have focused on designing attacks to evaluate the robustness of NLP models in the English language alone. The literature has seen an increasing …
External link:
http://arxiv.org/abs/2306.04874
Machine learning models fail to perform when facing out-of-distribution (OOD) domains, a challenging task known as domain generalization (DG). In this work, we develop a novel DG training strategy, which we call PGrad, to learn a robust gradient direction …
External link:
http://arxiv.org/abs/2305.01134
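The snippet cuts off before PGrad's actual update rule. As a generic illustration of what combining per-domain gradients into one "robust" direction can mean, here is a simple sign-agreement heuristic, which is explicitly NOT the PGrad rule from the paper, only a sketch of the setting:

```python
def agreement_update(domain_grads):
    """Combine per-domain gradient vectors into a single update:
    keep a coordinate only when every training domain agrees on
    its sign (averaging it), otherwise zero it out.
    NOTE: a generic sign-agreement heuristic, not PGrad itself."""
    n = len(domain_grads)
    update = []
    for coords in zip(*domain_grads):
        signs = {(g > 0) - (g < 0) for g in coords}
        update.append(sum(coords) / n if signs in ({1}, {-1}) else 0.0)
    return update

# Two domains disagree on the middle coordinate, so it is masked out.
u = agreement_update([[1.0, -2.0, 0.5], [2.0, 1.0, 0.5]])  # [1.5, 0.0, 0.5]
```

The idea shared with DG training strategies of this family is that coordinates on which domains conflict are unreliable for generalization.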
Published in:
AAAI 2023
Recent NLP literature has seen growing interest in improving model interpretability. Along this direction, we propose a trainable neural network layer that learns a global interaction graph between words and then selects more informative words using …
External link:
http://arxiv.org/abs/2302.02016
Deep reinforcement learning algorithms have succeeded in several challenging domains. Classic online RL job schedulers can learn efficient scheduling strategies but often take thousands of timesteps to explore the environment and adapt from a random …
External link:
http://arxiv.org/abs/2212.00639
Generalized Zero-Shot Learning (GZSL) aims to train a classifier that can generalize to unseen classes, using a set of attributes as auxiliary information together with visual features extracted from a pre-trained convolutional neural network. While recent …
External link:
http://arxiv.org/abs/2211.12494
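The GZSL snippet above describes classifying with per-class attribute vectors as auxiliary information. A minimal attribute-compatibility sketch of that setting (the class names, toy attribute vectors, and the plain dot-product score are illustrative assumptions, not the paper's model):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gzsl_predict(visual_feat, class_attributes):
    """Score every class -- seen or unseen at training time -- by the
    compatibility (here a plain dot product) between the image's CNN
    feature and that class's attribute vector, then take the argmax."""
    return max(class_attributes, key=lambda c: dot(visual_feat, class_attributes[c]))

# Toy attribute dimensions: [striped, domesticated, four-legged, humped]
attrs = {
    "zebra": [1.0, 0.0, 1.0, 0.0],  # unseen class, known only via attributes
    "horse": [0.0, 1.0, 1.0, 0.0],
}
feat = [0.9, 0.1, 0.8, 0.0]         # stands in for a pre-trained CNN feature
pred = gzsl_predict(feat, attrs)    # "zebra"
```

Because unseen classes are represented the same way as seen ones (by attribute vectors), the classifier can rank them without any training images.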
The exponential growth in demand for digital services drives massive datacenter energy consumption and negative environmental impacts. Promoting sustainable solutions to pressing energy and digital infrastructure challenges is crucial. Several …
External link:
http://arxiv.org/abs/2211.05346