Showing 1 - 10 of 88 for search: '"Deng, Zhun"'
Generative artificial intelligence (AI) systems are trained on large data corpora to generate new pieces of text, images, videos, and other media. There is growing concern that such systems may infringe on the copyright interests of training data con…
External link:
http://arxiv.org/abs/2404.13964
Reinforcement learning with human feedback (RLHF) is an emerging paradigm for aligning models with human preferences. Typically, RLHF aggregates preferences from multiple individuals who have diverse viewpoints that may conflict with each other. Our work…
External link:
http://arxiv.org/abs/2403.05006
Author:
Wang, Haonan, Zou, James, Mozer, Michael, Goyal, Anirudh, Lamb, Alex, Zhang, Linjun, Su, Weijie J, Deng, Zhun, Xie, Michael Qizhe, Brown, Hannah, Kawaguchi, Kenji
Creativity serves as a cornerstone for societal progress and innovation. With the rise of advanced generative AI models capable of tasks once reserved for human creativity, the study of AI's creative potential becomes imperative for its responsible d…
External link:
http://arxiv.org/abs/2401.01623
As the number of large language models (LLMs) released to the public grows, there is a pressing need to understand the safety implications of these models learning from third-party custom finetuning data. We explore the behavior of LLMs…
External link:
http://arxiv.org/abs/2312.12736
Author:
Zollo, Thomas P., Morrill, Todd, Deng, Zhun, Snell, Jake C., Pitassi, Toniann, Zemel, Richard
The recent explosion in the capabilities of large language models has led to a wave of interest in how best to prompt a model to perform a given task. While it may be tempting to simply choose a prompt based on average performance on a validation set…
External link:
http://arxiv.org/abs/2311.13628
Standard approaches for uncertainty quantification in deep learning and physics-informed learning have persistent limitations. For instance, strong assumptions regarding the data likelihood are required, and the performance highly depends on the selection…
External link:
http://arxiv.org/abs/2310.06923
Author:
Zhou, Yiyang, Cui, Chenhang, Yoon, Jaehong, Zhang, Linjun, Deng, Zhun, Finn, Chelsea, Bansal, Mohit, Yao, Huaxiu
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, the problem of generating descriptions that include objects…
External link:
http://arxiv.org/abs/2310.00754
Explicit finite-sample statistical guarantees on model performance are an important ingredient in responsible machine learning. Previous work has focused mainly on bounding either the expected loss of a predictor or the probability that an individual…
External link:
http://arxiv.org/abs/2309.13786
Numerous deep learning algorithms have been inspired by and understood via the notion of the information bottleneck, where unnecessary information is (often implicitly) minimized while task-relevant information is maximized. However, a rigorous argument…
External link:
http://arxiv.org/abs/2305.18887
As machine learning has been deployed ubiquitously across applications in modern data science, algorithmic fairness has become a great concern. Among existing approaches, imposing fairness constraints during learning, i.e. in-processing fair training, has been a po…
External link:
http://arxiv.org/abs/2304.03935