Showing 1 - 10 of 49 for search: '"He, Zexue"'
Author:
Wang, Yu, Han, Chi, Wu, Tongtong, He, Xiaoxin, Zhou, Wangchunshu, Sadeq, Nafis, Chen, Xiusi, He, Zexue, Wang, Wei, Haffari, Gholamreza, Ji, Heng, McAuley, Julian
Building a human-like system that continuously interacts with complex environments -- whether simulated digital worlds or human society -- presents several key challenges. Central to this is enabling continuous, high-frequency interactions, where the...
External link:
http://arxiv.org/abs/2409.13265
Author:
Shi, Taiwei, Wang, Zhuoer, Yang, Longqi, Lin, Ying-Chun, He, Zexue, Wan, Mengting, Zhou, Pei, Jauhar, Sujay, Xu, Xiaofeng, Song, Xia, Neville, Jennifer
As large language models (LLMs) continue to advance, aligning these models with human preferences has emerged as a critical challenge. Traditional alignment methods, relying on human or LLM annotated datasets, are limited by their resource-intensive...
External link:
http://arxiv.org/abs/2408.15549
Large language models show impressive abilities in memorizing world knowledge, which leads to concerns regarding memorization of private information, toxic or sensitive knowledge, and copyrighted content. We introduce the problem of Large Scale Knowledge...
External link:
http://arxiv.org/abs/2405.16720
Large language models (LLMs) offer significant potential as tools to support an expanding range of decision-making tasks. Given their training on human (created) data, LLMs have been shown to inherit societal biases against protected groups, as well...
External link:
http://arxiv.org/abs/2403.00811
Large Language Models (LLMs) struggle to handle long input sequences due to high memory and runtime costs. Memory-augmented models have emerged as a promising solution to this problem, but current methods are hindered by limited memory capacity and...
External link:
http://arxiv.org/abs/2402.13449
Enabling large language models (LLMs) to read videos is vital for multimodal LLMs. Existing works show promise on short videos, whereas long video (longer than, e.g., 1 minute) comprehension remains challenging. The major problem lies in the over-compression...
External link:
http://arxiv.org/abs/2402.12079
Ranking items regarding individual user interests is a core technique of multiple downstream tasks such as recommender systems. Learning such a personalized ranker typically relies on the implicit feedback from users' past click-through behaviors. However...
External link:
http://arxiv.org/abs/2401.12553
Published in:
AAAI 2024
Understanding and accurately explaining compatibility relationships between fashion items is a challenging problem in the burgeoning domain of AI-driven outfit recommendations. Present models, while making strides in this area, still occasionally...
External link:
http://arxiv.org/abs/2312.11554
MedEval: A Multi-Level, Multi-Task, and Multi-Domain Medical Benchmark for Language Model Evaluation
Author:
He, Zexue, Wang, Yu, Yan, An, Liu, Yao, Chang, Eric Y., Gentili, Amilcare, McAuley, Julian, Hsu, Chun-Nan
Curated datasets for healthcare are often limited due to the need of human annotations from experts. In this paper, we present MedEval, a multi-level, multi-task, and multi-domain medical benchmark to facilitate the development of language models for...
External link:
http://arxiv.org/abs/2310.14088
Author:
Sachdeva, Noveen, He, Zexue, Kang, Wang-Cheng, Ni, Jianmo, Cheng, Derek Zhiyuan, McAuley, Julian
We study data distillation for auto-regressive machine learning tasks, where the input and output have a strict left-to-right causal structure. More specifically, we propose Farzi, which summarizes an event sequence dataset into a small number of synthetic...
External link:
http://arxiv.org/abs/2310.09983