Showing 1 - 10 of 3 622 for the search: '"Lee, Dong-Ho"'
We present a mathematical framework for modeling two-player noncooperative games in which one player (the defender) is uncertain of the costs of the game and the second player's (the attacker's) intention but can preemptively allocate information-gat
External link:
http://arxiv.org/abs/2404.00733
Author:
Maharana, Adyasha, Lee, Dong-Ho, Tulyakov, Sergey, Bansal, Mohit, Barbieri, Francesco, Fang, Yuwei
Existing works on long-term open-domain dialogues focus on evaluating model responses within contexts spanning no more than five chat sessions. Despite advancements in long-context large language models (LLMs) and retrieval augmented generation (RAG)
External link:
http://arxiv.org/abs/2402.17753
The burgeoning field of on-device AI communication, where devices exchange information directly through embedded foundation models, such as language models (LMs), requires robust, efficient, and generalizable communication frameworks. However, integr
External link:
http://arxiv.org/abs/2402.11656
Author:
Zhang, Zhihan, Lee, Dong-Ho, Fang, Yuwei, Yu, Wenhao, Jia, Mengzhao, Jiang, Meng, Barbieri, Francesco
Instruction tuning has remarkably advanced large language models (LLMs) in understanding and responding to diverse human instructions. Despite the success in high-resource languages, its application in lower-resource ones faces challenges due to the
External link:
http://arxiv.org/abs/2311.08711
Although large language models (LLMs) have advanced the state-of-the-art in NLP significantly, deploying them for downstream applications is still challenging due to cost, responsiveness, control, or concerns around privacy and security. As such, tra
External link:
http://arxiv.org/abs/2310.20111
Author:
Moon, Jihyung, Lee, Dong-Ho, Cho, Hyundong, Jin, Woojeong, Park, Chan Young, Kim, Minwoo, May, Jonathan, Pujara, Jay, Park, Sungjoon
Toxic language, such as hate speech, can deter users from participating in online communities and enjoying popular platforms. Previous approaches to detecting toxic language and norm violations have been primarily concerned with conversations from on
External link:
http://arxiv.org/abs/2305.10731
Temporal knowledge graph (TKG) forecasting benchmarks challenge models to predict future facts using knowledge of past facts. In this paper, we apply large language models (LLMs) to these benchmarks using in-context learning (ICL). We investigate whe
External link:
http://arxiv.org/abs/2305.10613
Author:
Lee, Dong Ho, Ahn, Jaemyung
In this paper, we study the Multi-Start Team Orienteering Problem (MSTOP), a mission re-planning problem in which vehicles are initially located away from the depot and carry different amounts of fuel. We assume the goal of multiple vehicles is
External link:
http://arxiv.org/abs/2303.01963
Author:
Zhou, Pei, Cho, Hyundong, Jandaghi, Pegah, Lee, Dong-Ho, Lin, Bill Yuchen, Pujara, Jay, Ren, Xiang
Human communication relies on common ground (CG), the mutual knowledge and beliefs shared by participants, to produce coherent and interesting conversations. In this paper, we demonstrate that current response generation (RG) models produce generic a
External link:
http://arxiv.org/abs/2211.09267
Author:
Lee, Dong-Ho, Kadakia, Akshen, Joshi, Brihi, Chan, Aaron, Liu, Ziyi, Narahari, Kiran, Shibuya, Takashi, Mitani, Ryosuke, Sekiya, Toshiyuki, Pujara, Jay, Ren, Xiang
NLP models are susceptible to learning spurious biases (i.e., bugs) that work on some datasets but do not properly reflect the underlying task. Explanation-based model debugging aims to resolve spurious biases by showing human users explanations of m
External link:
http://arxiv.org/abs/2210.16978