Showing 1 - 10
of 257
for search: '"Lee, DongHa"'
Author:
Chae, Hyungjoo, Kim, Namyoung, Ong, Kai Tzu-iunn, Gwak, Minju, Song, Gwanwoo, Kim, Jihoon, Kim, Sunghwan, Lee, Dongha, Yeo, Jinyoung
Large language models (LLMs) have recently gained much attention in building autonomous agents. However, the performance of current LLM-based web agents in long-horizon tasks is far from optimal, often yielding errors such as repeatedly buying a non-
External link:
http://arxiv.org/abs/2410.13232
In the domain of Aspect-Based Sentiment Analysis (ABSA), generative methods have shown promising results and achieved substantial advancements. However, despite these advancements, the tasks of extracting sentiment quadruplets, which capture the nuan
External link:
http://arxiv.org/abs/2410.02297
Author:
Kim, Sunghwan, Kang, Dongjin, Kwon, Taeyoon, Chae, Hyungjoo, Won, Jungsoo, Lee, Dongha, Yeo, Jinyoung
Reward models are key in reinforcement learning from human feedback (RLHF) systems, aligning the model behavior with human preferences. Particularly in the math domain, there have been plenty of studies using reward models to align policies for impro
External link:
http://arxiv.org/abs/2410.01729
We study the code generation behavior of instruction-tuned models built on top of code pre-trained language models when they could access an auxiliary function to implement a function. We design several ways to provide auxiliary functions to the mode
External link:
http://arxiv.org/abs/2409.13928
Author:
Yang, Dongil, Lee, Suyeon, Kim, Minjin, Won, Jungsoo, Kim, Namyoung, Lee, Dongha, Yeo, Jinyoung
Engagement between instructors and students plays a crucial role in enhancing students' academic performance. However, instructors often struggle to provide timely and personalized support in large classes. To address this challenge, we propose a nove
External link:
http://arxiv.org/abs/2409.00355
Language models (LMs) have exhibited impressive abilities in generating codes from natural language requirements. In this work, we highlight the diversity of code generated by LMs as a critical criterion for evaluating their code generation capabilit
External link:
http://arxiv.org/abs/2408.14504
Language Models (LMs) are increasingly employed in recommendation systems due to their advanced language understanding and generation capabilities. Recent recommender systems based on generative retrieval have leveraged the inferential abilities of L
External link:
http://arxiv.org/abs/2408.08686
Author:
Kim, Jieyong, Kim, Hyunseo, Cho, Hyunjin, Kang, SeongKu, Chang, Buru, Yeo, Jinyoung, Lee, Dongha
Recent advancements in Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks, generating significant interest in their application to recommendation systems. However, existing methods have not fully capit
External link:
http://arxiv.org/abs/2408.06276
Previous studies on continual knowledge learning (CKL) in large language models (LLMs) have predominantly focused on approaches such as regularization, architectural modifications, and rehearsal techniques to mitigate catastrophic forgetting. However
External link:
http://arxiv.org/abs/2407.16920
Cross-lingual entity alignment (EA) enables the integration of multiple knowledge graphs (KGs) across different languages, providing users with seamless access to diverse and comprehensive knowledge. Existing methods, mostly supervised, face challeng
External link:
http://arxiv.org/abs/2407.15588