Showing 1 - 10 of 69 results for search: '"Yeo, Jinyoung"'
Author:
Yang, Dongil, Lee, Suyeon, Kim, Minjin, Won, Jungsoo, Kim, Namyoung, Lee, Dongha, Yeo, Jinyoung
Engagement between instructors and students plays a crucial role in enhancing students' academic performance. However, instructors often struggle to provide timely and personalized support in large classes. To address this challenge, we propose a novel …
External link:
http://arxiv.org/abs/2409.00355
Language models (LMs) have exhibited impressive abilities in generating code from natural language requirements. In this work, we highlight the diversity of code generated by LMs as a critical criterion for evaluating their code generation capabilities …
External link:
http://arxiv.org/abs/2408.14504
Guiding large language models with a selected set of human-authored demonstrations is a common practice for improving LLM applications. However, human effort can be costly, especially in specialized domains (e.g., clinical diagnosis), and does not guarantee …
External link:
http://arxiv.org/abs/2408.12315
Language Models (LMs) are increasingly employed in recommendation systems due to their advanced language understanding and generation capabilities. Recent recommender systems based on generative retrieval have leveraged the inferential abilities of L…
External link:
http://arxiv.org/abs/2408.08686
Author:
Kim, Jieyong, Kim, Hyunseo, Cho, Hyunjin, Kang, SeongKu, Chang, Buru, Yeo, Jinyoung, Lee, Dongha
Recent advancements in Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks, generating significant interest in their application to recommendation systems. However, existing methods have not fully capitalized …
External link:
http://arxiv.org/abs/2408.06276
Previous studies on continual knowledge learning (CKL) in large language models (LLMs) have predominantly focused on approaches such as regularization, architectural modifications, and rehearsal techniques to mitigate catastrophic forgetting. However, …
External link:
http://arxiv.org/abs/2407.16920
Cross-lingual entity alignment (EA) enables the integration of multiple knowledge graphs (KGs) across different languages, providing users with seamless access to diverse and comprehensive knowledge. Existing methods, mostly supervised, face challenges …
External link:
http://arxiv.org/abs/2407.15588
Author:
Lee, Suyeon, Kim, Sunghwan, Kim, Minju, Kang, Dongjin, Yang, Dongil, Kim, Harim, Kang, Minseok, Jung, Dayi, Kim, Min Hee, Lee, Seungbeen, Chung, Kyoung-Mee, Yu, Youngjae, Lee, Dongha, Yeo, Jinyoung
Recently, the demand for psychological counseling has significantly increased as more individuals express concerns about their mental health. This surge has accelerated efforts to improve the accessibility of counseling by using large language models …
External link:
http://arxiv.org/abs/2407.03103
Author:
Lee, Seungbeen, Lim, Seungwon, Han, Seungju, Oh, Giyeong, Chae, Hyungjoo, Chung, Jiwan, Kim, Minju, Kwak, Beong-woo, Lee, Yeonsoo, Lee, Dongha, Yeo, Jinyoung, Yu, Youngjae
The idea of personality in descriptive psychology, traditionally defined through observable behavior, has now been extended to Large Language Models (LLMs) to better understand their behavior. This raises a question: do LLMs exhibit distinct and consistent …
External link:
http://arxiv.org/abs/2406.14703
Implicit knowledge hidden within the explicit table cells, such as data insights, is the key to generating a high-quality table summary. However, unveiling such implicit knowledge is a non-trivial task. Due to the complex nature of structured tables, …
External link:
http://arxiv.org/abs/2406.12269