Showing 1 - 10 of 604 for search: '"Jung, Jaehun"'
We present a principled approach to provide LLM-based evaluation with a rigorous guarantee of human agreement. We first propose that a reliable evaluation method should not uncritically rely on model preferences for pairwise evaluation, but rather as…
External link:
http://arxiv.org/abs/2407.18370
Author:
Jung, Jaehun, Lu, Ximing, Jiang, Liwei, Brahman, Faeze, West, Peter, Koh, Pang Wei, Choi, Yejin
The current winning recipe for automatic summarization is using proprietary large-scale language models (LLMs) such as ChatGPT as is, or imitation learning from them as teacher models. While increasingly ubiquitous dependence on such large-scale lang…
External link:
http://arxiv.org/abs/2403.13780
The permanence of online content combined with the enhanced authorship identification techniques calls for stronger computational methods to protect the identity and privacy of online authorship when needed, e.g., blind reviews for scientific papers,…
External link:
http://arxiv.org/abs/2402.08761
While text style transfer has many applications across natural language processing, the core premise of transferring from a single source style is unrealistic in a real-world setting. In this work, we focus on arbitrary style transfer: rewriting a te…
External link:
http://arxiv.org/abs/2311.07167
Published in:
European Journal of Adapted Physical Activity, pp. 20-30 (2017)
External link:
https://doaj.org/article/4420d556a7054b0eb832c02f42c41861
Author:
Jung, Jaehun, West, Peter, Jiang, Liwei, Brahman, Faeze, Lu, Ximing, Fisher, Jillian, Sorensen, Taylor, Choi, Yejin
We present Impossible Distillation, a novel framework for paraphrasing and sentence summarization, that distills a high-quality dataset and model from a low-quality teacher that itself cannot perform these tasks. Unlike prior works that rely on an ex…
External link:
http://arxiv.org/abs/2305.16635
Published in:
Journal of Teaching in Physical Education; Oct2024, Vol. 43 Issue 4, p587-596, 10p
Author:
Jung, Jaehun, Qin, Lianhui, Welleck, Sean, Brahman, Faeze, Bhagavatula, Chandra, Bras, Ronan Le, Choi, Yejin
Despite their impressive capabilities, large pre-trained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged as a promising direction to amend this. Howe…
External link:
http://arxiv.org/abs/2205.11822
Author:
Eom, Jungmin, Kim, Yeonjae, Kim, Donghoon, Lee, Eunyoung, Kwon, Soon-Hwan, Jo, Min-Woo, Jung, Jaehun, Park, Hyesook, Park, Bomi
Published in:
In Vaccine 17 September 2024 42(22)
Published in:
Journal of Physical Activity & Health; May2024, Vol. 21 Issue 5, p465-471, 7p