Showing 1 - 10 of 940 for the search: '"Yang Chenghao"'
Open-domain dialogue systems have seen remarkable advancements with the development of large language models (LLMs). Nonetheless, most existing dialogue systems predominantly focus on brief single-session interactions, neglecting the real-world demands …
External link:
http://arxiv.org/abs/2406.05925
Long-context modeling presents a significant challenge for transformer-based large language models (LLMs) due to the quadratic complexity of the self-attention mechanism and issues with length extrapolation caused by pretraining exclusively on short …
External link:
http://arxiv.org/abs/2405.13216
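The "quadratic complexity" the abstract above refers to comes from the n × n attention-score matrix, which grows with the square of the sequence length. A minimal single-head sketch in NumPy (illustrative only; the weight names and dimensions are made up, not taken from the paper) makes the scaling visible:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention; the (n, n) score matrix is the O(n^2) cost."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # shape (n, n): quadratic in sequence length n
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 8, 4                      # 8 tokens, hidden size 4
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                 # (8, 4); the intermediate score matrix was (8, 8)
```

Doubling n doubles the output's first dimension but quadruples the score matrix, which is why long contexts are expensive.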
Recent studies suggest that self-reflective prompting can significantly enhance the reasoning capabilities of Large Language Models (LLMs). However, the use of external feedback as a stop criterion raises doubts about the true extent of LLMs' ability …
External link:
http://arxiv.org/abs/2404.09129
Author:
Yang, Chenghao, Chakrabarty, Tuhin, Hochstatter, Karli R, Slavin, Melissa N, El-Bassel, Nabila, Muresan, Smaranda
In the last decade, the United States has lost more than 500,000 people from an overdose involving prescription and illicit opioids, making it a national public health emergency (USDHHS, 2017). Medical practitioners require robust and timely tools that …
External link:
http://arxiv.org/abs/2311.09066
Author:
Yang, Chenghao, Ettinger, Allyson
Understanding sentence meanings and updating information states appropriately across time -- what we call "situational understanding" (SU) -- is a critical ability for human-like AI agents. SU is essential in particular for chat models, such as ChatGPT …
External link:
http://arxiv.org/abs/2310.16135
Published in:
E3S Web of Conferences, Vol 233, p 04036 (2021)
Four subgrid-scale models based on large eddy simulation (LES), namely Smagorinsky–Lilly (SL), dynamic Smagorinsky–Lilly (DSL), wall-adapting local eddy-viscosity (WALE), and dynamic kinetic-energy transport (KET), were used and coupled with the Ffowcs Williams …
External link:
https://doaj.org/article/431fab8b6353478c97b9351c7705b862
The increasing capabilities of large language models (LLMs) raise opportunities for artificial general intelligence but concurrently amplify safety concerns, such as potential misuse of AI systems, necessitating effective AI alignment. Reinforcement learning …
External link:
http://arxiv.org/abs/2309.16240
Despite the popularity of Shapley Values in explaining neural text classification models, computing them is prohibitive for large pretrained models due to the large number of model evaluations required. In practice, Shapley Values are often estimated with a small …
External link:
http://arxiv.org/abs/2305.19998
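For context on why estimation is the practical route: the exact Shapley value averages a player's marginal contribution over all n! orderings, so the model must be evaluated factorially many times, while the common estimator samples random permutations instead. A self-contained toy sketch (not the paper's method; the additive value function is a made-up example whose Shapley values are known in closed form):

```python
import itertools
import random

def exact_shapley(value_fn, n):
    """Exact Shapley values by enumerating all n! permutations."""
    phi = [0.0] * n
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        coalition = set()
        for player in perm:
            before = value_fn(coalition)
            coalition.add(player)
            phi[player] += value_fn(coalition) - before  # marginal contribution
    return [p / len(perms) for p in phi]

def sampled_shapley(value_fn, n, num_samples=2000, seed=0):
    """Monte Carlo estimate: average marginal contributions over random permutations."""
    rng = random.Random(seed)
    phi = [0.0] * n
    for _ in range(num_samples):
        perm = list(range(n))
        rng.shuffle(perm)
        coalition = set()
        for player in perm:
            before = value_fn(coalition)
            coalition.add(player)
            phi[player] += value_fn(coalition) - before
    return [p / num_samples for p in phi]

# Toy additive game: a coalition is worth the sum of its members' weights,
# so each player's Shapley value equals its own weight.
weights = [1.0, 2.0, 3.0]
v = lambda S: sum(weights[i] for i in S)
print(exact_shapley(v, 3))    # [1.0, 2.0, 3.0]
print(sampled_shapley(v, 3))  # matches here because the toy game is additive
```

For a real classifier, `value_fn` would be a forward pass on a perturbed input, which is exactly the cost the abstract above says dominates.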
Author:
Yang, Chenghao (ych_798@163.com), Liu, Zongjun, Zhang, Lingxiao, Gao, Junqing
Published in:
Journal of Health, Population & Nutrition. 10/16/2024, Vol. 43 Issue 1, p1-12. 12p.
Author:
Wang, Shiqi, Li, Zheng, Qian, Haifeng, Yang, Chenghao, Wang, Zijian, Shang, Mingyue, Kumar, Varun, Tan, Samson, Ray, Baishakhi, Bhatia, Parminder, Nallapati, Ramesh, Ramanathan, Murali Krishna, Roth, Dan, Xiang, Bing
Code generation models have achieved impressive performance. However, they tend to be brittle, as slight edits to a prompt can lead to very different generations; these robustness properties are critical for user experience when deployed in real-life applications …
External link:
http://arxiv.org/abs/2212.10264