Showing 1 - 10 of 3,093 for search: '"Wang, Chenguang"'
The Travelling Salesman Problem (TSP) remains a fundamental challenge in combinatorial optimization, inspiring diverse algorithmic strategies. This paper revisits the "heatmap + Monte Carlo Tree Search (MCTS)" paradigm that has recently gained traction…
External link:
http://arxiv.org/abs/2411.09238
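For context on the "heatmap + MCTS" entry above: in that paradigm, a learned edge heatmap (a matrix of edge probabilities from a neural model) is used to bias a search over candidate tours. The sketch below is only a minimal illustration of the heatmap-guidance idea, using heatmap-weighted sampled tour construction as a simplified stand-in for the paper's full MCTS; the heatmap here is a random placeholder, and all names are hypothetical rather than taken from the paper's code.

```python
import numpy as np

def sample_tour(dist, heat, temperature=1.0, rng=None):
    """Build one tour, picking the next city with probability
    proportional to the heatmap score of the candidate edge.
    dist: (n, n) distance matrix; heat: (n, n) edge-score heatmap."""
    rng = rng or np.random.default_rng()
    n = len(dist)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        cur = tour[-1]
        cand = np.array(sorted(unvisited))
        w = heat[cur, cand] ** (1.0 / temperature)
        w = w / w.sum() if w.sum() > 0 else np.ones(len(cand)) / len(cand)
        nxt = int(rng.choice(cand, p=w))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Toy usage: random instance, random "heatmap" stand-in (a real heatmap
# would come from a trained model), best of 64 sampled tours.
rng = np.random.default_rng(0)
pts = rng.random((20, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
heat = rng.random((20, 20))
best = min((sample_tour(dist, heat, rng=rng) for _ in range(64)),
           key=lambda t: tour_length(dist, t))
print(round(tour_length(dist, best), 3))
```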
Author:
Tan, Sijun, Zhuang, Siyuan, Montgomery, Kyle, Tang, William Y., Cuadron, Alejandro, Wang, Chenguang, Popa, Raluca Ada, Stoica, Ion
LLM-based judges have emerged as a scalable alternative to human evaluation and are increasingly used to assess, compare, and improve models. However, the reliability of LLM-based judges themselves is rarely scrutinized. As LLMs become more advanced…
External link:
http://arxiv.org/abs/2410.12784
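For the LLM-as-judge entry above: the general pattern is to prompt a strong LLM to compare two candidate answers and return a structured verdict. The sketch below is a generic illustration of that pattern under stated assumptions, not the paper's benchmark or protocol; `query_llm` is a hypothetical callable standing in for whatever chat-completion client is actually used.

```python
import json

JUDGE_PROMPT = """You are an impartial judge. Given a question and two answers,
decide which answer is better. Reply with JSON: {{"winner": "A" or "B", "reason": "..."}}.

Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}"""

def judge_pair(question: str, answer_a: str, answer_b: str, query_llm) -> dict:
    """Ask an LLM judge to pick the better of two answers.

    query_llm(prompt) -> str is a hypothetical wrapper around any
    chat-completion API; it should return the model's raw text reply.
    """
    prompt = JUDGE_PROMPT.format(question=question, answer_a=answer_a, answer_b=answer_b)
    reply = query_llm(prompt)
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        return {"winner": None, "reason": reply}  # fall back to raw text

# Example with a dummy judge, just to show the call shape.
dummy = lambda p: json.dumps({"winner": "B", "reason": "more detailed"})
print(judge_pair("What is 2+2?", "4", "4, because 2+2=4.", dummy))
```

In practice such judges are usually queried twice with the A/B order swapped, since position bias is one of the reliability issues this line of work examines.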
In oncology clinical trials, tumor burden (TB) stands as a crucial longitudinal biomarker, reflecting the toll a tumor takes on a patient's prognosis. With certain treatments, the disease's natural progression shows the tumor burden initially receding…
External link:
http://arxiv.org/abs/2409.13873
Electricity Consumption Profiles (ECPs) are crucial for operating and planning power distribution systems, especially with the increasing numbers of various low-carbon technologies such as solar panels and electric vehicles. Traditional ECP modeling…
External link:
http://arxiv.org/abs/2408.08399
We present a new method for large language models to solve compositional tasks. Although they have shown strong performance on traditional language understanding tasks, large language models struggle to solve compositional tasks, where the solution…
External link:
http://arxiv.org/abs/2407.04787
Despite the remarkable advancement of large language models (LLMs), they still lack delicate controllability under sophisticated constraints, which is critical to enhancing their response quality and the user experience. While conditional supervised…
External link:
http://arxiv.org/abs/2406.15938
Author:
Li, Ming, Chen, Pei, Wang, Chenguang, Zhao, Hongyu, Liang, Yijun, Hou, Yupeng, Liu, Fuxiao, Zhou, Tianyi
Finetuning large language models with a variety of instruction-response pairs has enhanced their capability to understand and follow instructions. Current instruction tuning primarily relies on teacher models or human intervention to generate and refine…
External link:
http://arxiv.org/abs/2405.13326
Residential Load Profile (RLP) generation and prediction are critical for the operation and planning of distribution networks, especially as diverse low-carbon technologies (e.g., photovoltaic and electric vehicles) are increasingly adopted. This paper…
External link:
http://arxiv.org/abs/2405.02180
We present a new challenge to examine whether large language models understand social norms. In contrast to existing datasets, our dataset requires a fundamental understanding of social norms to solve. Our dataset features the largest set of social norm…
External link:
http://arxiv.org/abs/2404.02491
Author:
Rakuten Group, Levine, Aaron, Huang, Connie, Wang, Chenguang, Batista, Eduardo, Szymanska, Ewa, Ding, Hongyi, Chou, Hou Wei, Pessiot, Jean-François, Effendi, Johanes, Chiu, Justin, Ohlhus, Kai Torben, Chopra, Karan, Shinzato, Keiji, Murakami, Koji, Xiong, Lee, Chen, Lei, Kubota, Maki, Tkachenko, Maksim, Lee, Miroku, Takahashi, Naoki, Jwalapuram, Prathyusha, Tatsushima, Ryutaro, Jain, Saurabh, Yadav, Sunil Kumar, Cai, Ting, Chen, Wei-Te, Xia, Yandi, Nakayama, Yuki, Higashiyama, Yutaka
We introduce RakutenAI-7B, a suite of Japanese-oriented large language models that achieve the best performance on the Japanese LM Harness benchmarks among the open 7B models. Along with the foundation model, we release instruction- and chat-tuned models…
External link:
http://arxiv.org/abs/2403.15484