Showing 1 - 10 of 83 results for search: '"Huang, Jimin"'
Author:
Yang, Yuzhe, Zhang, Yifei, Hu, Yan, Guo, Yilin, Gan, Ruoli, He, Yueru, Lei, Mingcong, Zhang, Xiao, Wang, Haining, Xie, Qianqian, Huang, Jimin, Yu, Honghai, Wang, Benyou
This paper introduces UCFE: the User-Centric Financial Expertise benchmark, an innovative framework designed to evaluate the ability of large language models (LLMs) to handle complex real-world financial tasks. The UCFE benchmark adopts a hybrid approach…
External link:
http://arxiv.org/abs/2410.14059
Intelligent auditing represents a crucial advancement in modern audit practices, enhancing both the quality and efficiency of audits within the realm of artificial intelligence. With the rise of large language models (LLMs), there is enormous potential…
External link:
http://arxiv.org/abs/2410.10873
Author:
Gilson, Aidan, Ai, Xuguang, Xie, Qianqian, Srinivasan, Sahana, Pushpanathan, Krithi, Singer, Maxwell B., Huang, Jimin, Kim, Hyunjae, Long, Erping, Wan, Peixing, Del Priore, Luciano V., Ohno-Machado, Lucila, Xu, Hua, Liu, Dianbo, Adelman, Ron A., Tham, Yih-Chung, Chen, Qingyu
Large Language Models (LLMs) are poised to revolutionize healthcare. Ophthalmology-specific LLMs remain scarce and underexplored. We introduced an open-source, specialized LLM for ophthalmology, termed Language Enhanced Model for Eye (LEME). LEME was…
External link:
http://arxiv.org/abs/2410.03740
The emergence of social media has made the spread of misinformation easier. In the financial domain, the accuracy of information is crucial for various aspects of financial markets, which has made financial misinformation detection (FMD) an urgent problem…
External link:
http://arxiv.org/abs/2409.16452
Recent advancements in large language model alignment leverage token-level supervisions to perform fine-grained preference optimization. However, existing token-level alignment methods either optimize on all available tokens, which can be noisy…
External link:
http://arxiv.org/abs/2408.13518
Author:
Xie, Qianqian, Li, Dong, Xiao, Mengxi, Jiang, Zihao, Xiang, Ruoyu, Zhang, Xiao, Chen, Zhengyu, He, Yueru, Han, Weiguang, Yang, Yuzhe, Chen, Shunian, Zhang, Yifei, Shen, Lihang, Kim, Daniel, Liu, Zhiwei, Luo, Zheheng, Yu, Yangyang, Cao, Yupeng, Deng, Zhiyang, Yao, Zhiyuan, Li, Haohang, Feng, Duanyu, Dai, Yongfu, Somasundaram, VijayaSai, Lu, Peng, Zhao, Yilun, Long, Yitao, Xiong, Guojun, Smith, Kaleb, Yu, Honghai, Lai, Yanzhao, Peng, Min, Nie, Jianyun, Suchow, Jordan W., Liu, Xiao-Yang, Wang, Benyou, Lopez-Lira, Alejandro, Huang, Jimin, Ananiadou, Sophia
Large language models (LLMs) have advanced financial applications, yet they often lack sufficient financial knowledge and struggle with tasks involving multi-modal inputs like tables and time series data. To address these limitations, we introduce…
External link:
http://arxiv.org/abs/2408.11878
Author:
Wang, Yuxin, Feng, Duanyu, Dai, Yongfu, Chen, Zhengyu, Huang, Jimin, Ananiadou, Sophia, Xie, Qianqian, Wang, Hao
Data serves as the fundamental foundation for advancing deep learning, particularly tabular data presented in a structured format, which is highly conducive to modeling. However, even in the era of LLMs, obtaining tabular data from sensitive domains remains…
External link:
http://arxiv.org/abs/2408.02927
Author:
Yu, Yangyang, Yao, Zhiyuan, Li, Haohang, Deng, Zhiyang, Cao, Yupeng, Chen, Zhi, Suchow, Jordan W., Liu, Rong, Cui, Zhenyu, Zhang, Denghui, Subbalakshmi, Koduvayur, Xiong, Guojun, He, Yueru, Huang, Jimin, Li, Dong, Xie, Qianqian
Large language models (LLMs) have demonstrated notable potential in conducting complex tasks and are increasingly utilized in various financial applications. However, high-quality sequential financial investment decision-making remains challenging…
External link:
http://arxiv.org/abs/2407.06567
Recent advancements in large language models (LLMs) focus on aligning to heterogeneous human expectations and values via multi-objective preference alignment. However, existing methods are dependent on the policy model parameters, which require high-…
External link:
http://arxiv.org/abs/2403.17141
Author:
Hu, Gang, Qin, Ke, Yuan, Chenhan, Peng, Min, Lopez-Lira, Alejandro, Wang, Benyou, Ananiadou, Sophia, Huang, Jimin, Xie, Qianqian
While the progression of Large Language Models (LLMs) has notably propelled financial analysis, their application has largely been confined to singular language realms, leaving untapped the potential of bilingual Chinese-English capacity. To bridge…
External link:
http://arxiv.org/abs/2403.06249