Showing 1 - 10 of 657 for search: '"Petzold, Linda"'
Author:
Li, Zekun, Yang, Xianjun, Choi, Kyuri, Zhu, Wanrong, Hsieh, Ryan, Kim, HyeonJung, Lim, Jin Hyuk, Ji, Sungyoung, Lee, Byungju, Yan, Xifeng, Petzold, Linda Ruth, Wilson, Stephen D., Lim, Woosang, Wang, William Yang
The rapid advancement of Large Language Models (LLMs) and Large Multimodal Models (LMMs) has heightened the demand for AI-based scientific assistants capable of understanding scientific articles and figures. Despite progress, there remains a significant…
External link:
http://arxiv.org/abs/2407.04903
Author:
Zhang, Xinlu, Chen, Zhiyu Zoey, Ye, Xi, Yang, Xianjun, Chen, Lichang, Wang, William Yang, Petzold, Linda Ruth
Instruction Fine-Tuning (IFT) significantly enhances the zero-shot capabilities of pretrained Large Language Models (LLMs). While coding data is known to boost reasoning abilities during LLM pretraining, its role in activating internal reasoning capabilities…
External link:
http://arxiv.org/abs/2405.20535
Author:
Chen, Zhiyu Zoey, Ma, Jing, Zhang, Xinlu, Hao, Nan, Yan, An, Nourbakhsh, Armineh, Yang, Xianjun, McAuley, Julian, Petzold, Linda, Wang, William Yang
In the fast-evolving domain of artificial intelligence, large language models (LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance, healthcare, and law: domains characterized by their reliance on professional expertise, challe…
External link:
http://arxiv.org/abs/2405.01769
This paper presents the development of a specialized chatbot for materials science, leveraging the Llama-2 language model, and continuing pre-training on the expansive research articles in the materials science domain from the S2ORC dataset. The method…
External link:
http://arxiv.org/abs/2401.01089
Author:
Zhang, Xinlu, Lu, Yujie, Wang, Weizhi, Yan, An, Yan, Jun, Qin, Lianke, Wang, Heng, Yan, Xifeng, Wang, William Yang, Petzold, Linda Ruth
Automatically evaluating vision-language tasks is challenging, especially when it comes to reflecting human judgments due to limitations in accounting for fine-grained details. Although GPT-4V has shown promising results in various multi-modal tasks…
External link:
http://arxiv.org/abs/2311.01361
Author:
Yang, Xianjun, Pan, Liangming, Zhao, Xuandong, Chen, Haifeng, Petzold, Linda, Wang, William Yang, Cheng, Wei
The burgeoning capabilities of advanced large language models (LLMs) such as ChatGPT have led to an increase in synthetic content generation with implications across a variety of sectors, including media, cybersecurity, public discourse, and education…
External link:
http://arxiv.org/abs/2310.15654
Instruction-finetuning (IFT) has become crucial in aligning Large Language Models (LLMs) with diverse human needs and has shown great potential in medical applications. However, previous studies mainly fine-tune LLMs on biomedical datasets with limited…
External link:
http://arxiv.org/abs/2310.14558
This work proposes a training-free approach for the detection of LLM-generated code, mitigating the risks associated with its indiscriminate usage. To the best of our knowledge, our research is the first to investigate zero-shot detection techniques…
External link:
http://arxiv.org/abs/2310.05103
Author:
Yang, Xianjun, Wang, Xiao, Zhang, Qi, Petzold, Linda, Wang, William Yang, Zhao, Xun, Lin, Dahua
Warning: This paper contains examples of harmful language, and reader discretion is recommended. The increasing open release of powerful large language models (LLMs) has facilitated the development of downstream applications by reducing the essential…
External link:
http://arxiv.org/abs/2310.02949
Symbolic regression with polynomial neural networks and polynomial neural ordinary differential equations (ODEs) are two recent and powerful approaches for equation recovery in many science and engineering problems. However, these methods provide point…
External link:
http://arxiv.org/abs/2308.10892