Evaluating LLM Reasoning in the Operations Research Domain with ORQA

Authors: Mostajabdaveh, Mahdi; Yu, Timothy T.; Dash, Samarendra Chandan Bindu; Ramamonjison, Rindranirina; Byusa, Jabo Serge; Carenini, Giuseppe; Zhou, Zirui; Zhang, Yong
Publication year: 2024
Subject:
Document type: Working Paper
Description: In this paper, we introduce and apply Operations Research Question Answering (ORQA), a new benchmark designed to assess the generalization capabilities of Large Language Models (LLMs) in the specialized technical domain of Operations Research (OR). This benchmark evaluates whether LLMs can emulate the knowledge and reasoning skills of OR experts when confronted with diverse and complex optimization problems. The dataset, developed by OR experts, features real-world optimization problems that demand multi-step reasoning to construct their mathematical models. Our evaluations of various open-source LLMs, such as LLaMA 3.1, DeepSeek, and Mixtral, reveal only modest performance, highlighting a gap in their ability to generalize to specialized technical domains. This work contributes to the ongoing discourse on LLMs' generalization capabilities, offering valuable insights for future research in this area. The dataset and evaluation code are publicly available.
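The record does not specify the dataset schema or evaluation harness, so the following is a minimal hypothetical sketch of what an accuracy-based evaluation loop over an ORQA-style question-answering dataset might look like. The file name (orqa_instances.json), the instance fields (context, question, options, answer), and the query_llm stub are all illustrative assumptions, not the authors' published code or data format.

```python
import json


def query_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g., a locally hosted LLaMA 3.1
    # endpoint). Returning "0" turns this into a trivial
    # always-pick-the-first-option baseline so the script runs end to end.
    return "0"


def evaluate(path: str) -> float:
    """Score a model on hypothetical multiple-choice OR instances.

    Each instance is assumed to hold a natural-language optimization
    problem ('context'), a question about its mathematical model
    ('question'), candidate answers ('options'), and the index of the
    correct option ('answer') -- an assumed schema, not the published one.
    """
    with open(path) as f:
        instances = json.load(f)

    correct = 0
    for inst in instances:
        prompt = (
            f"Problem description:\n{inst['context']}\n\n"
            f"Question: {inst['question']}\n"
            + "\n".join(f"{i}. {opt}" for i, opt in enumerate(inst["options"]))
            + "\nAnswer with the number of the correct option."
        )
        reply = query_llm(prompt).strip()
        # Take the first digit in the reply as the chosen option index.
        chosen = next((ch for ch in reply if ch.isdigit()), None)
        if chosen is not None and int(chosen) == inst["answer"]:
            correct += 1

    return correct / len(instances)


if __name__ == "__main__":
    print(f"accuracy: {evaluate('orqa_instances.json'):.3f}")
```

In practice the digit-extraction step would be replaced by whatever answer-parsing convention the benchmark prescribes; it is shown here only to make the scoring loop concrete.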
Comment: 12 pages, 10 figures. Accepted for publication at AAAI-25.
Database: arXiv