AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning

Author: Wu, Shirley; Zhao, Shiyu; Huang, Qian; Huang, Kexin; Yasunaga, Michihiro; Cao, Kaidi; Ioannidis, Vassilis N.; Subbian, Karthik; Leskovec, Jure; Zou, James
Publication Year: 2024
Subject:
Document Type: Working Paper
Description: Large language model (LLM) agents have demonstrated impressive capabilities in utilizing external tools and knowledge to boost accuracy and reduce hallucinations. However, developing prompting techniques that enable LLM agents to effectively use these tools and knowledge remains a heuristic and labor-intensive task. Here, we introduce AvaTaR, a novel and automated framework that optimizes an LLM agent to effectively leverage provided tools, improving performance on a given task. During optimization, we design a comparator module to iteratively deliver insightful and comprehensive prompts to the LLM agent by contrastively reasoning between positive and negative examples sampled from training data. We demonstrate AvaTaR on four complex multimodal retrieval datasets featuring textual, visual, and relational information, and three general question-answering (QA) datasets. We find AvaTaR consistently outperforms state-of-the-art approaches across all seven tasks, exhibiting strong generalization ability when applied to novel cases and achieving an average relative improvement of 14% on the Hit@1 metric for the retrieval datasets and 13% for the QA datasets. Code and dataset are available at https://github.com/zou-group/avatar.
Comment: NeurIPS 2024 main conference
Database: arXiv
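
As a rough illustration of the optimization loop sketched in the description, the following Python snippet shows one way a comparator module could contrast positive and negative examples to refine an actor agent's instructions. It is a minimal sketch under stated assumptions, not the authors' implementation: the interfaces call_llm and evaluate, the 0.5 success threshold, and the prompt wording are all hypothetical placeholders.

    # Illustrative sketch only -- names and the 0.5 threshold are assumptions,
    # not the actual AvaTaR API; see the linked repository for the real code.
    import random

    def optimize_actor_prompt(actor_prompt, train_set, call_llm, evaluate,
                              iterations=10, batch_size=8):
        """Refine an agent's instructions by contrasting good and bad examples.

        call_llm(prompt) -> str          # hypothetical LLM interface (assumed)
        evaluate(answer, gold) -> float  # task metric in [0, 1], e.g. Hit@1
        train_set: list of (query, gold_answer) pairs
        """
        for _ in range(iterations):
            # Sample a batch and run the current actor prompt on each query.
            batch = random.sample(train_set, batch_size)
            scored = []
            for query, gold in batch:
                answer = call_llm(f"{actor_prompt}\n\nQuery: {query}")
                scored.append((query, answer, evaluate(answer, gold)))
            # Split the batch into positive (successful) and negative (failed)
            # examples based on the task metric.
            positives = [s for s in scored if s[2] >= 0.5]
            negatives = [s for s in scored if s[2] < 0.5]
            if not positives or not negatives:
                continue  # contrastive reasoning needs both groups
            # Comparator step: an LLM contrasts the two groups and proposes
            # improved instructions for the actor.
            comparator_prompt = (
                "Current agent instructions:\n" + actor_prompt +
                "\n\nSuccessful cases:\n" +
                "\n".join(f"- {q} -> {a}" for q, a, _ in positives) +
                "\n\nFailed cases:\n" +
                "\n".join(f"- {q} -> {a}" for q, a, _ in negatives) +
                "\n\nContrast the successes with the failures and write "
                "revised instructions that keep what works and fix what fails."
            )
            actor_prompt = call_llm(comparator_prompt)
        return actor_prompt

How examples are actually sampled, scored, and fed to the comparator is defined in the repository linked in the description.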