Showing 1 - 10 of 101 for search: '"Arik, Sercan O."'
Retrieval-augmented generation (RAG) empowers large language models (LLMs) to utilize external knowledge sources. The increasing capacity of LLMs to process longer input sequences opens up avenues for providing more retrieved information, to potentially…
External link:
http://arxiv.org/abs/2410.05983
Author:
Pourreza, Mohammadreza, Li, Hailong, Sun, Ruoxi, Chung, Yeounoh, Talaei, Shayan, Kakkar, Gaurav Tarlok, Gan, Yu, Saberi, Amin, Ozcan, Fatma, Arik, Sercan O.
In tackling the challenges of large language model (LLM) performance for Text-to-SQL tasks, we introduce CHASE-SQL, a new framework that employs innovative strategies, using test-time compute in multi-agent modeling to improve candidate generation and…
External link:
http://arxiv.org/abs/2410.01943
Author:
Pourreza, Mohammadreza, Sun, Ruoxi, Li, Hailong, Miculicich, Lesly, Pfister, Tomas, Arik, Sercan O.
Recent advances in Text-to-SQL have largely focused on the SQLite dialect, neglecting the diverse landscape of SQL dialects like BigQuery and PostgreSQL. This limitation is due to the diversity in SQL syntaxes and functions, along with the high cost…
External link:
http://arxiv.org/abs/2408.12733
Multimodal Large Language Models (MLLMs) demonstrate remarkable image-language capabilities, but their widespread use faces challenges in cost-effective training and adaptation. Existing approaches often necessitate expensive language model retraining…
External link:
http://arxiv.org/abs/2408.06610
Embeddings from Large Language Models (LLMs) have emerged as critical components in various applications, particularly for information retrieval. While high-dimensional embeddings generally demonstrate superior performance as they contain more salient…
External link:
http://arxiv.org/abs/2407.20243
Author:
Su, Hongjin, Yen, Howard, Xia, Mengzhou, Shi, Weijia, Muennighoff, Niklas, Wang, Han-yu, Liu, Haisu, Shi, Quan, Siegel, Zachary S., Tang, Michael, Sun, Ruoxi, Yoon, Jinsung, Arik, Sercan O., Chen, Danqi, Yu, Tao
Existing retrieval benchmarks primarily consist of information-seeking queries (e.g., aggregated questions from search engines) where keyword or semantic-based retrieval is usually sufficient. However, many complex real-world queries require in-depth…
External link:
http://arxiv.org/abs/2407.12883
Large language models have demonstrated remarkable capabilities, but their performance is heavily reliant on effective prompt engineering. Automatic prompt optimization (APO) methods are designed to automate this and can be broadly categorized into…
External link:
http://arxiv.org/abs/2406.15708
Large Language Models (LLMs), with their remarkable ability to tackle challenging and unseen reasoning problems, hold immense potential for tabular learning, which is vital for many real-world applications. In this paper, we propose a novel in-context…
External link:
http://arxiv.org/abs/2404.09491
Large language models (LLMs) have attracted huge interest in practical applications given their increasingly accurate responses and coherent reasoning abilities. Given their nature as black-boxes using complex reasoning processes on their inputs, it…
External link:
http://arxiv.org/abs/2312.01279
Large language models (LLMs) have recently shown great advances in a variety of tasks, including natural language understanding and generation. However, their use in high-stakes decision-making scenarios is still limited due to the potential for errors…
External link:
http://arxiv.org/abs/2310.11689