Showing 1 - 10 of 11,213 results for search: '"Pourreza, A."'
Author:
Pourreza, Mohammadreza, Li, Hailong, Sun, Ruoxi, Chung, Yeounoh, Talaei, Shayan, Kakkar, Gaurav Tarlok, Gan, Yu, Saberi, Amin, Ozcan, Fatma, Arik, Sercan O.
In tackling the challenges of large language model (LLM) performance for Text-to-SQL tasks, we introduce CHASE-SQL, a new framework that employs innovative strategies, using test-time compute in multi-agent modeling to improve candidate generation and …
External link:
http://arxiv.org/abs/2410.01943
Author:
Pourreza, Mohammadreza, Sun, Ruoxi, Li, Hailong, Miculicich, Lesly, Pfister, Tomas, Arik, Sercan O.
Recent advances in Text-to-SQL have largely focused on the SQLite dialect, neglecting the diverse landscape of SQL dialects like BigQuery and PostgreSQL. This limitation is due to the diversity in SQL syntaxes and functions, along with the high cost …
External link:
http://arxiv.org/abs/2408.12733
Author:
Panchal, Sunny, Bhattacharyya, Apratim, Berger, Guillaume, Mercier, Antoine, Bohm, Cornelius, Dietrichkeit, Florian, Pourreza, Reza, Li, Xuanlin, Madan, Pulkit, Lee, Mingu, Todorovich, Mark, Bax, Ingo, Memisevic, Roland
Vision-language models have shown impressive progress in recent years. However, existing models are largely limited to turn-based interactions, where each turn must be stepped (i.e., prompted) by the user. Open-ended, asynchronous interactions, where …
External link:
http://arxiv.org/abs/2407.08101
Author:
Kavehzadeh, Parsa, Pourreza, Mohammadreza, Valipour, Mojtaba, Zhu, Tinashu, Bai, Haoli, Ghodsi, Ali, Chen, Boxing, Rezagholizadeh, Mehdi
Deployment of autoregressive large language models (LLMs) is costly, and as these models increase in size, the associated costs will become even more considerable. Consequently, different methods have been proposed to accelerate the token generation …
External link:
http://arxiv.org/abs/2407.01955
Author:
Feng, Yuxi, Li, Raymond, Fan, Zhenan, Carenini, Giuseppe, Pourreza, Mohammadreza, Zhang, Weiwei, Zhang, Yong
While in-context learning (ICL) has proven to be an effective technique to improve the performance of Large Language Models (LLMs) in a variety of complex tasks, notably in translating natural language questions into Structured Query Language (NL2SQL) …
External link:
http://arxiv.org/abs/2406.07913
Translating natural language questions into SQL queries, known as text-to-SQL, is a long-standing research problem. Effective text-to-SQL synthesis can become very challenging due to (i) the extensive size of database catalogs (descriptions of tables …
External link:
http://arxiv.org/abs/2405.16755
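A minimal sketch of the text-to-SQL task described in this entry; the question, schema (customers and orders tables), and query below are hypothetical illustrations, not taken from the cited paper:

    # Hypothetical text-to-SQL pair: a natural-language question and an
    # equivalent SQL query over an assumed customers/orders schema.
    question = "Which customers placed more than five orders in 2023?"
    sql = (
        "SELECT c.name "
        "FROM customers AS c "
        "JOIN orders AS o ON o.customer_id = c.id "
        "WHERE o.order_date >= '2023-01-01' AND o.order_date < '2024-01-01' "
        "GROUP BY c.id, c.name "
        "HAVING COUNT(*) > 5;"
    )
    print(f"Question: {question}")
    print(f"SQL: {sql}")

A text-to-SQL system receives the question (together with a description of the database schema) as input and must produce the corresponding SQL query as output.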
Detecting structural similarity between queries is essential for selecting examples in in-context learning models. However, assessing structural similarity based solely on the natural language expressions of queries, without considering SQL queries, …
External link:
http://arxiv.org/abs/2403.16204
Author:
Zhao, Liangyu, Maleki, Saeed, Shah, Aashaka, Yang, Ziyue, Pourreza, Hossein, Krishnamurthy, Arvind
As modern DNN models grow ever larger, collective communications between the accelerators (allreduce, etc.) emerge as a significant performance bottleneck. Designing efficient communication schedules is challenging, given today's highly diverse and …
External link:
http://arxiv.org/abs/2402.06787
Author:
Xiong, Yifan, Jiang, Yuting, Yang, Ziyue, Qu, Lei, Zhao, Guoshuai, Liu, Shuguang, Zhong, Dong, Pinzur, Boris, Zhang, Jie, Wang, Yang, Jose, Jithin, Pourreza, Hossein, Baxter, Jeff, Datta, Kushal, Ram, Prabhat, Melton, Luke, Chau, Joe, Cheng, Peng, Xiong, Yongqiang, Zhou, Lidong
Reliability in cloud AI infrastructure is crucial for cloud service providers, prompting the widespread use of hardware redundancies. However, these redundancies can inadvertently lead to hidden degradation, so-called "gray failure," for AI workloads …
External link:
http://arxiv.org/abs/2402.06194
Author:
Pourreza, Mohammadreza, Rafiei, Davood
Leading models for the text-to-SQL task heavily rely on proprietary Large Language Models (LLMs), posing concerns over data privacy. Closing the performance gap between small open-source models and large proprietary models is crucial to mitigate this …
External link:
http://arxiv.org/abs/2402.01117