Showing 1 - 10 of 197 for search: '"Lam, Monica S"'
While language model (LM)-powered chatbots and generative search engines excel at answering concrete queries, discovering information in the terrain of unknown unknowns remains challenging for users. To emulate the common educational scenario where …
External link:
http://arxiv.org/abs/2408.15232
Author:
Liu, Shicheng, Semnani, Sina J., Triedman, Harold, Xu, Jialiang, Zhao, Isaac Dan, Lam, Monica S.
Large Language Models (LLMs) have led to significant improvements in the Knowledge Base Question Answering (KBQA) task. However, datasets used in KBQA studies do not capture the true complexity of KBQA tasks. They either have simple questions, use …
External link:
http://arxiv.org/abs/2407.11417
Programming LLM-based knowledge and task assistants that faithfully conform to developer-provided policies is challenging. These agents must retrieve and provide consistent, accurate, and relevant information to address users' queries and needs. Yet …
External link:
http://arxiv.org/abs/2407.05674
Author:
Furumai, Kazuaki, Legaspi, Roberto, Vizcarra, Julio, Yamazaki, Yudai, Nishimura, Yasutaka, Semnani, Sina J., Ikeda, Kazushi, Shi, Weiyan, Lam, Monica S.
Persuasion plays a pivotal role in a wide range of applications from health intervention to the promotion of social good. Persuasive chatbots employed responsibly for social good can be an enabler of positive individual and social change. Existing …
External link:
http://arxiv.org/abs/2407.03585
Author:
Zhang, Heidi C., Semnani, Sina J., Ghassemi, Farhad, Xu, Jialiang, Liu, Shicheng, Lam, Monica S.
We introduce SPAGHETTI: Semantic Parsing Augmented Generation for Hybrid English information from Text Tables and Infoboxes, a hybrid question-answering (QA) pipeline that utilizes information from heterogeneous knowledge sources, including knowledge …
External link:
http://arxiv.org/abs/2406.00562
Author:
Lee, Andrew H., Semnani, Sina J., Castillo-López, Galo, de Chalendar, Gaël, Choudhury, Monojit, Dua, Ashna, Kavitha, Kapil Rajesh, Kim, Sungkyun, Kodali, Prashant, Kumaraguru, Ponnurangam, Lombard, Alexis, Moradshahi, Mehrad, Park, Gihyun, Semmar, Nasredine, Seo, Jiwon, Shen, Tianhao, Shrivastava, Manish, Xiong, Deyi, Lam, Monica S.
Creating multilingual task-oriented dialogue (TOD) agents is challenging due to the high cost of training data acquisition. Following the research trend of improving training data efficiency, we show for the first time that in-context learning is …
External link:
http://arxiv.org/abs/2405.17840
We study how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages. This underexplored problem poses new challenges at the pre-writing stage, including how …
External link:
http://arxiv.org/abs/2402.14207
Author:
Liu, Shicheng, Xu, Jialiang, Tjangnaka, Wesley, Semnani, Sina J., Yu, Chen Jie, Lam, Monica S.
While most conversational agents are grounded on either free-text or structured knowledge, many knowledge corpora consist of hybrid sources. This paper presents the first conversational agent that supports the full generality of hybrid data access …
External link:
http://arxiv.org/abs/2311.09818
Author:
Moradshahi, Mehrad, Shen, Tianhao, Bali, Kalika, Choudhury, Monojit, de Chalendar, Gaël, Goel, Anmol, Kim, Sungkyun, Kodali, Prashant, Kumaraguru, Ponnurangam, Semmar, Nasredine, Semnani, Sina J., Seo, Jiwon, Seshadri, Vivek, Shrivastava, Manish, Sun, Michael, Yadavalli, Aditya, You, Chaobin, Xiong, Deyi, Lam, Monica S.
Task-oriented dialogue research has mainly focused on a few popular languages like English and Chinese, due to the high dataset creation cost for a new language. To reduce the cost, we apply manual editing to automatically translated data. We create …
External link:
http://arxiv.org/abs/2306.17674
Author:
Yang, Jackie Junrui, Shi, Yingtian, Zhang, Yuhan, Li, Karina, Rosli, Daniel Wan, Jain, Anisha, Zhang, Shuning, Li, Tianshi, Landay, James A., Lam, Monica S.
By combining voice and touch interactions, multimodal interfaces can surpass the efficiency of either modality alone. Traditional multimodal frameworks require laborious developer work to support rich multimodal commands where the user's multimodal …
External link:
http://arxiv.org/abs/2306.09649