Showing 1 - 10 of 25 for search: '"Choubey, Prafulla Kumar"'
Author:
Choubey, Prafulla Kumar, Su, Xin, Luo, Man, Peng, Xiangyu, Xiong, Caiming, Le, Tiep, Rosenman, Shachar, Lal, Vasudev, Mui, Phil, Ho, Ricky, Howard, Phillip, Wu, Chien-Sheng
Knowledge graphs (KGs) generated by large language models (LLMs) are becoming increasingly valuable for Retrieval-Augmented Generation (RAG) applications that require knowledge-intensive reasoning. However, existing KG extraction methods predominantly…
External link:
http://arxiv.org/abs/2410.16597
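As an illustration only (not the paper's method), a minimal sketch of prompting an LLM for (subject, relation, object) triples might look like the following; `call_llm` and the prompt wording are hypothetical stand-ins for any chat-completion client:

```python
import json

def extract_triples(text, call_llm):
    """Prompt an LLM for (subject, relation, object) triples.

    `call_llm` is a hypothetical stand-in: any function that takes a
    prompt string and returns the model's text completion.
    """
    prompt = (
        "Extract knowledge-graph triples from the passage below. "
        'Respond with a JSON list of ["subject", "relation", "object"] lists.\n\n'
        + text
    )
    raw = call_llm(prompt)
    # Parse the JSON answer into Python tuples for downstream graph building.
    return [tuple(t) for t in json.loads(raw)]
```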
Evaluating retrieval-augmented generation (RAG) systems remains challenging, particularly for open-ended questions that lack definitive answers and require coverage of multiple sub-topics. In this paper, we introduce a novel evaluation framework based…
External link:
http://arxiv.org/abs/2410.15531
Ideal summarization models should generalize to novel summary-worthy content without remembering reference training summaries by rote. However, a single average performance score on the entire test set is inadequate for determining such model competence…
External link:
http://arxiv.org/abs/2311.09458
Author:
Huang, Kung-Hsiang, Laban, Philippe, Fabbri, Alexander R., Choubey, Prafulla Kumar, Joty, Shafiq, Xiong, Caiming, Wu, Chien-Sheng
Previous research in multi-document news summarization has typically concentrated on collating information that all sources agree upon. However, the summarization of diverse information dispersed across multiple articles about an event remains underexplored…
External link:
http://arxiv.org/abs/2309.09369
Author:
Nijkamp, Erik, Xie, Tian, Hayashi, Hiroaki, Pang, Bo, Xia, Congying, Xing, Chen, Vig, Jesse, Yavuz, Semih, Laban, Philippe, Krause, Ben, Purushwalkam, Senthil, Niu, Tong, Kryściński, Wojciech, Murakhovs'ka, Lidiya, Choubey, Prafulla Kumar, Fabbri, Alex, Liu, Ye, Meng, Rui, Tu, Lifu, Bhat, Meghana, Wu, Chien-Sheng, Savarese, Silvio, Zhou, Yingbo, Joty, Shafiq, Xiong, Caiming
Large Language Models (LLMs) have become ubiquitous across various domains, transforming the way we interact with information and conduct research. However, most high-performing LLMs remain confined behind proprietary walls, hindering scientific progress…
External link:
http://arxiv.org/abs/2309.03450
State-of-the-art summarization models still struggle to be factually consistent with the input text. A model-agnostic way to address this problem is post-editing the generated summaries. However, existing approaches typically fail to remove entity errors…
External link:
http://arxiv.org/abs/2211.06196
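For illustration only (not the approach in the paper above), a naive entity-based post-editor can be sketched with spaCy: drop any summary sentence that mentions an entity absent from the source. The model name and the exact-match rule are assumptions.

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def post_edit(source, summary):
    # Keep only summary sentences whose named entities all appear in the source.
    source_ents = {ent.text.lower() for ent in nlp(source).ents}
    kept = []
    for sent in nlp(summary).sents:
        if {ent.text.lower() for ent in sent.ents} <= source_ents:
            kept.append(sent.text)
    return " ".join(kept)
```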
Pre-trained language models (PLMs) have been shown effective for zero-shot (0shot) text classification. 0shot models based on natural language inference (NLI) and next sentence prediction (NSP) employ a cross-encoder architecture and infer by making a…
External link:
http://arxiv.org/abs/2210.12619
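The NLI-style cross-encoder inference described above can be sketched with the Hugging Face zero-shot pipeline; the model choice, input text, and labels below are illustrative assumptions, not the paper's setup:

```python
from transformers import pipeline

# The premise is the input text; each candidate label becomes a hypothesis
# such as "This example is about {label}.", scored by the NLI cross-encoder.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The central bank raised interest rates by half a point.",
    candidate_labels=["economics", "sports", "weather"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```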
Prompt tuning approaches, which learn task-specific soft prompts for a downstream task conditioning on frozen pre-trained models, have attracted growing interest due to their parameter efficiency. With large language models and sufficient training data…
External link:
http://arxiv.org/abs/2210.12587
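A minimal sketch of soft prompt tuning on a frozen backbone, assuming the Hugging Face peft library; the base model and number of virtual tokens are illustrative, not the paper's configuration:

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # frozen backbone
config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the soft-prompt embeddings train
```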
We propose to leverage news discourse profiling to model document-level temporal structures for building temporal dependency graphs. Our key observation is that the functional roles of sentences used for profiling news discourse signify different time…
External link:
http://arxiv.org/abs/2210.11787
Recent work (e.g., LAMA (Petroni et al., 2019)) has found that the quality of the factual information extracted from Large Language Models (LLMs) depends on the prompts used to query them. This inconsistency is problematic because different users will…
External link:
http://arxiv.org/abs/2110.07280
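The prompt sensitivity described above can be reproduced in miniature with a fill-mask pipeline; the model and the paraphrased cloze prompts below are illustrative assumptions in the spirit of LAMA-style queries:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
# Two paraphrases of the same factual query; the scores (and sometimes the
# top answer) can differ, which is the inconsistency at issue.
for prompt in [
    "The native language of Barack Obama is [MASK].",
    "Barack Obama speaks [MASK].",
]:
    top = fill(prompt)[0]
    print(prompt, "->", top["token_str"], round(top["score"], 3))
```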