Showing 1 - 10 of 65 for search: '"Munikoti, Sai"'
Multimodal foundation models (MFMs) such as OFASys show the potential to unlock analysis of complex data such as images, videos, and audio data via text prompts alone. However, their performance may suffer in the face of text input that differs even…
External link:
http://arxiv.org/abs/2408.14595
Authors:
Meyur, Rounak, Phan, Hung, Wagle, Sridevi, Strube, Jan, Halappanavar, Mahantesh, Horawalavithana, Sameera, Acharya, Anurag, Munikoti, Sai
In the rapidly evolving landscape of Natural Language Processing (NLP) and text generation, the emergence of Retrieval Augmented Generation (RAG) presents a promising avenue for improving the quality and reliability of generated text by leveraging in…
External link:
http://arxiv.org/abs/2408.11800
Authors:
Phan, Hung, Acharya, Anurag, Meyur, Rounak, Chaturvedi, Sarthak, Sharma, Shivam, Parker, Mike, Nally, Dan, Jannesari, Ali, Pazdernik, Karl, Halappanavar, Mahantesh, Munikoti, Sai, Horawalavithana, Sameera
As LLMs become increasingly ubiquitous, researchers have tried various techniques to augment the knowledge provided to these models. Long context and retrieval-augmented generation (RAG) are two such methods that have recently gained popularity. In t…
External link:
http://arxiv.org/abs/2407.07321
Authors:
Munikoti, Sai, Stewart, Ian, Horawalavithana, Sameera, Kvinge, Henry, Emerson, Tegan, Thompson, Sandra E, Pazdernik, Karl
Multimodal models are expected to be a critical component to future advances in artificial intelligence. This field is starting to grow rapidly with a surge of new design elements motivated by the success of foundation models in natural language proc…
External link:
http://arxiv.org/abs/2406.05496
Large language models record impressive performance on many natural language processing tasks. However, their knowledge capacity is limited to the pretraining corpus. Retrieval augmentation offers an effective solution by retrieving context from exte…
External link:
http://arxiv.org/abs/2311.12289
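The entry above mentions retrieval augmentation only at a high level. As a generic illustration (not the method of the paper linked above), the following minimal Python sketch retrieves the corpus passage most similar to a query by bag-of-words cosine similarity and prepends it to the prompt; the corpus, query, and function names are illustrative assumptions.

```python
# Minimal retrieval-augmentation sketch (generic, stdlib only): score passages
# against the query with bag-of-words cosine similarity, then build a prompt
# that contains the retrieved context before the question.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(corpus, key=lambda p: cosine(q, Counter(p.lower().split())), reverse=True)
    return ranked[:k]

corpus = [
    "Retrieval augmented generation grounds model outputs in external documents.",
    "Instruction finetuning aligns language models with human intent.",
]
query = "How does retrieval augmented generation reduce hallucinations?"
context = retrieve(query, corpus, k=1)[0]
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this augmented prompt would then be passed to an LLM of choice
```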
Large language models (LLMs) have shown remarkable achievements in natural language processing tasks, producing high-quality outputs. However, LLMs still exhibit limitations, including the generation of factually incorrect information. In safety-crit…
External link:
http://arxiv.org/abs/2311.09358
Despite the dramatic progress in Large Language Model (LLM) development, LLMs often provide seemingly plausible but not factual information, often referred to as hallucinations. Retrieval-augmented LLMs provide a non-parametric approach to solve thes…
External link:
http://arxiv.org/abs/2311.04348
Authors:
Acharya, Anurag, Munikoti, Sai, Hellinger, Aaron, Smith, Sara, Wagle, Sridevi, Horawalavithana, Sameera
As LLMs have become increasingly popular, they have been used in almost every field. But as the application for LLMs expands from generic fields to narrow, focused science domains, there exists an ever-increasing gap in ways to evaluate their efficac…
External link:
http://arxiv.org/abs/2310.10920
Instruction finetuning is a popular paradigm to align large language models (LLM) with human intent. Despite its popularity, this idea is less explored in improving the LLMs to align existing foundation models with scientific disciplines, concepts an…
External link:
http://arxiv.org/abs/2307.01139
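Instruction finetuning, mentioned in the entry above, is easiest to picture through the shape of its training data. The sketch below shows one common way instruction/response pairs are rendered into training strings; the template and example records are illustrative assumptions, not drawn from the paper.

```python
# Generic sketch of instruction-finetuning data preparation: each record pairs
# an instruction with a target response, and both are rendered into a single
# training string with a fixed template. Template and examples are
# illustrative only; they are not taken from the paper referenced above.
TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

records = [
    {"instruction": "Define retrieval augmented generation in one sentence.",
     "response": "It conditions a language model on documents retrieved for the query."},
    {"instruction": "List two limitations of large language models.",
     "response": "Knowledge is frozen at pretraining time, and outputs can be factually wrong."},
]

# These strings are what a supervised finetuning run would consume.
training_texts = [TEMPLATE.format(**r) for r in records]
for text in training_texts:
    print(text, end="\n\n")
```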
Uncertainty quantification is a critical yet unsolved challenge for deep learning, especially for the time series imputation with irregularly sampled measurements. To tackle this problem, we propose a novel framework based on the principles of recurr…
External link:
http://arxiv.org/abs/2306.01189
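The last entry concerns uncertainty quantification for imputing irregularly sampled time series. As a much simpler stand-in for the recurrent framework it proposes, the sketch below quantifies imputation uncertainty with a small ensemble of basic imputers, using the ensemble mean as the estimate and its spread as the uncertainty; the data and imputers are made up for illustration.

```python
# Generic illustration (not the recurrent framework proposed above): impute a
# missing value in an irregularly sampled series with several simple imputers
# and use the disagreement between them as a crude uncertainty estimate.
import statistics

times = [0.0, 1.3, 4.1, 9.7]    # irregular observation times (hypothetical)
values = [2.0, 2.4, 3.9, 7.1]   # observed measurements (hypothetical)
t_query = 6.0                   # time at which the value is missing

def linear_interp(t, ts, vs):
    """Piecewise-linear interpolation between the surrounding observations."""
    for (t0, v0), (t1, v1) in zip(zip(ts, vs), zip(ts[1:], vs[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return vs[-1]

def nearest_neighbour(t, ts, vs):
    """Value of the observation closest in time."""
    return min(zip(ts, vs), key=lambda p: abs(p[0] - t))[1]

def forward_fill(t, ts, vs):
    """Value of the last observation made at or before time t."""
    past = [(t0, v) for t0, v in zip(ts, vs) if t0 <= t]
    return max(past)[1] if past else vs[0]

estimates = [f(t_query, times, values) for f in (linear_interp, nearest_neighbour, forward_fill)]
print(f"imputed value: {statistics.mean(estimates):.2f} "
      f"+/- {statistics.stdev(estimates):.2f} (ensemble spread)")
```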