Showing 1 - 10 of 16
for search: '"Raha, Tathagata"'
Author:
Abdul, Wadood M, Pimentel, Marco AF, Salman, Muhammad Umar, Raha, Tathagata, Christophe, Clément, Kanithi, Praveen K, Hayat, Nasir, Rajan, Ronnie, Khan, Shadab
This technical report introduces a Named Clinical Entity Recognition Benchmark for evaluating language models in healthcare, addressing the crucial natural language processing (NLP) task of extracting structured information from clinical narratives…
External link:
http://arxiv.org/abs/2410.05046
Author:
Christophe, Clément, Raha, Tathagata, Maslenkova, Svetlana, Salman, Muhammad Umar, Kanithi, Praveen K, Pimentel, Marco AF, Khan, Shadab
Large Language Models (LLMs) have demonstrated significant potential in transforming clinical applications. In this study, we investigate the efficacy of four techniques in adapting LLMs for clinical use-cases: continuous pretraining, instruct fine-tuning…
External link:
http://arxiv.org/abs/2409.14988
Author:
Kanithi, Praveen K, Christophe, Clément, Pimentel, Marco AF, Raha, Tathagata, Saadi, Nada, Javed, Hamza, Maslenkova, Svetlana, Hayat, Nasir, Rajan, Ronnie, Khan, Shadab
The rapid development of Large Language Models (LLMs) for healthcare applications has spurred calls for holistic evaluation beyond frequently-cited benchmarks like USMLE, to better reflect real-world performance. While real-world assessments are valuable…
External link:
http://arxiv.org/abs/2409.07314
Med42-v2 introduces a suite of clinical large language models (LLMs) designed to address the limitations of generic models in healthcare settings. These models are built on the Llama3 architecture and fine-tuned using specialized clinical data…
External link:
http://arxiv.org/abs/2408.06142
Beyond Metrics: A Critical Analysis of the Variability in Large Language Model Evaluation Frameworks
Author:
Pimentel, Marco AF, Christophe, Clément, Raha, Tathagata, Munjal, Prateek, Kanithi, Praveen K, Khan, Shadab
As large language models (LLMs) continue to evolve, the need for robust and standardized evaluation benchmarks becomes paramount. Evaluating the performance of these models is a complex challenge that requires careful consideration of various linguistic…
External link:
http://arxiv.org/abs/2407.21072
This paper describes our approach for SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. The BRAINTEASER task comprises multiple-choice Question Answering designed to evaluate the models' lateral thinking capabilities. It consists of…
External link:
http://arxiv.org/abs/2405.16129
Author:
Christophe, Clément, Kanithi, Praveen K, Munjal, Prateek, Raha, Tathagata, Hayat, Nasir, Rajan, Ronnie, Al-Mahrooqi, Ahmed, Gupta, Avani, Salman, Muhammad Umar, Gosal, Gurpreet, Kanakiya, Bhargav, Chen, Charles, Vassilieva, Natalia, Amor, Boulbaba Ben, Pimentel, Marco AF, Khan, Shadab
This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies - full-parameter fine-tuning and parameter-efficient tuning - within the context of medical Large Language Models (LLMs). We developed and refined…
External link:
http://arxiv.org/abs/2404.14779
Author:
Raha, Tathagata, Choudhary, Mukund, Menon, Abhinav, Gupta, Harshit, Srivatsa, KV Aditya, Gupta, Manish, Varma, Vasudeva
Factual consistency is one of the most important requirements when editing high quality documents. It is extremely important for automatic text generation systems like summarization, question answering, dialog modeling, and language modeling. Still…
External link:
http://arxiv.org/abs/2306.08872
Author:
Raha, Tathagata, Indurthi, Vijayasaradhi, Upadhyaya, Aayush, Kataria, Jeevesh, Bommakanti, Pramud, Keswani, Vikram, Varma, Vasudeva
The evolution of social media platforms has empowered everyone to access information easily. Social media users can easily share information with the rest of the world. This may sometimes encourage the spread of fake news, which can result in undesirable…
External link:
http://arxiv.org/abs/2101.11954
Identifying adverse and hostile content on the web, and more particularly on social media, has become a problem of paramount interest in recent years. With their ever-increasing popularity, fine-tuning of pretrained Transformer-based encoder models…
External link:
http://arxiv.org/abs/2101.03382