Showing 1 - 10 of 93 for search: '"Bhatia, Parminder"'
Large language models (LLMs) can learn vast amounts of knowledge from diverse domains during pre-training. However, long-tail knowledge from specialized domains is often scarce and underrepresented, rarely appearing in the models' memorization. Prior…
External link:
http://arxiv.org/abs/2410.23605
Author:
Jiang, Pengcheng, Xiao, Cao, Jiang, Minhao, Bhatia, Parminder, Kass-Hout, Taha, Sun, Jimeng, Han, Jiawei
Large language models (LLMs) have demonstrated significant potential in clinical decision support. Yet LLMs still suffer from hallucinations and lack fine-grained contextual medical knowledge, limiting their high-stakes healthcare applications such as…
External link:
http://arxiv.org/abs/2410.04585
Parameter-Efficient Fine-Tuning (PEFT) offers an efficient solution for fine-tuning large pretrained language models for downstream tasks. However, most PEFT strategies are manually designed, often resulting in suboptimal performance. Recent automatic…
External link:
http://arxiv.org/abs/2410.09079
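For intuition, here is a minimal numpy sketch of one widely used PEFT technique (LoRA): the pretrained weight stays frozen and only a low-rank correction is trained. This is a generic illustration, not the automatic PEFT strategy this paper studies.

```python
# Minimal LoRA sketch (illustrative only, not this paper's method).
# A frozen weight W is adapted by a trainable low-rank update B @ A,
# so only r * (d_in + d_out) parameters are learned.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16

W = rng.normal(size=(d_out, d_in))          # pretrained weight, frozen
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (init 0)

def forward(x):
    # Adapted layer: original path plus scaled low-rank correction.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
print(forward(x).shape)                     # (512,)
full, lora = W.size, A.size + B.size
print(f"trainable params: {lora} vs {full} ({lora / full:.1%})")
```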
Author:
Da, Longchao, Wang, Rui, Xu, Xiaojian, Bhatia, Parminder, Kass-Hout, Taha, Wei, Hua, Xiao, Cao
Medical imaging is crucial for diagnosing a patient's health condition, and accurate segmentation of these images is essential for isolating regions of interest to ensure precise diagnosis and treatment planning. Existing methods primarily rely on…
External link:
http://arxiv.org/abs/2410.12831
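Segmentation accuracy of the kind this abstract emphasizes is commonly scored with the Dice coefficient; the sketch below is a generic illustration of that metric, not this paper's method.

```python
# Dice coefficient: a standard overlap metric for segmentation quality.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float((2 * inter + eps) / (pred.sum() + gt.sum() + eps))

gt = np.zeros((64, 64)); gt[16:48, 16:48] = 1      # ground-truth ROI
pred = np.zeros((64, 64)); pred[20:52, 20:52] = 1  # shifted prediction
print(f"Dice: {dice(pred, gt):.3f}")
```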
Knowledge Graph Embedding (KGE) techniques are crucial for learning compact representations of entities and relations within a knowledge graph, facilitating efficient reasoning and knowledge discovery. While existing methods typically focus either on…
External link:
http://arxiv.org/abs/2405.16412
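As background, a classic KGE scoring function is TransE, where a relation acts as a translation between entity embeddings; this minimal sketch illustrates the general idea only and is not the method proposed in the paper.

```python
# TransE sketch: a true triple (h, r, t) should satisfy E[h] + R[r] ≈ E[t],
# so a smaller translation distance means a more plausible fact.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 100, 10, 32
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def score(h: int, r: int, t: int) -> float:
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

# Rank all candidate tails for (head=0, relation=3): link prediction.
ranks = np.argsort([score(0, 3, t) for t in range(n_entities)])
print("top-5 predicted tails:", ranks[:5])
```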
Author:
Wang, Xiyao, Chen, Jiuhai, Wang, Zhaoyang, Zhou, Yuhang, Zhou, Yiyang, Yao, Huaxiu, Zhou, Tianyi, Goldstein, Tom, Bhatia, Parminder, Huang, Furong, Xiao, Cao
Large vision-language models (LVLMs) have achieved impressive results in various visual question-answering and reasoning tasks through vision instruction tuning on specific datasets. However, there is still significant room for improvement in the alignment…
External link:
http://arxiv.org/abs/2405.15973
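For context on what "vision instruction tuning" data looks like, here is a hedged sketch of a typical LLaVA-style training record; all field names and values are hypothetical, not taken from this paper.

```python
# Illustrative vision instruction-tuning record (field names assumed).
sample = {
    "image": "example_0042.png",  # hypothetical image path
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is shown in this image?"},
        {"from": "gpt", "value": "A chest X-ray with an opacity in the left lower lobe."},
    ],
}
# Alignment work then asks whether the answer is grounded in the image
# rather than in language priors alone.
```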
The advent of large language models (LLMs) has significantly advanced natural language processing tasks like text summarization. However, their large size and computational demands, coupled with privacy concerns in data transmission, limit their use…
External link:
http://arxiv.org/abs/2403.10351
Author:
Athiwaratkun, Ben, Gonugondla, Sujan Kumar, Gouda, Sanjay Krishna, Qian, Haifeng, Ding, Hantian, Sun, Qing, Wang, Jun, Guo, Jiacheng, Chen, Liangfu, Bhatia, Parminder, Nallapati, Ramesh, Sengupta, Sudipta, Xiang, Bing
This study introduces bifurcated attention, a method designed to enhance language model inference in shared-context batch decoding scenarios. Our approach addresses the challenge of redundant memory IO costs, a critical factor contributing to latency…
External link:
http://arxiv.org/abs/2403.08845
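The abstract's core idea, splitting decode-time attention so the shared prompt's KV cache is read once instead of once per batch element, can be sketched as follows. This is a simplified single-head numpy illustration under assumed shapes, not the paper's implementation.

```python
# Bifurcated attention sketch: keep one copy of the shared-prefix KV cache
# and split each decode step's attention into a shared part and a
# per-sample part, combined by a single softmax.
import numpy as np

rng = np.random.default_rng(0)
b, m, s, d = 4, 128, 16, 64   # batch, shared prefix len, per-sample len, dim

K_shared = rng.normal(size=(m, d)); V_shared = rng.normal(size=(m, d))  # one copy
K_own = rng.normal(size=(b, s, d)); V_own = rng.normal(size=(b, s, d))  # per sample
q = rng.normal(size=(b, d))                                             # decode queries

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

# Part 1: all queries attend to the single shared KV copy (one GEMM,
# no per-sample replication of the prefix cache).
logits_shared = q @ K_shared.T / np.sqrt(d)                   # (b, m)
# Part 2: each query attends to its own decoded suffix.
logits_own = np.einsum("bd,bsd->bs", q, K_own) / np.sqrt(d)   # (b, s)

# One softmax over the concatenated context, then the weighted values.
w = softmax(np.concatenate([logits_shared, logits_own], axis=1))
out = w[:, :m] @ V_shared + np.einsum("bs,bsd->bd", w[:, m:], V_own)
print(out.shape)  # (4, 64)
```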
Author:
Anand, Deepa, M, Gurunath Reddy, Singhal, Vanika, Shanbhag, Dattesh D., KS, Shriram, Patil, Uday, Bhushan, Chitresh, Manickam, Kavitha, Gui, Dawei, Mullick, Rakesh, Gopal, Avinash, Bhatia, Parminder, Kass-Hout, Taha
Recent advances in Vision Transformers (ViT) and Stable Diffusion (SD) models, with their ability to capture rich semantic features of an image, have been used for image correspondence tasks on natural images. In this paper, we examine the ability of…
External link:
http://arxiv.org/abs/2310.18642
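Feature-based correspondence of the kind described here typically reduces to nearest-neighbor matching between per-patch descriptors; the sketch below uses random stand-in features and assumed grid sizes, rather than actual ViT or SD activations.

```python
# Dense correspondence sketch: match each source patch to its nearest
# target patch by cosine similarity between per-patch features.
import numpy as np

rng = np.random.default_rng(0)
n_patches, d = 196, 768                     # e.g., a 14x14 ViT patch grid
feat_src = rng.normal(size=(n_patches, d))  # features of image A (stand-in)
feat_tgt = rng.normal(size=(n_patches, d))  # features of image B (stand-in)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

sim = normalize(feat_src) @ normalize(feat_tgt).T  # (196, 196) cosine sims
matches = sim.argmax(axis=1)                       # best target patch per source patch
print("patch 0 in A corresponds to patch", matches[0], "in B")
```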
Author:
Ravishankar, Hariharan, Patil, Rohan, Melapudi, Vikram, Suthar, Harsh, Anzengruber, Stephan, Bhatia, Parminder, Kass-Hout, Taha, Annangi, Pavan
In this paper, we present SonoSAMTrack, which combines a promptable foundational model for segmenting objects of interest on ultrasound images, called SonoSAM, with a state-of-the-art contour tracking model to propagate segmentations on 2D+t and 3D ultrasound…
External link:
http://arxiv.org/abs/2310.16872
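The segment-then-track pattern the abstract describes can be sketched as a simple loop: prompt a segmenter on the first frame, then hand each new frame to a tracker. Both `segment` and `track` below are hypothetical stubs, not SonoSAM's actual interface.

```python
# Segment-then-track sketch with hypothetical stubs (not SonoSAM's API).
import numpy as np

def segment(frame: np.ndarray, prompt: tuple[int, int]) -> np.ndarray:
    """Stub promptable segmenter: returns a disk around the click prompt."""
    yy, xx = np.mgrid[: frame.shape[0], : frame.shape[1]]
    return (yy - prompt[0]) ** 2 + (xx - prompt[1]) ** 2 < 15 ** 2

def track(prev_mask: np.ndarray, prev_frame: np.ndarray,
          next_frame: np.ndarray) -> np.ndarray:
    """Stub contour tracker: identity propagation (real trackers warp the contour)."""
    return prev_mask

frames = [np.zeros((64, 64)) for _ in range(10)]   # 2D+t ultrasound stand-in
masks = [segment(frames[0], prompt=(32, 32))]      # one click on frame 0
for prev, nxt in zip(frames, frames[1:]):
    masks.append(track(masks[-1], prev, nxt))      # propagate frame by frame
print(len(masks), int(masks[0].sum()))
```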