Showing 1 - 10 of 179 747 for the search: '"A., Agrawal"'
Medical image segmentation is crucial in robotic surgery, disease diagnosis, and treatment planning. This research presents an innovative methodology that combines Kolmogorov-Arnold Networks (KAN) with an adapted Mamba layer for medical image segmentation … (a minimal KAN-layer sketch follows the link below)
External link:
http://arxiv.org/abs/2411.11926
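For orientation only: the entry above names a combination of Kolmogorov-Arnold Networks (KAN) with a Mamba layer. The PyTorch sketch below illustrates the KAN-layer idea in a heavily simplified form (a learnable mixture of fixed 1-D basis functions per edge); it is not the paper's architecture, and the class name and sine basis are assumptions.

import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    # Toy KAN-style layer: every edge (input i -> output o) applies a learnable
    # univariate function, here parameterized as a weighted sum of fixed sine bases.
    def __init__(self, in_dim: int, out_dim: int, n_basis: int = 4):
        super().__init__()
        self.coeffs = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))
        self.register_buffer("freqs", torch.arange(1, n_basis + 1).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, in_dim)
        basis = torch.sin(x.unsqueeze(-1) * self.freqs)    # (batch, in_dim, n_basis)
        # Sum the per-edge univariate functions over all inputs for each output unit.
        return torch.einsum("bik,oik->bo", basis, self.coeffs)

# Usage sketch: encoder features could pass through such a layer before a
# (Mamba-style) sequence block; that part is omitted here.
layer = ToyKANLayer(in_dim=8, out_dim=16)
print(layer(torch.randn(2, 8)).shape)   # torch.Size([2, 16])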
Learning-based solutions for long-tailed recognition face difficulties in generalizing on balanced test datasets. Due to the imbalanced data prior, the learned a posteriori distribution is biased toward the most frequent (head) classes, leading … (a standard logit-adjustment sketch follows the link below)
External link:
http://arxiv.org/abs/2412.16540
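As a reading aid: a standard way to counteract the head-class bias described above is post-hoc logit adjustment, i.e., subtracting the log of the class prior from the logits at test time. The sketch below shows that generic correction; it is not claimed to be this paper's method, and the argument names are illustrative.

import torch

def logit_adjust(logits: torch.Tensor, class_counts: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # Subtract tau * log(prior): classes over-represented during training get their
    # scores pushed down, rebalancing the posterior learned under the imbalanced prior.
    prior = class_counts.float() / class_counts.sum()
    return logits - tau * torch.log(prior)

# usage: adjusted = logit_adjust(model(x), counts); prediction = adjusted.argmax(dim=-1)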
Concurrent computation and communication (C3) is a pervasive paradigm in ML and other domains, making its performance optimization crucial. In this paper, we carefully characterize C3 in ML on GPUs, which are the most widely deployed accelerators for ML training and … (an overlap sketch follows the link below)
External link:
http://arxiv.org/abs/2412.14335
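For context, a common pattern behind compute/communication concurrency on GPUs is to launch an asynchronous collective and run independent compute before waiting on it. The PyTorch sketch below shows that generic pattern; it assumes an already-initialized torch.distributed process group and is not taken from the paper.

import torch
import torch.distributed as dist

def overlapped_step(bucket: torch.Tensor, x: torch.Tensor, w: torch.Tensor):
    # Kick off the collective asynchronously ...
    work = dist.all_reduce(bucket, op=dist.ReduceOp.SUM, async_op=True)
    # ... and run independent compute while the communication is in flight.
    y = x @ w
    work.wait()          # synchronize before the reduced bucket is consumed
    return y, bucket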
Author:
Kumar, Shanu, Kholkar, Gauri, Mendke, Saish, Sadana, Anubhav, Agrawal, Parag, Dandapat, Sandipan
With the growth of social media and large language models, content moderation has become crucial. Many existing datasets lack adequate representation of different groups, resulting in unreliable assessments. To tackle this, we propose a socio-cultural …
External link:
http://arxiv.org/abs/2412.13578
This study applies machine learning to predict S&P 500 membership changes: key events that profoundly impact investor behavior and market dynamics. Quarterly data from WRDS datasets (2013 onwards) were used, incorporating features such as industry classification … (a hypothetical classifier sketch follows the link below)
External link:
http://arxiv.org/abs/2412.12539
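Illustration only: the entry above describes a tabular classification task on quarterly features. The sketch below shows one plausible scikit-learn setup; the file name, column names, and choice of gradient boosting are hypothetical, not taken from the study.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical quarterly feature table; the study's actual WRDS-derived features differ.
df = pd.read_csv("quarterly_features.csv")
X = df[["market_cap", "industry_code", "earnings_growth"]]
y = df["added_to_sp500"]            # assumed binary label: added to the index or not

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))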
Diffusion Policies have become widely used in Imitation Learning, offering several appealing properties, such as generating multimodal and discontinuous behavior. As models grow larger to capture more complex capabilities, their computational …
External link:
http://arxiv.org/abs/2412.12953
Multilingual language models (MLLMs) are crucial for handling text across various languages, yet they often show performance disparities due to differences in resource availability and linguistic characteristics. While the impact of pre-training data …
External link:
http://arxiv.org/abs/2412.12500
Humans distill complex experiences into fundamental abstractions that enable rapid learning and adaptation. Similarly, autoregressive transformers exhibit adaptive learning through in-context learning (ICL), which raises the question of how. In this paper, …
External link:
http://arxiv.org/abs/2412.12276
Author:
Shanmugam, Divya, Agrawal, Monica, Movva, Rajiv, Chen, Irene Y., Ghassemi, Marzyeh, Jacobs, Maia, Pierson, Emma
The increased capabilities of generative AI have dramatically expanded its possible use cases in medicine. We provide a comprehensive overview of generative AI use cases for clinicians, patients, clinical trial organizers, researchers, and trainees.
External link:
http://arxiv.org/abs/2412.10337
Human readers can accurately count how many letters are in a word (e.g., 7 in "buffalo"), remove a letter from a given position (e.g., "bufflo"), or add a new one. Readers' brains must therefore have learned to disentangle information … (a small sketch of these letter operations follows the link below)
External link:
http://arxiv.org/abs/2412.10446
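The letter manipulations mentioned above are trivial for a program but non-trivial for a reader's visual system; the short Python sketch below simply makes the three operations concrete (counting letters, removing one at a position, inserting one).

def count_letters(word: str) -> int:
    return len(word)

def remove_letter(word: str, pos: int) -> str:       # 0-indexed position
    return word[:pos] + word[pos + 1:]

def add_letter(word: str, pos: int, ch: str) -> str:
    return word[:pos] + ch + word[pos:]

assert count_letters("buffalo") == 7
assert remove_letter("buffalo", 4) == "bufflo"       # drop the 'a'
assert add_letter("bufflo", 4, "a") == "buffalo"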