Showing 1 - 10 of 303 for search: '"Beam, Andrew"'
Causal inference is a critical task across fields such as healthcare, economics, and the social sciences. While recent advances in machine learning, especially those based on deep-learning architectures, have shown potential in estimating causal…
External link:
http://arxiv.org/abs/2410.10044
Author:
Hakim, Joe B, Painter, Jeffery L, Ramcharran, Darmendra, Kara, Vijay, Powell, Greg, Sobczak, Paulina, Sato, Chiho, Bate, Andrew, Beam, Andrew
Large language models (LLMs) are useful tools with the capacity for performing specific types of knowledge work at an effective scale. However, LLM deployments in high-risk and safety-critical domains pose unique challenges, notably the issue of "ha…
External link:
http://arxiv.org/abs/2407.18322
Author:
Hua, Yining, Liu, Fenglin, Yang, Kailai, Li, Zehan, Na, Hongbin, Sheu, Yi-han, Zhou, Peilin, Moran, Lauren V., Ananiadou, Sophia, Beam, Andrew, Torous, John
The integration of large language models (LLMs) in mental health care is an emerging field. There is a need to systematically review the application outcomes and delineate the advantages and limitations in clinical settings. This review aims to provi…
External link:
http://arxiv.org/abs/2401.02984
In this work, we introduce Labrador, a pre-trained Transformer model for laboratory data. Labrador and BERT were pre-trained on a corpus of 100 million lab test results from electronic health records (EHRs) and evaluated on various downstream outcome…
External link:
http://arxiv.org/abs/2312.11502
Author:
Kumar, Bhawesh, Lu, Charlie, Gupta, Gauri, Palepu, Anil, Bellamy, David, Raskar, Ramesh, Beam, Andrew
As large language models continue to be widely developed, robust uncertainty quantification techniques will become crucial for their safe deployment in high-stakes scenarios. In this work, we explore how conformal prediction can be used to provide un…
External link:
http://arxiv.org/abs/2305.18404
Author:
Palepu, Anil, Beam, Andrew L.
In this paper, we introduce a novel regularization scheme for contrastive language-image pre-trained (CLIP) medical vision models. Our approach is based on the observation that on many medical imaging tasks text tokens should only describe a small num…
External link:
http://arxiv.org/abs/2212.06710
Self-supervised models trained with a contrastive loss such as CLIP have been shown to be very powerful in zero-shot classification settings. However, to be used as a zero-shot classifier these models require the user to provide new captions over a fixed…
External link:
http://arxiv.org/abs/2210.15805
Author:
Palepu, Anil, Beam, Andrew L
Deep learning models trained in a fully supervised manner have been shown to rely on so-called "shortcut" features. Shortcut features are inputs that are associated with the outcome of interest in the training data, but are either no longer associate…
External link:
http://arxiv.org/abs/2206.07155
The No Unmeasured Confounding Assumption is widely used to identify causal effects in observational studies. Recent work on proximal inference has provided alternative identification results that succeed even in the presence of unobserved confounders…
External link:
http://arxiv.org/abs/2205.09824
Author:
Levine, David M †, Tuwani, Rudraksh †, Kompa, Benjamin, Varma, Amita, Finlayson, Samuel G, Mehrotra, Ateev, Beam, Andrew *
Published in:
In The Lancet Digital Health August 2024 6(8):e555-e561