Showing 1 - 10 of 486 for search: '"Wells, William M."'
Author:
Wang, Peiqi, Lam, Barbara D., Liu, Yingcheng, Asgari-Targhi, Ameneh, Panda, Rameswar, Wells, William M., Kapur, Tina, Golland, Polina
We present a novel approach to calibrating linguistic expressions of certainty, e.g., "Maybe" and "Likely". Unlike prior work that assigns a single score to each certainty phrase, we model uncertainty as distributions over the simplex to capture their…
External link:
http://arxiv.org/abs/2410.04315
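The idea of representing a certainty phrase as a distribution over the probability simplex, rather than a single score, can be illustrated with a Dirichlet distribution. The sketch below is purely illustrative: the phrase-to-parameter mapping and the three-way label set are assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical Dirichlet concentration parameters over a 3-simplex
# (labels: unlikely / uncertain / likely). Values are illustrative only.
PHRASE_ALPHAS = {
    "Maybe":  np.array([1.0, 4.0, 1.0]),   # mass centered on "uncertain"
    "Likely": np.array([1.0, 2.0, 7.0]),   # mass shifted toward "likely"
}

def phrase_mean(phrase: str) -> np.ndarray:
    """Mean of the Dirichlet distribution assigned to a certainty phrase.

    The mean is a point on the simplex (non-negative, sums to 1), but unlike
    a single score, the full distribution also captures how diffuse the
    phrase's meaning is via the total concentration alpha.sum().
    """
    alpha = PHRASE_ALPHAS[phrase]
    return alpha / alpha.sum()

print(phrase_mean("Likely"))  # a point on the simplex, summing to 1
```

Sampling `np.random.default_rng().dirichlet(alpha)` then yields plausible simplex points for a phrase, which is what distinguishes this view from a scalar calibration score.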
Author:
Fehrentz, Maximilian, Azampour, Mohammad Farid, Dorent, Reuben, Rasheed, Hassan, Galvin, Colin, Golby, Alexandra, Wells, William M., Frisken, Sarah, Navab, Nassir, Haouchine, Nazim
We present in this paper a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering. Our approach separates implicit neural representation into two components, handling anatomical structure pre…
External link:
http://arxiv.org/abs/2409.11983
Author:
Dorent, Reuben, Haouchine, Nazim, Kögl, Fryderyk, Joutard, Samuel, Juvekar, Parikshit, Torio, Erickson, Golby, Alexandra, Ourselin, Sebastien, Frisken, Sarah, Vercauteren, Tom, Kapur, Tina, Wells, William M.
We introduce MHVAE, a deep hierarchical variational auto-encoder (VAE) that synthesizes missing images from various modalities. Extending multi-modal VAEs with a hierarchical latent structure, we introduce a probabilistic formulation for fusing multi…
External link:
http://arxiv.org/abs/2309.08747
Author:
Wang, Peiqi, Liu, Yingcheng, Ko, Ching-Yun, Wells, William M., Berkowitz, Seth, Horng, Steven, Golland, Polina
Self-supervised representation learning on image-text data facilitates crucial medical applications, such as image classification, visual grounding, and cross-modal retrieval. One common approach involves contrasting semantically similar (positive) and…
External link:
http://arxiv.org/abs/2304.13181
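Contrastive image-text pretraining of the kind this abstract describes is commonly implemented with a symmetric InfoNCE-style loss over in-batch positives and negatives. The following NumPy sketch is a generic illustration under that assumption, not the authors' implementation; the temperature value is a conventional default, not taken from the paper.

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss for a batch of paired image/text embeddings.

    img_emb, txt_emb: (batch, dim) arrays; row i of each forms a positive
    pair, and all other rows in the batch serve as negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature       # (batch, batch) similarity matrix
    labels = np.arange(len(logits))          # diagonal entries are positives

    def xent(l):
        # Cross-entropy of the softmax over each row against the diagonal.
        l = l - l.max(axis=1, keepdims=True)             # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image->text and text->image directions.
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
# Matched pairs score a much lower loss than mismatched ones.
print(info_nce(emb, emb), info_nce(emb, emb[::-1]))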
Author:
Lucassen, Ruben T., Jafari, Mohammad H., Duggan, Nicole M., Jowkar, Nick, Mehrtash, Alireza, Fischetti, Chanel, Bernier, Denie, Prentice, Kira, Duhaime, Erik P., Jin, Mike, Abolmaesumi, Purang, Heslinga, Friso G., Veta, Mitko, Duran-Mendicuti, Maria A., Frisken, Sarah, Shyn, Paul B., Golby, Alexandra J., Boyer, Edward, Wells, William M., Goldsmith, Andrew J., Kapur, Tina
Lung ultrasound (LUS) is an important imaging modality used by emergency physicians to assess pulmonary congestion at the patient bedside. B-line artifacts in LUS videos are key findings associated with pulmonary congestion. Not only can the interpretation…
External link:
http://arxiv.org/abs/2302.07844
Image-text multimodal representation learning aligns data across modalities and enables important medical applications, e.g., image classification, visual grounding, and cross-modal retrieval. In this work, we establish a connection between multimodal…
External link:
http://arxiv.org/abs/2212.05561
Author:
Xue, Tengfei, Zhang, Fan, Zekelman, Leo R., Zhang, Chaoyi, Chen, Yuqian, Cetin-Karayumak, Suheyla, Pieper, Steve, Wells, William M., Rathi, Yogesh, Makris, Nikos, Cai, Weidong, O'Donnell, Lauren J.
Neuroimaging-based prediction of neurocognitive measures is valuable for studying how the brain's structure relates to cognitive function. However, the accuracy of prediction using popular linear regression models is relatively low. We propose a novel…
External link:
http://arxiv.org/abs/2210.07411
Author:
Young, Sean I., Balbastre, Yaël, Dalca, Adrian V., Wells, William M., Iglesias, Juan Eugenio, Fischl, Bruce
In recent years, learning-based image registration methods have gradually moved away from direct supervision with target warps to instead use self-supervision, with excellent results in several registration benchmarks. These approaches utilize a loss…
External link:
http://arxiv.org/abs/2205.07399
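Self-supervised registration losses of the kind this abstract refers to typically combine an image-similarity term with a smoothness penalty on the predicted displacement field, removing the need for ground-truth warps. The sketch below is a generic 1D illustration under those assumptions (MSE similarity, finite-difference regularizer); it is not this paper's loss.

```python
import numpy as np

def registration_loss(warped, fixed, disp, lam=0.1):
    """Generic self-supervised registration loss:
    image similarity (MSE) plus lam * smoothness of the displacement field."""
    similarity = np.mean((warped - fixed) ** 2)        # data term: how well the
                                                       # warped image matches the fixed one
    smoothness = np.mean(np.diff(disp, axis=0) ** 2)   # finite-difference penalty on
                                                       # displacement-field gradients
    return similarity + lam * smoothness

fixed = np.linspace(0.0, 1.0, 16)
# A perfect match under a constant (translation-only) displacement costs nothing.
print(registration_loss(fixed, fixed, np.full(16, 2.0)))  # → 0.0
```

The smoothness weight `lam` trades off alignment accuracy against deformation regularity; a constant displacement is free, while a jagged field is penalized even when the images match.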
We demonstrate an object tracking method for 3D images with fixed computational cost and state-of-the-art performance. Previous methods predicted transformation parameters from convolutional layers. We instead propose an architecture that does not in…
External link:
http://arxiv.org/abs/2103.10255
Author:
Liao, Ruizhi, Moyer, Daniel, Cha, Miriam, Quigley, Keegan, Berkowitz, Seth, Horng, Steven, Golland, Polina, Wells, William M.
Published in:
In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 273-283. Springer, Cham, 2021
We propose and demonstrate a representation learning approach by maximizing the mutual information between local features of images and text. The goal of this approach is to learn useful image representations by taking advantage of the rich information…
External link:
http://arxiv.org/abs/2103.04537