Showing 1 - 10 of 599 for search: '"Wang, Alan P."'
Distribution shifts between sites can seriously degrade model performance since models are prone to exploiting unstable correlations. Thus, many methods try to find features that are stable across sites and discard unstable features. However, unstable …
External link:
http://arxiv.org/abs/2409.05996
Author:
Nguyen, Minh, Karaman, Batuhan K., Kim, Heejong, Wang, Alan Q., Liu, Fengbei, Sabuncu, Mert R.
Deep learning models can extract predictive and actionable information from complex inputs. The richer the inputs, the better these models usually perform. However, models that leverage rich inputs (e.g., multi-modality) can be difficult to deploy …
External link:
http://arxiv.org/abs/2405.20448
We present a keypoint-based foundation model for general purpose brain MRI registration, based on the recently-proposed KeyMorph framework. Our model, called BrainMorph, serves as a tool that supports multi-modal, pairwise, and scalable groupwise registration …
External link:
http://arxiv.org/abs/2405.14019
Published in:
In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23). Association for Computing Machinery, New York, NY, USA, 2053-2067
Speculative execution attacks undermine the security of constant-time programming, the standard technique used to prevent microarchitectural side channels in security-sensitive software such as cryptographic code. Constant-time code must therefore …
External link:
http://arxiv.org/abs/2312.09336
Healthcare data often come from multiple sites in which the correlations between confounding variables can vary widely. If deep learning models exploit these unstable correlations, they might fail catastrophically in unseen sites. Although many methods …
External link:
http://arxiv.org/abs/2310.15766
Author:
Wang, Alan Q., Karaman, Batuhan K., Kim, Heejong, Rosenthal, Jacob, Saluja, Rachit, Young, Sean I., Sabuncu, Mert R.
Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness in what interpretability means. Why does the need for interpretability in MLMI arise? What …
External link:
http://arxiv.org/abs/2310.01685
Machine learning models will often fail when deployed in an environment with a data distribution that is different than the training distribution. When multiple environments are available during training, many methods exist that learn representations …
External link:
http://arxiv.org/abs/2309.13377
Author:
Hsu, Wei-Che, Nujhat, Nabila, Kupp, Benjamin, Conley Jr, John F., Rong, Haisheng, Kumar, Ranjeet, Wang, Alan X.
Low driving voltage (Vpp), high-speed silicon microring modulator plays a critical role in energy-efficient optical interconnect and optical computing systems owing to its ultra-compact footprint and capability for on-chip wavelength-division multiplexing …
External link:
http://arxiv.org/abs/2308.16255
We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration often are not robust to large misalignments, are not interpretable …
External link:
http://arxiv.org/abs/2304.09941
As IoT devices become cheaper, smaller, and more ubiquitously deployed, they can reveal more information than their intended design and threaten user privacy. Indoor Environmental Quality (IEQ) sensors previously installed for energy savings and indoor …
External link:
http://arxiv.org/abs/2304.06477